Abstract
A system for managing virtual memory is provided. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping.
Claims

1. A computer-implemented method for accessing data in a virtual memory subsystem, the method comprising: issuing a page fault in response to a memory access request, wherein a local page table does not include an entry corresponding to a virtual memory address included in the memory access request; and in response to the page fault, executing a page fault sequence, the page fault sequence comprising: identifying an entry in a page state directory corresponding to a memory page associated with the virtual memory address; reading an ownership state associated with the memory page from the entry in the page state directory; and updating the local page table to include an entry that corresponds to the virtual memory address and associates the virtual memory address with the memory page.

2. The method of claim 1, wherein: the ownership state associated with the memory page indicates that the memory page belongs to a central processing unit (CPU) prior to executing the page fault sequence; executing the page fault sequence further includes modifying the ownership state associated with the memory page to CPU-shared; and the local page table comprises a parallel processing unit (PPU) page table.

3. The method of claim 2, wherein the memory page resides in system memory, and both the PPU page table and a CPU page table include an entry that associates the virtual memory address with the memory page.

4. The method of claim 1, wherein: the ownership state associated with the memory page indicates that the memory page belongs to a CPU prior to executing the page fault sequence; executing the page fault sequence further includes modifying the ownership state associated with the memory page to belonging to a PPU; and the local page table comprises a PPU page table.

5. The method of claim 4, wherein executing the page fault sequence further comprises migrating the memory page from system memory to a PPU memory.

6. The method of claim 1, wherein: the ownership state associated with the memory page indicates that the memory page belongs to a PPU prior to executing the page fault sequence; executing the page fault sequence further includes modifying the ownership state associated with the memory page to belonging to a CPU; and the local page table comprises a CPU page table.

7. The method of claim 6, wherein executing the page fault sequence further comprises migrating the memory page from a PPU memory to a system memory.

8. The method of claim 1, wherein: the ownership state associated with the memory page indicates that the memory page belongs to a PPU prior to executing the page fault sequence; executing the page fault sequence further includes modifying the ownership state associated with the memory page to CPU-shared; and the local page table comprises a CPU page table.

9. The method of claim 8, wherein executing the page fault sequence further comprises migrating the memory page from a PPU memory to a system memory.

10. A system for accessing data in a virtual memory subsystem, comprising: a local page table; and a first processing unit configured to: determine that the local page table does not include an entry corresponding to a virtual memory address; in response to determining that the local page table does not include the entry, issue a page fault; identify an entry in a page state directory corresponding to a memory page associated with the virtual memory address; read an ownership state associated with the memory page from the entry in the page state directory; and update the local page table to include an entry that corresponds to the virtual memory address and associates the virtual memory address with the memory page.
Description

Migration Scheme for a Unified Virtual Memory System

Cross-Reference to Related Applications

This application claims priority to United States provisional patent application serial number 61/782,349, filed on March 14, 2013. This application also claims priority to United States provisional patent application serial number 61/800,004, filed on March 15, 2013. The subject matter of these related applications is hereby incorporated herein by reference.

Technical Field

Embodiments of the present invention generally relate to virtual memory and, more particularly, to a migration scheme for a unified virtual memory system.

Background

Many modern computer systems typically implement some type of virtual memory architecture. Among other things, the virtual memory architecture enables instructions to access memory using virtual memory addresses rather than physical memory addresses. By providing this virtual memory layer between physical memory and application software, user-level software is shielded from the details of physical memory management, which is left to a dedicated memory management system.

A typical computer system that implements a virtual memory architecture includes a central processing unit (CPU) and one or more graphics processing units (GPUs). In operation, a software process executing on either a CPU or a GPU may request data via a virtual memory address. In many conventional architectures, the virtual memory systems that handle requests for data via virtual memory addresses are independent for the CPU and the GPU. More specifically, a separate CPU memory management system and a separate GPU memory management system handle requests for data from the CPU and the GPU, respectively.

There are several drawbacks associated with such independent memory management systems. For example, each independent memory management system does not necessarily have knowledge of the contents of the memory units associated with the other memory management systems. Therefore, the memory management systems cannot necessarily cooperate to provide certain efficiencies, such as determining where data should be stored for improved access latency. In addition, because the memory management systems are independent, pointers for one such system are not necessarily compatible with the other systems. Consequently, application programmers must keep track of two different kinds of pointers.

As the foregoing illustrates, what is needed in the art is a more efficient approach to managing virtual memory in a system having heterogeneous processors such as CPUs and GPUs.

Summary of the Invention

One embodiment of the present invention sets forth a system for managing virtual-memory-to-physical-memory mappings via a page state directory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory.
The first copy engine is also configured to update the first page table to include the first mapping.

One advantage of the disclosed approach is that user-level applications do not need to keep track of multiple pointers depending on where a particular piece of data is stored. An additional advantage is that memory pages can be migrated between memory units, which allows memory pages to be local to the units that access the memory pages most frequently. A further advantage is that a fault buffer is provided that allows faults generated by the PPU to be merged for efficient handling.

Brief Description of the Drawings

So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be understood, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope.

Figure 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the present invention;

Figure 2 is a block diagram illustrating a unified virtual memory system, according to one embodiment of the present invention;

Figure 3 is a schematic diagram of a system 300 for tracking the state of memory pages, according to one embodiment of the present invention;

Figure 4 is a schematic diagram of a system for implementing a migration operation, according to one embodiment of the present invention;

Figure 5 illustrates a virtual memory system for storing faults in a fault buffer, according to one embodiment of the present invention;

Figure 6 illustrates a virtual memory system for resolving page faults generated by a PPU, according to one embodiment of the present invention;

Figure 7 illustrates a flow diagram of method steps for managing virtual-memory-to-physical-memory mappings via a page state directory, according to one embodiment of the present invention;

Figure 8 illustrates a flow diagram of method steps for tracking page faults, according to one embodiment of the present invention;

Figure 9 illustrates a flow diagram of method steps for resolving a page fault with a fault buffer, according to one embodiment of the present invention;

Figure 10 illustrates a flow diagram of method steps for creating and managing a common pointer in a virtual memory architecture, according to one embodiment of the present invention;

Figure 11 illustrates a flow diagram of method steps for managing ownership state in a virtual memory subsystem, according to one embodiment of the present invention.

Detailed Description

In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.

System Overview

Figure 1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 communicating via an interconnection path that may include a memory bridge 105. Memory bridge 105, which may be, for example, a northbridge chip, is connected via a bus or other communication path 106 (e.g., a HyperTransport link) to an I/O (input/output) bridge 107. I/O bridge 107, which may be, for example, a southbridge chip, receives user input from one or more user input devices 108 (e.g., keyboard, mouse) and forwards the input to CPU 102 via communication path 106 and memory bridge 105.
A parallel processing subsystem 112 is coupled to memory bridge 105 via a bus or second communication path 113 (e.g., a Peripheral Component Interconnect (PCI) Express, Accelerated Graphics Port, or HyperTransport link); in one embodiment, parallel processing subsystem 112 is a graphics subsystem that delivers pixels to a display device 110, which may be any conventional cathode ray tube, liquid crystal display, light-emitting diode display, or the like. A system disk 114 is also connected to I/O bridge 107 and may be configured to store content and applications and data for use by CPU 102 and parallel processing subsystem 112. System disk 114 provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, and CD-ROM (compact disc read-only memory), DVD-ROM (digital versatile disc ROM), Blu-ray, HD-DVD (high definition DVD), or other magnetic, optical, or solid state storage devices.

A switch 116 provides connections between I/O bridge 107 and other components such as a network adapter 118 and various add-in cards 120 and 121. Other components (not explicitly shown), including universal serial bus (USB) or other port connections, compact disc (CD) drives, digital versatile disc (DVD) drives, film recording devices, and the like, may also be connected to I/O bridge 107. The various communication paths shown in Figure 1, including the specifically named communication paths 106 and 113, may be implemented using any suitable protocols, such as PCI Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol, and connections between different devices may use different protocols, as is known in the art.

In one embodiment, the parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes one or more parallel processing units (PPUs) 202. In another embodiment, the parallel processing subsystem 112 incorporates circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, the parallel processing subsystem 112 may be integrated with one or more other system elements in a single subsystem, such as joining the memory bridge 105, CPU 102, and I/O bridge 107 to form a system on chip (SoC). As is well known, many graphics processing units (GPUs) are designed to perform parallel operations and computations and are therefore considered a class of parallel processing unit (PPU).

Any number of PPUs 202 can be included in the parallel processing subsystem 112. For instance, multiple PPUs 202 can be provided on a single add-in card, or multiple add-in cards can be connected to communication path 113, or one or more of the PPUs 202 can be integrated into a bridge chip. The PPUs 202 in a multi-PPU system may be identical to or different from one another. For instance, different PPUs 202 might have different numbers of processing cores, different amounts of local parallel processing memory, and so forth. Where multiple PPUs 202 are present, those PPUs may be operated in parallel to process data at a higher throughput than is possible with a single PPU 202.
Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including desktop, laptop, or handheld personal computers, servers, workstations, game consoles, embedded systems, and the like.

PPU 202 advantageously implements a highly parallel processing architecture and includes a number of general processing clusters (GPCs). Each GPC is capable of executing a large number (e.g., hundreds or thousands) of threads concurrently, where each thread is an instance of a program. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program.

A GPC includes a number of streaming multiprocessors (SMs), where each SM is configured to process one or more thread groups. A series of instructions transmitted to a particular GPC constitutes a thread, as previously defined herein, and the collection of concurrently executing threads across the parallel processing engines within an SM is referred to herein as a "warp" or "thread group." As used herein, a "thread group" refers to a group of threads concurrently executing the same program on different input data, with one thread of the group being assigned to a different processing engine within an SM. Additionally, a plurality of related thread groups may be active (in different phases of execution) at the same time within an SM. This collection of thread groups is referred to herein as a "cooperative thread array" ("CTA") or "thread array."

In embodiments of the present invention, it may be desirable to use PPU 202 or other processor(s) of a computing system to execute general-purpose computations using thread arrays. Each thread in a thread array is assigned a unique thread identifier ("thread ID") that is accessible to the thread during the thread's execution. The thread ID, which can be defined as a one-dimensional or multi-dimensional numerical value, controls various aspects of the thread's processing behavior. For instance, a thread ID may be used to determine which portion of an input data set a thread is to process and/or to determine which portion of an output data set a thread is to produce or write.

In operation, CPU 102 is the master processor of computer system 100, controlling and coordinating operations of other system components. In particular, CPU 102 issues commands that control the operation of PPUs 202. In one embodiment, communication path 113 is a PCI Express link, in which dedicated lanes are allocated to each PPU 202, as is known in the art. Other communication paths may also be used. PPU 202 advantageously implements a highly parallel processing architecture, and PPU 202 may be provided with any amount of local parallel processing memory (PPU memory).

In some embodiments, system memory 104 includes a unified virtual memory (UVM) driver 101. The UVM driver 101 includes instructions for performing various tasks related to management of a unified virtual memory (UVM) system common to both the CPU 102 and the PPU 202.
Among other things, the architecture enables the CPU 102 and the PPU 202 to access a physical memory location using a common virtual memory address, regardless of whether the physical memory location is within the system memory 104 or memory local to the PPU 202.

It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as desired. For instance, in some embodiments, system memory 104 is connected to CPU 102 directly rather than through a bridge, and other devices communicate with system memory 104 via memory bridge 105 and CPU 102. In other alternative topologies, parallel processing subsystem 112 is connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 might be integrated into a single chip instead of existing as one or more discrete devices. Some embodiments may include two or more CPUs 102 and two or more parallel processing subsystems 112. The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported. In some embodiments, switch 116 is eliminated, and network adapter 118 and add-in cards 120, 121 connect directly to I/O bridge 107.

Unified Virtual Memory System Architecture

Figure 2 is a block diagram illustrating a unified virtual memory (UVM) system 200, according to one embodiment of the present invention. As shown, the unified virtual memory system 200 includes, without limitation, the CPU 102, the system memory 104, and the parallel processing unit (PPU) 202 coupled to a parallel processing unit memory (PPU memory) 204. The CPU 102 and the system memory 104 are coupled to each other and to the PPU 202 via the memory bridge 105.

The CPU 102 executes threads that may request data stored in the system memory 104 or the PPU memory 204 via a virtual memory address. Virtual memory addresses shield threads executing in the CPU 102 from knowledge about the internal workings of the memory system. Thus, a thread may only have knowledge of virtual memory addresses, and may access data by requesting data via a virtual memory address.

The CPU 102 includes a CPU MMU 209, which processes requests from the CPU 102 for translating virtual memory addresses to physical memory addresses. The physical memory addresses are required to access data stored in a physical memory unit such as the system memory 104 and the PPU memory 204. The CPU 102 includes a CPU fault handler 211, which executes steps in response to the CPU MMU 209 generating a page fault, to make requested data available to the CPU 102. The CPU fault handler 211 is generally software that resides in the system memory 104 and executes on the CPU 102, the software being invoked by an interrupt to the CPU 102.

The system memory 104 stores various memory pages (not shown) that include data for use by threads executing on the CPU 102 or the PPU 202. As shown, the system memory 104 stores a CPU page table 206, which includes mappings between virtual memory addresses and physical memory addresses. The system memory 104 also stores a page state directory 210, which serves as a "master page table" for the UVM system 200, as is discussed in greater detail below. The system memory 104 stores a fault buffer 216, which includes entries written by the PPU 202 in order to notify the CPU 102 of a page fault generated by the PPU 202.
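These structures are easiest to see side by side. The C sketch below shows one plausible layout for the bookkeeping that the system memory 104 holds; all type and field names (uvm_psd_entry, uvm_fault_entry, and so on) are hypothetical illustrations rather than the driver's actual definitions.

```c
#include <stdint.h>

/* Hypothetical ownership states for a memory page; the states are
 * discussed in detail later in this description. */
typedef enum {
    UVM_PAGE_CPU_OWNED,   /* "belonging to the CPU" */
    UVM_PAGE_PPU_OWNED,   /* "belonging to the PPU" */
    UVM_PAGE_CPU_SHARED,  /* "CPU-shared"           */
} uvm_ownership_t;

/* One page state directory (PSD) entry: a virtual-to-physical mapping
 * plus state information for the backing memory page. */
typedef struct {
    uint64_t        virt_addr;  /* all or part of the virtual address  */
    uint64_t        phys_addr;  /* all or part of the physical address */
    uvm_ownership_t ownership;  /* who may access without faulting     */
} uvm_psd_entry;

/* An entry the PPU writes into the fault buffer to notify the CPU of
 * a page fault generated by the PPU. */
typedef struct {
    uint64_t virt_addr;      /* faulting virtual address         */
    uint32_t address_space;  /* virtual address space identifier */
    uint32_t access_type;    /* read, write, or atomic           */
} uvm_fault_entry;

/* The bookkeeping kept in system memory 104, per the text: the CPU
 * page table 206, the page state directory 210 (the "master page
 * table"), and the fault buffer 216. */
typedef struct {
    void            *cpu_page_table;  /* CPU page table 206       */
    uvm_psd_entry   *psd;             /* page state directory 210 */
    uvm_fault_entry *fault_buffer;    /* fault buffer 216         */
} uvm_system_memory_layout;
```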
In some embodiments, the system memory 104 includes the unified virtual memory (UVM) driver 101, which includes instructions that, when executed, cause the CPU 102 to execute commands for, among other things, remedying a page fault. In alternative embodiments, any combination of the page state directory 210, the fault buffer 216, and one or more command queues 214 may be stored in the PPU memory 204. Further, a PPU page table 208 may be stored in the system memory 104.

In a similar manner as the CPU 102, the PPU 202 executes instructions that may request data stored in the system memory 104 or the PPU memory 204 via a virtual memory address. The PPU 202 includes a PPU MMU 213, which processes requests from the PPU 202 for translating virtual memory addresses to physical memory addresses. The PPU 202 also includes a copy engine 212, which executes commands stored in the command queue 214 for copying memory pages, modifying data in the PPU page table 208, and other commands. A PPU fault handler 215 executes steps in response to a page fault on the PPU 202. The PPU fault handler 215 can be software running on a processor or dedicated microcontroller in the PPU 202, or the PPU fault handler 215 can be software running on the CPU 102, with the latter being the preferred choice. In some embodiments, the CPU fault handler 211 and the PPU fault handler 215 can be a unified software program that is invoked by a fault on either the CPU 102 or the PPU 202. The command queue 214 may be in either the PPU memory 204 or the system memory 104, but is preferentially located in the system memory 104.

In some embodiments, the CPU fault handler 211 and the UVM driver 101 may be a unified software program. In such cases, the unified software program may be software that resides in the system memory 104 and executes on the CPU 102. The PPU fault handler 215 may be a separate software program running on a processor or dedicated microcontroller in the PPU 202, or the PPU fault handler 215 may be a separate software program running on the CPU 102.

In other embodiments, the PPU fault handler 215 and the UVM driver 101 may be a unified software program. In such cases, the unified software program may be software that resides in the system memory 104 and executes on the CPU 102. The CPU fault handler 211 may be a separate software program that resides in the system memory 104 and executes on the CPU 102.

In other embodiments, the CPU fault handler 211, the PPU fault handler 215, and the UVM driver 101 may be a unified software program. In such cases, the unified software program may be software that resides in the system memory 104 and executes on the CPU 102.

In some embodiments, as described above, the CPU fault handler 211, the PPU fault handler 215, and the UVM driver 101 may all reside in the system memory 104. As shown in Figure 2, the UVM driver 101 resides in the system memory 104, while the CPU fault handler 211 and the PPU fault handler 215 reside in the CPU 102.

The CPU fault handler 211 and the PPU fault handler 215 are responsive to hardware interrupts that may emanate from the CPU 102 or the PPU 202, such as interrupts resulting from a page fault. As further described below, the UVM driver 101 includes instructions for performing various tasks related to management of the UVM system 200, including, without limitation, remedying a page fault, and accessing the CPU page table 206, the page state directory 210, the command queue 214, and/or the fault buffer 216.
In some embodiments, the CPU page table 206 and the PPU page table 208 have different formats and contain different information; for example, the PPU page table 208 may include the following while the CPU page table 206 does not: atomic disable bits, compression tags, and memory swizzling type.

In a similar manner as the system memory 104, the PPU memory 204 stores various memory pages (not shown). As shown, the PPU memory 204 also includes the PPU page table 208, which includes mappings between virtual memory addresses and physical memory addresses. Alternatively, the PPU page table 208 may be stored in the system memory 104.

Page State Directory

Figure 3 is a schematic diagram of a system 300 for tracking the state of memory pages, according to one embodiment of the present invention. As shown, the system 300 includes the page state directory 210 coupled to both the CPU page table 206 and the PPU page table 208.

The page state directory 210 is a data structure that stores mappings associated with each memory page in a particular virtual memory address space. To obtain a physical address from the PSD 210, a requester provides a requested virtual address to the PSD 210, and the PSD 210 performs a lookup operation based on that virtual address. In some embodiments, the PSD 210 is able to keep track of memory pages of different sizes. To this end, the PSD 210 includes multiple arrays; for example, a first array manages CPU-sized pages and a second array manages PPU-sized pages.

In one embodiment, the page state directory 210 includes a multi-level table, although the page state directory 210 may be implemented in any technically feasible manner. Each non-leaf level includes an array of pointers to entries in the next level. A pointer may point to an entry in either the PPU memory 204 or the system memory 104.

Either the CPU 102 or the PPU 202 can update the PSD 210. Updates to PSD 210 pages in the system memory 104 can be made by using a compare-and-swap operation across the PCI-E bus. Updates to PSD 210 pages in the PPU memory 204 are made by placing update requests into a PSD update circular buffer stored in system memory. Agents check the circular buffer and apply pending updates before any read operation on the PSD 210.

As described below, multiple virtual memory address spaces may exist; for example, two different virtual memory address spaces may be assigned to two different processes executing on the CPU 102. Some processes may share an address space. A PSD 210 exists for each virtual memory address space.

Different PSDs 210 may each include a mapping to the same memory location in the system memory 104 or the PPU memory 204. In such cases, a single process can be identified as the owner of that memory location, and the PSD 210 corresponding to that single process is deemed the "owner PSD." The owner PSD includes the mapping to the memory location, while the PSDs 210 for all other processes that include the memory location contain links to the mapping in the owner PSD.

When a process associated with a particular PSD 210 no longer requires a particular mapping associated with a particular memory location, the process causes the mapping to be removed from the PSD 210 associated with that process, and the mapping is placed on a retired list. At this point, other PSDs 210 may still include mappings to the memory location. Those PSDs 210 continue to include the mapping until the processes associated with those PSDs 210 determine that the mapping is no longer needed. When no PSD 210 includes a mapping associated with the memory location, the mapping is removed from the retired list.
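Given the multi-level organization described above, a PSD lookup reduces to a short pointer walk. The following is a minimal sketch under assumed parameters (three levels, 512-way fan-out, 4kB leaf pages); the actual directory may use a different geometry, and the node layout here is invented for illustration.

```c
#include <stddef.h>
#include <stdint.h>

#define PSD_LEVELS       3  /* assumed depth of the directory   */
#define PSD_BITS_PER_LVL 9  /* assumed fan-out of 512 per level */

/* A non-leaf PSD node holds an array of pointers into the next level;
 * a pointer may reference a node in system memory or in PPU memory. */
typedef struct psd_node {
    void *slots[1 << PSD_BITS_PER_LVL];
} psd_node;

/* Walk the multi-level directory for a virtual address.  Returns the
 * leaf entry (mapping plus state information), or NULL if no mapping
 * is present, which is the condition that leads to a page fault
 * sequence. */
static void *psd_lookup(psd_node *root, uint64_t virt_addr)
{
    psd_node *node = root;
    for (int level = PSD_LEVELS - 1; level > 0; level--) {
        size_t index = (virt_addr >> (12 + level * PSD_BITS_PER_LVL))
                     & ((1u << PSD_BITS_PER_LVL) - 1);
        node = (psd_node *)node->slots[index];
        if (node == NULL)
            return NULL;  /* no mapping at this level */
    }
    size_t leaf = (virt_addr >> 12) & ((1u << PSD_BITS_PER_LVL) - 1);
    return node->slots[leaf];
}
```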
Entries in the PSD 210 include a mapping between a virtual memory address and a physical memory address, and also include state information for the memory page associated with the entry. The following list includes several exemplary states that can be included in a PSD entry in various embodiments of the invention:

"Exclusive" - a memory page may be deemed "exclusive," meaning that the memory page is not duplicated and is visible to either the PPU 202 or the CPU 102, but not both. As discussed below, the "exclusive" state is similar to the "belonging to the PPU" and "belonging to the CPU" states.

"Shared-uncached" - a memory page may be deemed "shared-uncached," meaning that the memory page is not duplicated but is visible to one or more PPUs 202 and/or one or more CPUs 102. The "shared-uncached" state is similar to the "CPU-shared" state discussed below, with the additional "uncached" qualifier meaning "not duplicated." A memory page that resides in more than one memory unit (e.g., in both the system memory 104 and the PPU memory 204) is "duplicated."

"Read-duplicated" - a memory page may be deemed "read-duplicated," meaning that more than one copy of the memory page exists, and that at least one of the copies is local to, and available only for reading by, the CPU 102 or the PPU 202.

"Migrating-read-only" - a memory page may be deemed "migrating-read-only," meaning that the memory page is in the process of being migrated. For example, the UVM system 200 may be in the process of migrating the memory page from the PPU memory 204 to the system memory 104. Because the memory page is deemed "migrating-read-only," the memory page can be read from, but not written to, while in this state.

"Migrating-invisible" - a memory page may be deemed "migrating-invisible," meaning that the memory page is in the process of being migrated, but the memory page is "invisible," meaning that no process can read from or write to the memory page.

"Peer-forwarding-entry" - a particular entry in a PSD 210 may be deemed a "peer-forwarding entry," meaning that the entry contains a link to a different PSD 210 entry that includes the mapping associated with the memory page.
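As a rough C sketch, the states enumerated above might be represented as follows; the enumerator names simply mirror the quoted labels, and the two predicates encode the stated rules that a migrating-read-only page may be read but not written, and that a migrating-invisible page may be neither read nor written.

```c
#include <stdbool.h>

/* Exemplary PSD entry states, mirroring the labels quoted above. */
typedef enum {
    PSD_EXCLUSIVE,            /* not duplicated; visible to PPU or CPU */
    PSD_SHARED_UNCACHED,      /* not duplicated; visible to PPUs/CPUs  */
    PSD_READ_DUPLICATED,      /* multiple read-only copies exist       */
    PSD_MIGRATING_READ_ONLY,  /* mid-migration; readable, not writable */
    PSD_MIGRATING_INVISIBLE,  /* mid-migration; no reads or writes     */
    PSD_PEER_FORWARDING,      /* links to the mapping in another PSD   */
} psd_state;

/* May a page in state s be read? */
static bool psd_can_read(psd_state s)
{
    return s != PSD_MIGRATING_INVISIBLE;
}

/* May a page in state s be written?  Read-duplicated copies are
 * available only for reading, so writes are excluded there too. */
static bool psd_can_write(psd_state s)
{
    return s != PSD_MIGRATING_READ_ONLY
        && s != PSD_MIGRATING_INVISIBLE
        && s != PSD_READ_DUPLICATED;
}
```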
The UVM system 200 may store memory pages in a backing store, such as hard disk drive space. The UVM driver 101 or the operating system keeps track of which memory pages are stored in the backing store. If a lookup operation performed on the PSD 210 indicates that a memory page is stored in the backing store, then the UVM driver 101 causes the memory page to be copied from the backing store to the system memory 104 or the PPU memory 204. After the memory page is copied from the backing store, the UVM driver 101 retries the PSD 210 lookup. The table below depicts exemplary PSD entries, where each row depicts a different exemplary entry.

Translating Virtual Memory Addresses

Referring back to Figure 2, when a thread executing in the CPU 102 requests data via a virtual memory address, the CPU 102 requests a translation of the virtual memory address to a physical memory address from the CPU memory management unit (CPU MMU) 209. In response, the CPU MMU 209 attempts to translate the virtual memory address into a physical memory address, which specifies a location in a memory unit, such as the system memory 104, that stores the data requested by the CPU 102.

To translate a virtual memory address to a physical memory address, the CPU MMU 209 performs a lookup operation to determine whether the CPU page table 206 includes a mapping associated with the virtual memory address. In addition to a virtual memory address, a request to access data may also indicate a virtual memory address space. The unified virtual memory system 200 may implement multiple virtual memory address spaces, each of which is assigned to one or more threads. Virtual memory addresses are unique within any given virtual memory address space. Further, virtual memory addresses within a given virtual memory address space are consistent across the CPU 102 and the PPU 202, thereby allowing the same virtual address to refer to the same data across the CPU 102 and the PPU 202. In some embodiments, two virtual memory addresses in the same virtual address space may refer to the same data, but generally may not map to the same physical memory address (e.g., the CPU 102 and the PPU 202 may each have a local read-only copy of the data).

For any given virtual memory address, the CPU page table 206 may or may not include a mapping between the virtual memory address and a physical memory address. If the CPU page table 206 includes a mapping, then the CPU MMU 209 reads that mapping to determine a physical memory address associated with the virtual memory address and provides that physical memory address to the CPU 102. However, if the CPU page table 206 does not include a mapping associated with the virtual memory address, then the CPU MMU 209 is unable to translate the virtual memory address into a physical memory address, and the CPU MMU 209 generates a page fault. To remedy a page fault and make the requested data available to the CPU 102, a "page fault sequence" is executed. More specifically, the CPU 102 reads the PSD 210 to find the current mapping state of the page, and then determines the appropriate page fault sequence. The page fault sequence generally maps the memory page associated with the requested virtual memory address or changes the types of accesses permitted (e.g., read access, write access, atomic access), unless a fatal fault has occurred. The different types of page fault sequences implemented in the UVM system 200 are discussed in greater detail below.

Within the UVM system 200, data associated with a given virtual memory address may be stored in the system memory 104, in the PPU memory 204, or in both the system memory 104 and the PPU memory 204 as read-only copies of the same data. Further, for any such data, either or both of the CPU page table 206 or the PPU page table 208 may include a mapping associated with that data. Notably, some data exists for which a mapping exists in one page table but not in the other. However, the PSD 210 includes all mappings stored in the PPU page table 208, as well as the PPU-relevant mappings stored in the CPU page table 206. The PSD 210 thus functions as a "master" page table for the unified virtual memory system 200. Therefore, when the CPU MMU 209 does not find a mapping in the CPU page table 206 associated with a particular virtual memory address, the CPU 102 reads the PSD 210 to determine whether the PSD 210 includes a mapping associated with that virtual memory address.
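The flow just described, which consults the local page table first and falls back to the PSD only on a miss, can be condensed into a few lines. This is an illustrative sketch with stubbed lookups, not driver code; cpu_page_table_lookup and run_page_fault_sequence are hypothetical names for the machinery described above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stub: consult CPU page table 206 for a mapping.  Always misses here
 * so that the fault path below is exercised. */
static bool cpu_page_table_lookup(uint64_t virt, uint64_t *phys)
{
    (void)virt;
    (void)phys;
    return false;
}

/* Stub: read the PSD 210, choose the appropriate page fault sequence
 * (mapping change, permission change, or migration), and update the
 * CPU page table 206. */
static void run_page_fault_sequence(uint64_t virt)
{
    printf("executing page fault sequence for 0x%llx\n",
           (unsigned long long)virt);
}

/* Translate a virtual memory address on the CPU side. */
static uint64_t cpu_translate(uint64_t virt)
{
    uint64_t phys = 0;
    if (cpu_page_table_lookup(virt, &phys))
        return phys;                     /* mapping present: done */
    run_page_fault_sequence(virt);       /* CPU MMU 209 faulted   */
    cpu_page_table_lookup(virt, &phys);  /* retry after remedy    */
    return phys;
}

int main(void)
{
    (void)cpu_translate(0x7f00aa000ull);
    return 0;
}
```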
Various embodiments of the PSD 210 may include, in addition to mappings associated with virtual memory addresses, different types of information associated with those virtual memory addresses.

When the CPU MMU 209 generates a page fault, the CPU fault handler 211 executes a sequence of operations for the appropriate page fault sequence to remedy the page fault. Again, during a page fault sequence, the CPU 102 reads the PSD 210 and executes additional operations in order to change the mappings or permissions within the CPU page table 206 and the PPU page table 208. Such operations may include reading and/or modifying the CPU page table 206, reading and/or modifying page state directory 210 entries, and/or migrating blocks of data referred to as "memory pages" between memory units (e.g., the system memory 104 and the PPU memory 204).

Figure 4 is a schematic diagram of a system 400 for implementing a migration operation, according to one embodiment of the present invention. As shown, the system 400 includes the page state directory 210, the system memory 104, and the PPU memory 204.

As explained above, the page state directory 210 stores PSD entries 401 that indicate all or part of a virtual memory address 402, all or part of a physical memory address 404, and state information 406. The PSD entry 401 therefore maps the virtual memory address 402 to the physical memory address 404.

In response to a page fault, the UVM driver 101 may determine that a memory page, such as memory page 408, should be migrated from one memory unit to another in order to resolve the page fault. For example, the UVM driver 101 may determine that memory page 408 should be migrated from the system memory 104 to the PPU memory 204. In response to this determination, the UVM driver 101 executes a series of operations, referred to herein as a page fault sequence, to cause the memory page 408 to be migrated. As part of this, the page fault sequence changes a portion of the page state directory entry 401 that is associated with the memory page 408. More specifically, the page fault sequence updates the physical memory address 404 to the physical location of the memory page 408 after the memory page has been migrated. However, the virtual memory address 402 in the page state directory entry 401 remains unchanged, which allows pointers in applications to remain constant and to refer to memory page 408 regardless of where memory page 408 is stored.
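The essential property of this migration is that only the physical half of the PSD entry changes. Below is a minimal sketch, with hypothetical names, of migrating a page while leaving its virtual address (and therefore every application pointer to it) intact.

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    uint64_t virt_addr;  /* constant across migrations         */
    uint64_t phys_addr;  /* updated to the page's new location */
    uint32_t state;      /* ownership and status information   */
} psd_entry;

/* Migrate a memory page between memory units (e.g., from system
 * memory 104 to PPU memory 204): copy the page body, then point the
 * PSD entry's physical address at the destination.  The virtual
 * address is deliberately untouched, so pointers in the application
 * keep working regardless of where the page now resides. */
static void migrate_page(psd_entry *e, void *src, void *dst,
                         size_t page_size, uint64_t dst_phys)
{
    memcpy(dst, src, page_size);  /* move the page's contents  */
    e->phys_addr = dst_phys;      /* remap to the new location */
}
```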
To determine which operations to execute in a page fault sequence, the CPU 102 identifies the memory page associated with the virtual memory address. The CPU 102 then reads state information for the memory page from the PSD 210 associated with the virtual memory address and associated with the memory access request that caused the page fault. Such state information may include, among other things, an ownership state for the memory page associated with the virtual memory address. For any given memory page, several ownership states are possible. For example, a memory page may be "belonging to the CPU," "belonging to the PPU," or "CPU-shared." A memory page is deemed to belong to the CPU if the CPU 102 can access the memory page via a virtual address and the PPU 202 cannot access the memory page via the same virtual address without causing a page fault. Preferably, a page belonging to the CPU resides in the system memory 104, but it may reside in the PPU memory 204. A memory page is deemed to belong to the PPU if the PPU 202 can access the page via a virtual address and the CPU 102 cannot access the memory page via the same virtual address without causing a page fault. Preferably, a page belonging to the PPU resides in the PPU memory 204, but it may reside in the system memory 104 while a migration from the system memory 104 to the PPU memory 204 is not yet complete, given the generally transient nature of PPU ownership. A memory page is deemed CPU-shared if the memory page is stored in the system memory 104 and a mapping to the memory page exists in the PPU page table 208 that allows the PPU 202 to access the memory page in the system memory 104 via a virtual memory address.

The UVM system 200 may assign ownership states to memory pages based on a variety of factors, including a usage history of the memory page, which may optionally be stored in the PSD 210 entry. The usage history may include information regarding whether the CPU 102 or the PPU 202 accessed the memory page recently, and how many times such accesses were made. For example, if the UVM system 200 determines, based on the usage history of a memory page, that the memory page is likely to be used mostly or only by the CPU 102, then the UVM system 200 may assign an ownership state of "belonging to the CPU" to the memory page and locate the page in the system memory 104. Similarly, if the UVM system 200 determines, based on the usage history of a memory page, that the memory page is likely to be used mostly or only by the PPU 202, then the UVM system 200 may assign an ownership state of "belonging to the PPU" to the memory page and locate the page in the PPU memory 204. Finally, if the UVM system 200 determines, based on the usage history of a memory page, that the memory page is likely to be used by both the CPU 102 and the PPU 202, and that repeatedly migrating the memory page between the system memory 104 and the PPU memory 204 would consume too much time, then the UVM system 200 may assign an ownership state of "CPU-shared" to the memory page.

As examples, the fault handlers 211 and 215 can implement any or all of the following heuristics for migration:

(a) on the CPU 102 accessing an unmapped page that is mapped to the PPU 202 and has not been recently migrated, unmap the faulting page from the PPU 202, migrate the page to the CPU 102, and map the page to the CPU 102;

(b) on the PPU 202 accessing an unmapped page that is mapped to the CPU 102 and has not been recently migrated, unmap the faulting page from the CPU 102, migrate the page to the PPU 202, and map the page to the PPU 202;

(c) on the CPU 102 accessing an unmapped page that is mapped to the PPU 202 and has been recently migrated, migrate the faulting page to the CPU 102 and map the page on both the CPU 102 and the PPU 202;

(d) on the PPU 202 accessing an unmapped page that is mapped on the CPU 102 and has been recently migrated, map the page to both the CPU 102 and the PPU 202;

(e) on the PPU 202 performing an atomic access to a page that is mapped to both the CPU 102 and the PPU 202 but not enabled for atomic operations by the PPU 202, unmap the page from the CPU 102, and map the page to the PPU 202 with atomic operations enabled;

(f) on the PPU 202 performing a write access to a page that is mapped on the CPU 102 and the PPU 202 as copy-on-write (COW), copy the page to the PPU 202, thereby making independent copies of the page, mapping the new page as read-write on the PPU 202 and leaving the current page as mapped on the CPU 102;
(g) on the PPU 202 performing a read access to a page that is mapped on the CPU 102 and the PPU 202 as zero-fill-on-demand (ZFOD), allocate a page of physical memory on the PPU 202 and fill it with zeros, map that page on the PPU 202, and change it to unmapped on the CPU 102;

(h) on a first PPU 202(1) accessing an unmapped page that is mapped on a second PPU 202(2) and has not been recently migrated, unmap the faulting page from the second PPU 202(2), migrate the page to the first PPU 202(1), and map the page to the first PPU 202(1);

(i) on the first PPU 202(1) accessing an unmapped page that is mapped on the second PPU 202(2) and has been recently migrated, map the faulting page to the first PPU 202(1) and keep the mapping of the page on the second PPU 202(2).

In sum, many heuristic rules are possible, and the scope of the present invention is not limited to these examples.

In addition, any migration heuristic can be "rounded up" to include more pages or a larger page size, for example:

(j) on the CPU 102 accessing an unmapped page that is mapped to the PPU 202 and has not been recently migrated, unmap the faulting page, plus additional pages that are adjacent to the faulting page in the virtual address space, from the PPU 202, migrate the pages to the CPU 102, and map the pages to the CPU 102 (in a more detailed example: for a 4 kB faulting page, migrate the aligned 64 kB region that includes the 4 kB faulting page);

(k) on the PPU 202 accessing an unmapped page that is mapped to the CPU 102 and has not been recently migrated, unmap the faulting page, plus additional pages that are adjacent to the faulting page in the virtual address space, from the CPU 102, migrate the pages to the PPU 202, and map the pages to the PPU 202 (in a more detailed example: for a 4 kB faulting page, migrate the aligned 64 kB region that includes the 4 kB faulting page);

(l) on the CPU 102 accessing an unmapped page that is mapped to the PPU 202 and has not been recently migrated, unmap the faulting page, plus additional pages that are adjacent to the faulting page in the virtual address space, from the PPU 202, migrate the pages to the CPU 102, map the pages to the CPU 102, and treat all the migrated pages as one or more larger pages on the CPU 102 (in a more detailed example: for a 4 kB faulting page, migrate the aligned 64 kB region that includes the 4 kB faulting page and treat the aligned 64 kB region as a 64 kB page);

(m) on the PPU 202 accessing an unmapped page that is mapped on the CPU 102 and has not been recently migrated, unmap the faulting page, plus additional pages that are adjacent to the faulting page in the virtual address space, from the CPU 102, migrate the pages to the PPU 202, map the pages to the PPU 202, and treat all the migrated pages as one or more larger pages on the PPU 202 (in a more detailed example: for a 4 kB faulting page, migrate the aligned 64 kB region that includes the 4 kB faulting page and treat the aligned 64 kB region as a 64 kB page);

(n) on the first PPU 202(1) accessing an unmapped page that is mapped to the second PPU 202(2) and has not been recently migrated, unmap the faulting page, plus additional pages that are adjacent to the faulting page in the virtual address space, from the second PPU 202(2), migrate the pages to the first PPU 202(1), and map the pages to the first PPU 202(1);

(o) on the first PPU 202(1) accessing an unmapped page that is mapped to the second PPU 202(2) and has been recently migrated, map the faulting page, plus additional pages that are adjacent to the faulting page in the virtual address space, to the first PPU 202(1), and keep the mapping of the page on the second PPU 202(2).

In sum, many heuristic rules that include such "rounding up" are possible, and the scope of the present invention is not limited to these examples.
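The "rounded up" variants above all migrate the aligned 64 kB region that contains a 4 kB faulting page, which is plain mask arithmetic. A small sketch of that computation, with the constants taken from the examples above:

```c
#include <stdint.h>

#define SMALL_PAGE_SIZE (4  * 1024)  /* 4 kB faulting page            */
#define REGION_SIZE     (64 * 1024)  /* aligned 64 kB migration region */

/* Base of the aligned 64 kB region containing the faulting address;
 * a rounded-up heuristic migrates this whole region as a unit. */
static uint64_t region_base(uint64_t fault_addr)
{
    return fault_addr & ~(uint64_t)(REGION_SIZE - 1);
}

/* Sixteen 4 kB pages travel together under such a heuristic. */
enum { PAGES_PER_REGION = REGION_SIZE / SMALL_PAGE_SIZE };
```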
In some embodiments, a PSD entry may include transitional state information to ensure proper synchronization between various requests made by units within the CPU 102 and the PPU 202. For example, a PSD 210 entry may include a transitional state indicating that a particular page is in the process of transitioning from belonging to the CPU to belonging to the PPU. Various units in the CPU 102 and the PPU 202, such as the CPU fault handler 211 and the PPU fault handler 215, upon determining that a page is in such a transitional state, may forgo portions of a page fault sequence to avoid repeating steps of a page fault sequence already triggered by a prior access to the same virtual memory address. As a specific example, if a page fault results in a page being migrated from the system memory 104 to the PPU memory 204, then a different page fault that would cause the same migration is detected and does not cause another page migration. Further, where there can be more than one writer to the PSD 210, various units in the CPU 102 and the PPU 202 may implement atomic operations for proper ordering of operations on the PSD 210. For example, for modifications to PSD 210 entries, the CPU fault handler 211 or the PPU fault handler 215 may issue an atomic compare-and-swap operation to modify the page state of a particular entry in the PSD 210. Consequently, the modification is done without interference by operations from other units.

Multiple PSDs 210 may be stored in the system memory 104, one for each virtual memory address space. A memory access request generated by either the CPU 102 or the PPU 202 may therefore include a virtual memory address and also identify the virtual memory address space associated with that virtual memory address.
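The interplay of transitional states and atomic compare-and-swap can be made concrete with C11 atomics. In this sketch, whichever fault handler wins the compare-and-swap performs the migration; a handler that loses observes the hypothetical transitional state already set and skips the duplicate work, which corresponds to the forgone steps described above.

```c
#include <stdatomic.h>
#include <stdbool.h>

enum {
    STATE_CPU_OWNED,             /* "belonging to the CPU"          */
    STATE_TRANSITIONING_TO_PPU,  /* hypothetical transitional state */
    STATE_PPU_OWNED,             /* "belonging to the PPU"          */
};

/* Try to begin a belonging-to-CPU -> belonging-to-PPU transition.
 * Returns true if this caller won the race and must migrate the page;
 * false if another fault handler already placed the page in the
 * transitional state, so this page fault sequence can skip the
 * redundant migration. */
static bool psd_begin_transition(_Atomic int *state)
{
    int expected = STATE_CPU_OWNED;
    return atomic_compare_exchange_strong(state, &expected,
                                          STATE_TRANSITIONING_TO_PPU);
}

/* Publish the final ownership state once the migration completes. */
static void psd_finish_transition(_Atomic int *state)
{
    atomic_store(state, STATE_PPU_OWNED);
}
```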
Additional details regarding the page state directory are provided above with respect to Figure 3.

Just as the CPU 102 may execute memory access requests that include virtual memory addresses (i.e., instructions that include requests to access data via a virtual memory address), the PPU 202 may also execute similar types of memory access requests. More specifically, the PPU 202 includes a plurality of execution units, such as GPCs and SMs, described above in conjunction with Figure 1, that are configured to execute multiple threads and thread groups. In operation, those threads may request data from memory (e.g., the system memory 104 or the PPU memory 204) by specifying a virtual memory address. Just as with the CPU 102 and the CPU MMU 209, the PPU 202 includes the PPU memory management unit (MMU) 213. The PPU MMU 213 receives requests for translation of virtual memory addresses from the PPU 202, and attempts to provide a translation from the PPU page table 208 for the virtual memory addresses. Similar to the CPU page table 206, the PPU page table 208 includes mappings between virtual memory addresses and physical memory addresses. Also as with the CPU page table 206, for any given virtual address, the PPU page table 208 may not include a page table entry that maps the virtual memory address to a physical memory address. As with the CPU MMU 209, when the PPU MMU 213 requests a translation for a virtual memory address from the PPU page table 208, and either no mapping exists in the PPU page table 208 or the type of access is not allowed by the PPU page table 208, the PPU MMU 213 generates a page fault. Subsequently, the PPU fault handler 215 triggers a page fault sequence. Again, the different types of page fault sequences implemented in the UVM system 200 are described in greater detail below.

As stated above, in response to receiving a request for translation of a virtual memory address, the CPU MMU 209 generates a page fault if the CPU page table 206 does not include a mapping associated with the requested virtual memory address or does not permit the type of access being requested. Similarly, in response to receiving a request for translation of a virtual memory address, the PPU MMU 213 generates a page fault if the PPU page table 208 does not include a mapping associated with the requested virtual memory address or does not permit the type of access being requested. When the CPU MMU 209 or the PPU MMU 213 generates a page fault, the thread that requested the data at the virtual memory address stalls, and a "local fault handler" (the CPU fault handler 211 for the CPU 102, or the PPU fault handler 215 for the PPU 202) attempts to remedy the page fault by executing a "page fault sequence." As indicated above, a page fault sequence includes a series of operations that enable the faulting unit (i.e., the unit, either the CPU 102 or the PPU 202, that caused the page fault) to access the data associated with the virtual memory address. After the page fault sequence completes, the thread that requested the data via the virtual memory address resumes execution. In some embodiments, fault recovery is simplified by allowing the fault recovery logic to track faulting memory accesses as opposed to faulting instructions.

The operations executed during a page fault sequence depend on the change in ownership state or change in access permissions, if any, that the memory page associated with the page fault must undergo. The transition from a current ownership state to a new ownership state, or a change in access permissions, may be part of the page fault sequence. In some instances, migrating the memory page associated with the page fault from the system memory 104 to the PPU memory 204 is also part of the page fault sequence. In other instances, migrating the memory page associated with the page fault from the PPU memory 204 to the system memory 104 is also part of the page fault sequence. Various heuristics, described more fully herein, may be used to configure the UVM system 200 to change memory page ownership state or to migrate memory pages under various sets of operating conditions and usage patterns. Described in greater detail below are the page fault sequences for the following four memory page ownership state transitions: belonging to the CPU to CPU-shared, belonging to the CPU to belonging to the PPU, belonging to the PPU to belonging to the CPU, and belonging to the PPU to CPU-shared.
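These four transitions amount to a dispatch on the faulting unit and the current ownership state read from the PSD. The sketch below is one plausible shape for that decision, with a boolean standing in for the usage-history heuristics that choose between exclusive ownership and sharing; it is illustrative, not the driver's actual logic.

```c
#include <stdbool.h>

typedef enum { FAULT_FROM_CPU, FAULT_FROM_PPU } fault_source;
typedef enum { OWNED_BY_CPU, OWNED_BY_PPU, CPU_SHARED } ownership;

typedef enum {
    SEQ_CPU_TO_SHARED,  /* belonging to CPU -> CPU-shared       */
    SEQ_CPU_TO_PPU,     /* belonging to CPU -> belonging to PPU */
    SEQ_PPU_TO_CPU,     /* belonging to PPU -> belonging to CPU */
    SEQ_PPU_TO_SHARED,  /* belonging to PPU -> CPU-shared       */
    SEQ_NO_TRANSITION,  /* e.g., only a permission change       */
} fault_sequence;

/* Select a page fault sequence from the faulting unit and the current
 * ownership state.  `wants_exclusive` stands in for the usage-history
 * heuristics that decide between exclusive ownership and sharing. */
static fault_sequence choose_sequence(fault_source who, ownership state,
                                      bool wants_exclusive)
{
    if (who == FAULT_FROM_PPU && state == OWNED_BY_CPU)
        return wants_exclusive ? SEQ_CPU_TO_PPU : SEQ_CPU_TO_SHARED;
    if (who == FAULT_FROM_CPU && state == OWNED_BY_PPU)
        return wants_exclusive ? SEQ_PPU_TO_CPU : SEQ_PPU_TO_SHARED;
    return SEQ_NO_TRANSITION;
}
```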
A fault by the PPU 202 can initiate a transition from belonging to the CPU to CPU-shared. Before such a transition, a thread executing in the PPU 202 attempts to access data at a virtual memory address that is not mapped in the PPU page table 208. This access attempt causes a PPU-based page fault, which in turn causes a fault buffer entry to be written to the fault buffer 216. In response, the PPU fault handler 215 reads the PSD 210 entry corresponding to the virtual memory address and identifies the memory page associated with the virtual memory address. After reading the PSD 210, the PPU fault handler 215 determines that the current ownership state for the memory page associated with the virtual memory address is belonging to the CPU. Based on the current ownership state, as well as on other factors such as usage characteristics for the memory page or the type of memory access, the PPU fault handler 215 determines that a new ownership state for the page should be CPU-shared.

To change the ownership state, the PPU fault handler 215 writes a new entry into the PPU page table 208 corresponding to the virtual memory address and associating the virtual memory address with the memory page identified via the PSD 210 entry. The PPU fault handler 215 also modifies the PSD 210 entry for that memory page to indicate that the ownership state is CPU-shared. In some embodiments, an entry in a translation look-aside buffer (TLB) in the PPU 202 is invalidated to account for the case where the translation to the now-invalid page is cached. At this point, the page fault sequence is complete. The ownership state for the memory page is CPU-shared, meaning that the memory page is accessible to both the CPU 102 and the PPU 202. Both the CPU page table 206 and the PPU page table 208 include entries that associate the virtual memory address with the memory page.

A fault by the PPU 202 can also initiate a transition from belonging to the CPU to belonging to the PPU. Before such a transition, an operation executing in the PPU 202 attempts to access memory at a virtual memory address that is not mapped in the PPU page table 208. This memory access attempt causes a PPU-based page fault, which in turn causes a fault buffer entry to be written to the fault buffer 216. In response, the PPU fault handler 215 reads the PSD 210 entry corresponding to the virtual memory address and identifies the memory page associated with the virtual memory address. After reading the PSD 210, the PPU fault handler 215 determines that the current ownership state for the memory page associated with the virtual memory address is belonging to the CPU.
Based on the current ownership state, as well as on other factors such as usage characteristics for the page or the type of memory access, the PPU fault handler 215 determines that a new ownership state for the page should be belonging to the PPU.

The PPU 202 writes a fault buffer entry into the fault buffer 216 that indicates that the PPU 202 generated a page fault, and that indicates the virtual memory address associated with the page fault. The PPU fault handler 215 executing on the CPU 102 reads the fault buffer entry and, in response, the CPU 102 removes the mapping in the CPU page table 206 associated with the virtual memory address that caused the page fault. The CPU 102 may flush caches before and/or after the mapping is removed. The CPU 102 also writes a command into the command queue 214 instructing the PPU 202 to copy the page from the system memory 104 into the PPU memory 204. The copy engine 212 in the PPU 202 reads the command in the command queue 214 and copies the page from the system memory 104 into the PPU memory 204. The PPU 202 writes a page table entry into the PPU page table 208 corresponding to the virtual memory address and associating the virtual memory address with the newly copied memory page in the PPU memory 204. The write to the PPU page table 208 may be done via the copy engine 212; alternatively, the CPU 102 may update the PPU page table 208. The PPU fault handler 215 also modifies the PSD 210 entry for the memory page to indicate that the ownership state is belonging to the PPU. In some embodiments, entries in TLBs in the PPU 202 or the CPU 102 may be invalidated, to account for the case where the translation was cached. At this point, the page fault sequence is complete. The ownership state for the memory page is belonging to the PPU, meaning that the memory page is accessible only to the PPU 202. Only the PPU page table 208 includes an entry that associates the virtual memory address with the memory page.

A fault by the CPU 102 can initiate a transition from belonging to the PPU to belonging to the CPU. Before such a transition, an operation executing in the CPU 102 attempts to access memory at a virtual memory address that is not mapped in the CPU page table 206, which causes a CPU-based page fault. The CPU fault handler 211 reads the PSD 210 entry corresponding to the virtual memory address and identifies the memory page associated with the virtual memory address. After reading the PSD 210, the CPU fault handler 211 determines that the current ownership state for the memory page associated with the virtual memory address is belonging to the PPU. Based on the current ownership state, as well as on other factors such as usage characteristics for the page or the type of access, the CPU fault handler 211 determines that a new ownership state for the page should be belonging to the CPU.

The CPU fault handler 211 changes the ownership state associated with the memory page to belonging to the CPU. The CPU fault handler 211 writes a command into the command queue 214 to cause the copy engine 212 to remove the entry from the PPU page table 208 that associates the virtual memory address with the memory page. Various TLB entries may be invalidated. The CPU fault handler 211 also copies the memory page from the PPU memory 204 into the system memory 104, which may be done via the command queue 214 and the copy engine 212. The CPU fault handler 211 writes a page table entry into the CPU page table 206 that associates the virtual memory address with the memory page copied into the system memory 104. The CPU fault handler 211 also updates the PSD 210 to associate the virtual memory address with the newly copied memory page. At this point, the page fault sequence is complete. The ownership state for the memory page is belonging to the CPU, meaning that the memory page is accessible only to the CPU 102. Only the CPU page table 206 includes an entry that associates the virtual memory address with the memory page.
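The command-queue choreography in these sequences can be condensed into a sketch. The command encoding and helper names below are invented for illustration; what matters is the ordering described above: unmap on the source processor, let the copy engine migrate the page and write the destination page table, then publish the new ownership state in the PSD.

```c
#include <stdint.h>

typedef enum { CMD_COPY_PAGE, CMD_WRITE_PTE } cmd_op;

typedef struct {
    cmd_op   op;
    uint64_t virt_addr;
    uint64_t src_phys;
    uint64_t dst_phys;
} uvm_command;

enum { PSD_CPU_OWNED, PSD_PPU_OWNED };

/* Stubs for the primitives named in the text. */
static void cpu_page_table_remove(uint64_t virt) { (void)virt; }
static void command_queue_push(uvm_command c)    { (void)c; }
static void psd_set_state(uint64_t virt, int s)  { (void)virt; (void)s; }

/* Belonging-to-CPU -> belonging-to-PPU page fault sequence: unmap on
 * the CPU, have the copy engine copy the page from system memory into
 * PPU memory and write the PPU page table entry, then mark the page
 * as belonging to the PPU in the PSD. */
static void cpu_to_ppu_sequence(uint64_t virt, uint64_t sys_phys,
                                uint64_t ppu_phys)
{
    cpu_page_table_remove(virt);  /* CPU mapping removed, caches flushed */

    uvm_command copy = { CMD_COPY_PAGE, virt, sys_phys, ppu_phys };
    command_queue_push(copy);     /* copy engine 212: sysmem -> PPU mem */

    uvm_command map = { CMD_WRITE_PTE, virt, 0, ppu_phys };
    command_queue_push(map);      /* copy engine writes PPU page table 208 */

    psd_set_state(virt, PSD_PPU_OWNED);  /* publish the new ownership */
}
```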
The CPU fault handler 211 also updates the PSD 210 to associate the virtual memory address with the newly copied memory page. At this point, the page fault sequence is complete. The ownership status for the memory page is CPU-owned, meaning that the memory page is accessible only to the CPU 102. Only the CPU page table 206 includes an entry that associates the virtual memory address with the memory page.

A fault on the CPU 102 can initiate a transition from PPU-owned to CPU-shared. Prior to such a transition, an operation executing on the CPU 102 attempts to access memory at a virtual memory address that is not mapped in the CPU page table 206, which causes a CPU-based page fault. The CPU fault handler 211 reads the PSD 210 entry corresponding to the virtual memory address and identifies the memory page associated with the virtual memory address. After reading the PSD 210, the CPU fault handler 211 determines that the current ownership status of the memory page associated with the virtual memory address is PPU-owned. Based on the current ownership status and other factors, such as the usage characteristics of the page or the type of access, the CPU fault handler 211 determines that the new ownership status for the memory page should be CPU-shared.

The CPU fault handler 211 changes the ownership status associated with the memory page to CPU-shared. The CPU fault handler 211 writes a command into the command queue 214 to cause the copy engine 212 to remove the entry in the PPU page table 208 that associates the virtual memory address with the memory page. Various TLB entries can be invalidated. The CPU fault handler 211 also copies the memory page from the PPU memory 204 into the system memory 104. This copy operation can be done via the command queue 214 and the copy engine 212. The CPU fault handler 211 then writes a command into the command queue 214 to cause the copy engine 212 to change the entry in the PPU page table 208 to associate the virtual memory address with the memory page in the system memory 104. Various TLB entries can be invalidated. The CPU fault handler 211 writes a page table entry into the CPU page table 206 to associate the virtual memory address with the memory page in the system memory 104. The CPU fault handler 211 also updates the PSD 210 to associate the virtual memory address with the memory page in the system memory 104. At this point, the page fault sequence is complete. The ownership status for the page is CPU-shared, and the memory page has been copied into the system memory 104. The page is accessible to the CPU 102 because the CPU page table 206 includes an entry that associates the virtual memory address with the memory page in the system memory 104. The page is also accessible to the PPU 202 because the PPU page table 208 includes an entry that associates the virtual memory address with the memory page in the system memory 104.
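The four transitions above share a common shape: read the PSD entry, decide a new ownership state, move the page if needed, and update the page tables. The following C sketch models the PPU-owned to CPU-shared case as a worked example; every type and function name here (psd_entry, migrate_to_sysmem, and so on) is an invented illustration, not part of the UVM system 200 as described.

#include <stdint.h>
#include <stdio.h>

typedef enum { CPU_OWNED, PPU_OWNED, CPU_SHARED } ownership_t;

typedef struct {
    uint64_t    frame;   /* physical page backing the virtual address */
    ownership_t state;   /* ownership status held in the PSD entry    */
} psd_entry;

/* Stubs standing in for the mechanisms described above (command queue,
 * copy engine, TLB invalidation); all names and signatures are invented. */
static void unmap_ppu_page(uint64_t va)           { (void)va; }
static void invalidate_tlbs(uint64_t va)          { (void)va; }
static uint64_t migrate_to_sysmem(uint64_t frame)
{
    return frame;  /* stub: a real implementation returns the sysmem frame */
}
static void map_ppu_page(uint64_t va, uint64_t f) { (void)va; (void)f; }
static void map_cpu_page(uint64_t va, uint64_t f) { (void)va; (void)f; }

/* A CPU fault on a PPU-owned page, resolved as the PPU-owned to CPU-shared
 * transition: unmap on the PPU, migrate to system memory, map for both. */
static void cpu_fault_to_cpu_shared(psd_entry *e, uint64_t va)
{
    if (e->state != PPU_OWNED)
        return;                     /* a different transition applies    */

    unmap_ppu_page(va);             /* via command queue and copy engine */
    invalidate_tlbs(va);
    e->frame = migrate_to_sysmem(e->frame);
    map_ppu_page(va, e->frame);     /* PPU now maps the sysmem copy      */
    map_cpu_page(va, e->frame);
    e->state = CPU_SHARED;          /* both processors may access it     */
}

int main(void)
{
    psd_entry e = { 0x1000u, PPU_OWNED };
    cpu_fault_to_cpu_shared(&e, 0x7f0000000000ull);
    printf("final state: %d\n", (int)e.state);
    return 0;
}

The other three transitions would differ only in the direction of the migration and in which page table entries are created or removed.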
Detailed example of a page fault sequence

Using this context, a detailed description of the page fault sequence performed by the PPU fault handler 215 for a transition from CPU-owned to CPU-shared is now provided, to show how atomic operations and transition states can be used to manage the page fault sequence more efficiently. The page fault sequence is triggered when a thread on the PPU 202 (specifically, a user-level thread) attempts to access data via a virtual memory address and requests a translation from the PPU page table 208. In response, a PPU page fault occurs, because the PPU page table 208 does not include a mapping associated with the requested virtual memory address.

After the page fault occurs, the thread traps and stalls, and the PPU fault handler 215 performs the page fault sequence. The PPU fault handler 215 reads the PSD 210 to determine which memory page is associated with the virtual memory address and to determine the state of the virtual memory address. The PPU fault handler 215 determines from the PSD 210 that the ownership status for the memory page is CPU-owned. Therefore, the data requested by the PPU 202 is inaccessible to the PPU 202 via the virtual memory address. The state information for the memory page also indicates that the requested data cannot be migrated to the PPU memory 204.

Based on the state information obtained from the PSD 210, the PPU fault handler 215 determines that the new state for the memory page should be CPU-shared. The PPU fault handler 215 changes the state to "transitioning to CPU-shared." This state indicates that the page is currently in the process of being converted to CPU-shared. Because the PPU fault handler 215 may run on a microcontroller in the memory management unit, two processors can update the PSD 210 asynchronously; an atomic compare-and-swap ("CAS") operation on the PSD 210 is therefore used to change the state to "transitioning to GPU-visible" (CPU-shared).

The PPU 202 updates the PPU page table 208 to associate the virtual address with the memory page. The PPU 202 also invalidates the corresponding TLB cache entries. Next, the PPU 202 performs another atomic compare-and-swap operation on the PSD 210 to change the ownership state associated with the memory page to CPU-shared. Finally, the page fault sequence ends, and the thread resumes execution, requesting the data via the virtual memory address.
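The atomic hand-off in this sequence can be made concrete with C11 atomics. This is a minimal sketch, assuming atomic_compare_exchange_strong stands in for whatever CAS primitive the hardware provides; psd_state and the state names are illustrative, not the actual PSD 210 layout.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

enum { ST_CPU_OWNED, ST_TRANSITIONING_TO_CPU_SHARED, ST_CPU_SHARED };

/* The ownership word of one PSD entry, which the CPU and a microcontroller
 * running the PPU fault handler may update asynchronously. */
typedef struct { _Atomic int state; } psd_state;

/* Returns true if this handler won the race to start the transition. */
static bool begin_transition(psd_state *s)
{
    int expected = ST_CPU_OWNED;
    return atomic_compare_exchange_strong(&s->state, &expected,
                                          ST_TRANSITIONING_TO_CPU_SHARED);
}

/* Publish the final state with a second CAS once the page tables and TLBs
 * have been updated. */
static void finish_transition(psd_state *s)
{
    int expected = ST_TRANSITIONING_TO_CPU_SHARED;
    atomic_compare_exchange_strong(&s->state, &expected, ST_CPU_SHARED);
}

int main(void)
{
    psd_state s = { ST_CPU_OWNED };
    if (begin_transition(&s)) {
        /* ... update the PPU page table 208, invalidate TLB entries ... */
        finish_transition(&s);
    }
    printf("final state: %d\n", atomic_load(&s.state));
    return 0;
}

A handler that loses the race (its CAS fails because the other processor has already begun the transition) can wait for the final state or retry the fault, which is what makes the intermediate "transitioning" state useful.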
Fault buffer

Resolving a page fault generated by the CPU 102 does not involve the fault buffer 216. Resolving a page fault generated by the PPU MMU 213, however, does involve the fault buffer 216. The role of the fault buffer 216 in resolving page faults generated by the PPU MMU 213 is described in more detail below with respect to Figures 5 and 6.

Figure 5 illustrates a virtual memory system 500 for storing faults in a fault buffer, in accordance with one embodiment of the present invention. As shown, the virtual memory system 500 includes the PPU fault handler 215, the fault buffer 216, and the PPU 202, which includes a plurality of stream multiprocessors 504.

The fault buffer 216 stores fault buffer entries 502 that indicate information related to page faults generated by the PPU 202. A fault buffer entry 502 may include, for example, the type of access that was attempted (e.g., read, write, or atomic), the virtual memory address for which the attempted access caused the page fault, the virtual address space, and an indication of the unit or thread that caused the page fault. In operation, when the PPU 202 causes a page fault, the PPU 202 can write a fault buffer entry 502 into the fault buffer 216 to inform the PPU fault handler 215 about the faulting memory page and the type of access that caused the fault. The PPU fault handler 215 then performs actions to remedy the page fault. The fault buffer 216 can store multiple faults, because the PPU 202 executes multiple threads, each of which can cause one or more faults due to the pipelined nature of the memory accesses of the PPU 202. Each of the fault buffer entries 502 can be generated by one or more of the stream multiprocessors 504 included within the PPU 202.

Figure 6 illustrates a virtual memory system 600 for resolving page faults generated by the PPU 202, in accordance with one embodiment of the present invention. As shown, the virtual memory system 600 includes the PPU fault handler 215, the fault buffer 216, the system memory 104 including the command queue 214, and the PPU 202 including the copy engine 212.

The PPU fault handler 215 reads the fault buffer entries 502 stored in the fault buffer 216 to determine how to resolve the page faults associated with those entries. To resolve a page fault, the PPU fault handler 215 performs a page fault sequence to modify the PSD entry associated with the memory page corresponding to the fault buffer entry 502 and/or to migrate that memory page. During a page fault sequence, the CPU 102 or the PPU 202 may write commands into the command queue 214 for execution by the copy engine 212. Such an approach frees the CPU 102 or the PPU 202 to perform other tasks while the copy engine 212 reads and executes the commands stored in the command queue 214, and allows all commands for the fault sequence to be queued at once, thereby avoiding the need to monitor the progress of the fault sequence. Among other operations, the commands executed by the copy engine 212 may include deleting, creating, or modifying page table entries in the PPU page table 208, reading or writing data from the system memory 104, and reading or writing data to the PPU memory 204.

The CPU 102 and the PPU 202 can context-switch independently of one another. That is, in response to detecting a fault, the PPU 202 can write a fault buffer entry into the fault buffer 216, and that fault buffer entry may not be resolved immediately by the PPU fault handler 215 on the CPU. Instead, the CPU 102 can perform other processing tasks and handle the PPU fault later. Therefore, the CPU 102 and the PPU 202 do not necessarily operate in the same context at the same time. In other words, the CPU 102 may be executing a process different from the one that spawned the work currently executing on the PPU 202. In order to inform the PPU fault handler 215 which process is associated with the PPU 202 operation that generated the fault buffer entry 502, the PPU 202 includes an instance pointer in the fault buffer entry, informing the CPU 102 of the address space in which the PPU 202 caused the fault. The fault buffer 216 can include a number of page fault entries associated with the same memory page, because multiple stream multiprocessors 504 run in parallel and can generate page faults directed to the same memory page. The PPU fault handler 215 checks the fault buffer 216 to determine which faults still need to be resolved.
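A sketch of how such entries might look, and of how duplicate faults against the same page can be coalesced, follows; the struct layout, field names, and drain_fault_buffer are illustrative assumptions, not the documented entry format.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef enum { ACCESS_READ, ACCESS_WRITE, ACCESS_ATOMIC } access_t;

/* A possible layout for a fault buffer entry 502. */
typedef struct {
    uint64_t va;            /* virtual memory address that faulted    */
    uint64_t instance_ptr;  /* identifies the faulting address space  */
    access_t access;        /* read, write, or atomic                 */
    uint16_t sm_id;         /* stream multiprocessor that faulted     */
    bool     resolved;      /* set once a fault sequence has run      */
} fault_entry;

/* Stub for the page fault sequence run for one page. */
static void run_page_fault_sequence(uint64_t va, access_t access)
{
    (void)va; (void)access;
}

/* Drain the buffer, resolving each distinct page once; duplicate entries
 * from other SMs that target the same page are marked resolved together. */
void drain_fault_buffer(fault_entry *buf, size_t n, uint64_t page_mask)
{
    for (size_t i = 0; i < n; i++) {
        if (buf[i].resolved)
            continue;
        run_page_fault_sequence(buf[i].va, buf[i].access);
        for (size_t j = i; j < n; j++)
            if ((buf[j].va & page_mask) == (buf[i].va & page_mask))
                buf[j].resolved = true;
    }
}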
UVM system architecture changes

Various modifications to the unified virtual memory system 200 are possible. For example, in some embodiments, after writing a fault buffer entry into the fault buffer 216, the PPU 202 can trigger a CPU interrupt to cause the CPU 102 to read the fault buffer entry in the fault buffer 216 and to take any appropriate action in response. In other embodiments, the CPU 102 may periodically poll the fault buffer 216. When the CPU 102 finds a fault buffer entry in the fault buffer 216, the CPU 102 performs a series of operations in response to the fault buffer entry.

In some embodiments, the system memory 104, rather than the PPU memory 204, stores the PPU page table 208. In other embodiments, a single-level or multi-level cache hierarchy, such as a single-level or multi-level translation look-aside buffer (TLB) hierarchy (not shown), may be implemented to cache virtual address translations for the CPU page table 206 or the PPU page table 208.

In still other embodiments, the PPU 202 may take one or more actions in the event that a thread executing on the PPU 202 causes a PPU fault (a "faulting thread"). These actions include: stalling the entire PPU 202, stalling the SM executing the faulting thread, stalling the PPU MMU 213, stalling only the faulting thread, stalling the thread group that includes the faulting thread, or stalling one or more levels of the TLB hierarchy. In some embodiments, after a PPU page fault occurs and the page fault sequence has been executed by the unified virtual memory system 200, execution of the faulting thread resumes, and the faulting thread re-attempts the memory access request that caused the page fault. In some embodiments, stalling at the TLB is performed in such a way that it appears to the faulting SM or faulting thread as a long-latency memory access, so that the SM is not required to perform any special operations for the fault.

Finally, in other alternative embodiments, the UVM driver 101 can include instructions that cause the CPU 102 to perform one or more operations for managing the UVM system 200 and remediating page faults, such as accessing the CPU page table 206, the PSD 210, and/or the fault buffer 216. In other embodiments, an operating system kernel (not shown) may be configured to manage the UVM system 200 and remediate page faults by accessing the CPU page table 206, the PSD 210, and/or the fault buffer 216. In still other embodiments, the operating system kernel can operate in conjunction with the UVM driver 101 to manage the UVM system 200 and remediate page faults by accessing the CPU page table 206, the PSD 210, and/or the fault buffer 216.

Figure 7 illustrates a flow diagram of method steps for managing virtual-memory-to-physical-memory mappings via a page state directory, in accordance with one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-6, those of ordinary skill in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the present invention.

As shown, method 700 begins at step 702, where the PPU 202 executes a first operation that references a first virtual memory address. In step 704, the PPU MMU 213 reads the PPU page table 208 and determines that the PPU page table 208 does not include a mapping associated with the first virtual memory address. Once the PPU MMU 213 makes this determination, a first page fault is generated. In step 706, after the PPU fault handler 215 resolves the page fault and places commands in the command queue 214, the copy engine 212 in the PPU 202 reads the command queue 214 to determine the mapping corresponding to the first virtual memory address. In step 708, the copy engine 212 updates the PPU page table 208 to include the mapping.
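The division of labor in method 700 — the fault handler queues commands, the copy engine executes them — can be sketched as a simple ring-buffer consumer. The command set and names below are invented for illustration, not the actual command queue 214 encoding.

#include <stdint.h>

typedef enum { CMD_COPY_PAGE, CMD_WRITE_PTE, CMD_REMOVE_PTE } cmd_op;

typedef struct {
    cmd_op   op;
    uint64_t src;  /* source physical address (copies)      */
    uint64_t dst;  /* destination physical address or frame */
    uint64_t va;   /* virtual address (page table updates)  */
} ce_cmd;

/* Stubs for the copy engine's primitive operations. */
static void copy_page(uint64_t src, uint64_t dst) { (void)src; (void)dst; }
static void write_pte(uint64_t va, uint64_t f)    { (void)va; (void)f; }
static void remove_pte(uint64_t va)               { (void)va; }

/* Consumer loop: the copy engine drains the ring so that the CPU or PPU can
 * queue an entire fault sequence at once and move on to other work. */
void copy_engine_run(ce_cmd *ring, unsigned head, unsigned tail, unsigned cap)
{
    while (head != tail) {
        const ce_cmd *c = &ring[head];
        switch (c->op) {
        case CMD_COPY_PAGE:  copy_page(c->src, c->dst); break;
        case CMD_WRITE_PTE:  write_pte(c->va, c->dst);  break;
        case CMD_REMOVE_PTE: remove_pte(c->va);         break;
        }
        head = (head + 1) % cap;
    }
}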
Figure 8 illustrates a flow chart of method steps for tracking page faults, in accordance with one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-6, those of ordinary skill in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the present invention.

As shown, method 800 begins at step 802, where the PPU 202 executes a first instruction associated with a first virtual memory address. In step 804, the PPU MMU 213 determines that the PPU page table 208 does not include a first mapping associated with the first virtual memory address. In step 805, the stream multiprocessor 504 or other unit executing the first instruction is stalled. In step 806, the PPU 202 transmits a first page fault to the fault buffer 216.

Figure 9 illustrates a flow chart of method steps for utilizing a fault buffer to resolve page faults, in accordance with one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-6, those of ordinary skill in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the present invention.

As shown, method 900 begins at step 902, where the fault buffer 216 stores a plurality of fault buffer entries. In step 904, the PPU fault handler 215 reads a fault buffer entry in order to resolve it. In step 906, the PPU fault handler 215 determines what steps are to be taken to resolve the fault buffer entry and triggers a page fault sequence to remediate one or more page faults associated with the fault buffer entry. In step 908, the PPU fault handler 215 transmits a command to the command queue 214 to update the PPU page table 208. In step 910, the stream multiprocessor 504 or other unit that had been stalled resumes execution.

Figure 10 illustrates a flow diagram of method steps for creating and managing a common pointer in a virtual memory architecture, in accordance with one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-6, those of ordinary skill in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the present invention.

As shown, method 1000 begins at step 1002, where the UVM driver 101 stores a first page state directory entry that includes a mapping between a first virtual memory address and a first physical memory address. In step 1004, the CPU MMU 209 or the PPU MMU 213 translates the first virtual memory address into the first physical memory address based on the first page state directory entry. In step 1005, the memory page associated with the first virtual memory address is copied or migrated. In step 1006, the UVM driver 101 stores a second page state directory entry (or, alternatively, updates the first page state directory entry) that includes a mapping between the first virtual memory address and a second physical memory address. The second page state directory entry is stored in response to a modification of the state of the memory page associated with the first page state directory entry. For example, a memory page can be migrated from one memory unit to another, or it can be copied from one memory unit to another. In step 1008, the CPU MMU 209 or the PPU MMU 213 translates the first virtual memory address into the second physical memory address based on the second page state directory entry.
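The effect of method 1000 is that a single virtual address stays valid across copies and migrations. A minimal sketch, assuming 4 KB pages and invented psd_dir_entry fields:

#include <stdint.h>
#include <stdio.h>

/* One virtual page whose physical backing can change while the virtual
 * address (the common pointer) stays fixed. */
typedef struct {
    uint64_t va_page;  /* virtual page number                */
    uint64_t pa_page;  /* current physical page number       */
} psd_dir_entry;

/* Steps 1004 and 1008: translate through whatever entry is current. */
static uint64_t psd_translate(const psd_dir_entry *e, uint64_t offset)
{
    return (e->pa_page << 12) + offset;   /* assuming 4 KB pages */
}

/* Steps 1005 and 1006: after the page is copied or migrated, re-point the
 * entry; the virtual address visible to applications does not change. */
static void psd_update_backing(psd_dir_entry *e, uint64_t new_pa_page)
{
    e->pa_page = new_pa_page;
}

int main(void)
{
    psd_dir_entry e = { 0x12345u, 0x100u };
    printf("before: %llx\n", (unsigned long long)psd_translate(&e, 0x10));
    psd_update_backing(&e, 0x200u);       /* page migrated elsewhere */
    printf("after:  %llx\n", (unsigned long long)psd_translate(&e, 0x10));
    return 0;
}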
Figure 11 illustrates a flow chart of method steps for managing ownership status in a virtual memory subsystem, in accordance with one embodiment of the present invention. Although the method steps are described in conjunction with the systems of Figures 1-6, those of ordinary skill in the art will appreciate that any system configured to perform the method steps, in any order, is within the scope of the present invention.

As shown, method 1100 begins at step 1102, where the CPU MMU 209 or the PPU MMU 213 issues a page fault in response to a memory access request by the CPU 102 or the PPU 202, respectively. In step 1104, the CPU fault handler 211 or the PPU fault handler 215 identifies an entry in the page state directory 210 that corresponds to the memory page associated with the virtual memory address. In step 1106, the CPU fault handler 211 or the PPU fault handler 215 reads the ownership status associated with the memory page from the entry in the page state directory 210. In step 1107, the ownership status of the memory page in the page state directory 210 is modified, and the memory page is migrated if necessary. In step 1108, the CPU fault handler 211 or the PPU fault handler 215 updates the local page table to include an entry that corresponds to the virtual memory address and associates the virtual memory address with the memory page.

In summary, a unified virtual memory system is provided that manages memory shared between the CPU and one or more PPUs. The unified virtual memory system includes a page state directory that stores the mappings found in both the page table associated with the CPU and the page table associated with the PPU. When a PPU or CPU triggers a page fault, the page state directory is available to provide the status of the memory page associated with the page fault. In addition, when the PPU triggers a page fault, the PPU transmits the page fault to a fault buffer. The PPU fault handler examines the contents of the fault buffer to resolve the page faults. Providing a fault buffer allows the PPU fault handler to "coalesce" page faults generated by the PPU. In addition, the unified virtual memory driver manages the page state directory and the associated virtual memory addresses such that the virtual memory addresses are shared between the CPU and the PPU. Finally, the unified virtual memory driver implements a migration scheme that migrates memory pages based on their usage by the CPU and the PPU.

One advantage of the disclosed approach is that a user-level application does not need to keep track of multiple pointers that depend on where a particular piece of data is stored. An additional advantage is the usage-based migration of memory pages between memory units, which allows memory pages to be local to the units that access them most frequently. Another advantage is that faults generated by the PPU can be coalesced in the fault buffer for efficient handling.

One embodiment of the invention can be implemented as a program product for use with a computer system. The programs of the program product define the various functions of the embodiments, including the methods described herein, and can be embodied on a variety of computer-readable storage media.
Exemplary computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer, such as compact disc read-only memory (CD-ROM) disks readable by a CD-ROM drive, flash memory, read-only memory (ROM) chips, or any type of solid-state non-volatile semiconductor memory) on which permanent information is stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or a hard-disk drive, or any type of solid-state random-access semiconductor memory) on which changeable information is stored.

The invention has been described above with reference to specific embodiments. However, it will be understood by those skilled in the art that various modifications and changes may be made without departing from the spirit and scope of the invention as set forth in the appended claims. Accordingly, the foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. The scope of the embodiments of the invention is set forth by the appended claims.
An integrated circuit dynamically compensates for circuit aging by measuring the aging with an aging sensor. The aging sensor uses the same circuit to measure circuit speeds in both aged and un-aged conditions. An example aging sensor includes two delay lines (411, 412). The delay lines (411, 412) are controlled to be in a static aging state, or the delay lines (411, 412) are coupled to form a ring oscillator that can operate in an aged state, where the frequency is slowed by aging, or in an un-aged state, where the frequency is not slowed by aging. The integrated circuit uses the aging measurements for dynamic voltage and frequency scaling. The dynamic voltage and frequency scaling uses a table of operating frequencies and corresponding voltages that is periodically updated based on the aging measurements. The integrated circuit uses information about the relationship between the aging measurements and circuit performance to update the table.
1.A circuit for sensing aging of an integrated circuit, comprising:a first delay chain having a first input and a first output;a second delay chain having a second input and a second output; anda control module configured to place the first delay chain and the second delay chain in an aging state, an aged oscillating state, or an un-aged oscillating state;wherein the aging state comprises supplying an operating voltage to the first delay chain and the second delay chain;the aging state further includes supplying a first logic value to the first input and a second logic value to the second input, wherein the first logic value is a complement of the second logic value;the aged oscillating state includes selecting between the first output and the second output, and coupling the selected signal to the first input and the second input, wherein the first output is selected after the first input transitions to the first logic value and the second output is selected after the second input transitions to the second logic value; andthe un-aged oscillating state includes selecting between the first output and the second output, and coupling the selected signal to the first input and the second input, wherein the first output is selected after the first input transitions to the second logic value and the second output is selected after the second input transitions to the first logic value.2.The circuit of claim 1, wherein the first delay chain comprises a first chain of delay elements coupled between the first input and the first output, and the second delay chain comprises a second chain of delay elements coupled between the second input and the second output.3.The circuit of claim 2, wherein each delay element comprises an inverter.4.The circuit of claim 3, wherein each inverter comprises a plurality of p-channel transistors in series and a plurality of n-channel transistors in series.5.The circuit of claim 1, wherein the aged oscillating state comprises coupling the first delay chain and the second delay chain to oscillate at a frequency that is slowed by aging.6.The circuit of claim 5, wherein the un-aged oscillating state comprises coupling the first delay chain and the second delay chain to oscillate at a frequency that is not slowed by aging.
Integrated circuit dynamic de-aging

Background

Field

The present invention relates to integrated circuits and, more particularly, to systems and methods for dynamically de-aging integrated circuit performance.

Related art

Integrated circuits have become more and more complex. To improve the trade-off between performance and power, integrated circuits can operate at different frequencies and voltages at different times. For example, an integrated circuit can operate in multiple frequency-voltage modes, including a high-performance mode and a low-performance mode. The high-performance mode uses a high clock frequency and a high supply voltage, which provides high performance but also high power consumption. The low-performance mode uses a low clock frequency and a low supply voltage, thereby providing low power consumption but also low performance. Additionally, multiple blocks within an integrated circuit can operate at different frequencies and different voltages.

The particular supply voltage that provides a given clock frequency can vary based on various conditions. For example, manufacturing variations can result in different integrated circuits produced from the same design having different relationships between voltage and frequency. Additionally, variations in circuit characteristics within an integrated circuit can result in different sections of the integrated circuit having different relationships between voltage and frequency. Temperature also affects the relationship between voltage and frequency. Furthermore, there may be drops in the supply voltage that vary depending on the operation of the various modules in the integrated circuit. Adaptive voltage scaling (AVS) can be used to control the supply voltage based on sensed performance measurements of the integrated circuit.

Device aging (especially in nanometer technologies) results in changes in the electrical parameters of an integrated circuit. For example, transistor threshold voltages can be increased by effects such as positive bias temperature instability (PBTI) and negative bias temperature instability (NBTI). Circuits generally become slower with aging. This further affects the relationship between the supply voltage and the clock frequency. The rate of aging and the amount of aging can vary with how the integrated circuit is used. For example, a mobile phone may age more when the user uses the phone throughout the day for multiple tasks (such as text messaging, phone calls, streaming video, and playing games) than when the phone is on standby for most of the day.

Existing aging compensation schemes estimate the impact of aging on a device a priori. Then, based on the worst-case scenario, the impact of device aging is accounted for by including a large guard band, such that the device still meets its design requirements when the full impact of aging manifests itself near the end of the device's expected operational life. This results in a conservative design and can cause severe performance loss.

Overview

In one aspect, a circuit for sensing aging of an integrated circuit is provided. The circuit includes: a first delay chain having a first input and a first output; a second delay chain having a second input and a second output; and a control module configured to place the first delay chain and the second delay chain in an aging state, an aged oscillating state, or an un-aged oscillating state.

In one aspect, a method for de-aging an integrated circuit is provided.
The method includes: initializing operation of the integrated circuit with a safe voltage and frequency; enabling dynamic voltage and frequency scaling of the integrated circuit using initial values in a coefficient table, the coefficient table including target performance sensor measurements for a plurality of operating frequencies; sensing the aging of the integrated circuit; updating the coefficient table based on the sensed aging; and continuing the dynamic voltage and frequency scaling using the updated coefficient table.

In one aspect, an integrated circuit is provided that includes: an aging sensor configured to sense aging of circuitry in the integrated circuit, wherein the aging sensor uses the same circuitry to measure circuit speed in aged and un-aged conditions; and a core power reduction controller module configured to control a supply voltage used in the integrated circuit, wherein the supply voltage is based at least in part on the aging sensed by the aging sensor.

In one aspect, an integrated circuit is provided that includes: means for sensing aging of circuitry in the integrated circuit, using the same circuitry to measure circuit speed in aged and un-aged conditions; and means for de-aging the integrated circuit, configured to control a supply voltage used in the integrated circuit, wherein the supply voltage is based at least in part on the sensed aging.

Other features and advantages of the invention will be apparent from the following description of the aspects of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

The details of the invention, both as to its structure and operation, can be gleaned in part by studying the accompanying drawings, in which like reference numerals refer to like parts:

FIG. 1 is a functional block diagram of an electronic system with dynamic de-aging in accordance with embodiments disclosed herein;

FIG. 2 is a diagram illustrating a layout of an integrated circuit with dynamic de-aging in accordance with embodiments disclosed herein;

FIG. 3 is a functional block diagram of a performance sensor in accordance with embodiments disclosed herein;

FIG. 4 is a schematic illustration of an aging sensor in accordance with embodiments disclosed herein;

FIG. 5 is a schematic illustration of a delay element in accordance with embodiments disclosed herein;

FIG. 6 is a schematic illustration of an aging sensor control module in accordance with embodiments disclosed herein;

FIGS. 7 and 8 are waveform diagrams illustrating the operation of the aging sensor of FIG. 4; and

FIG. 9 is a flow diagram of a process for dynamic de-aging in accordance with embodiments disclosed herein.

Detailed description

The detailed description set forth below with reference to the drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details in order to provide a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts can be practiced without these specific details. In some instances, well-known structures and components are shown in simplified form to avoid obscuring such concepts.

FIG. 1 is a functional block diagram of an electronic system with dynamic de-aging in accordance with embodiments disclosed herein. The system can be implemented using one or more integrated circuits.
The system can be used, for example, in a mobile phone.

The system includes various modules that perform the operational functions of the system. The term "operational" is used to distinguish functions that can be considered to provide the primary use of the electronic system from those that can be considered ancillary. The example system illustrated in FIG. 1 includes a processor module 120, a graphics processing unit (GPU) 130, a modem module 140, and a core module 150. The processor module 120 can provide general programmable functionality; the graphics processing unit 130 can provide graphics functionality; the modem module 140 can provide communication functionality, such as wireless communication in accordance with Long Term Evolution (LTE) or Code Division Multiple Access (CDMA) protocols; and the core module 150 can provide various functions not provided by the other modules.

The clock generation module 113 receives a reference clock input and supplies one or more clock signals to the other modules. The clock generation module 113 can include phase-locked loops and frequency dividers to supply clock signals at various frequencies. The clock generation module 113 supplies the clocks to the other modules at frequencies controlled by the core power reduction (CPR) controller module 111. All or part of the functions of the clock generation module 113 may be located in the various modules that use the clock signals.

A power management integrated circuit (PMIC) 115 supplies one or more voltages to the other modules in the system. The PMIC 115 can include switching voltage regulators and low-dropout regulators. The PMIC 115 can be a separate integrated circuit. The voltages supplied by the PMIC 115 are also controlled by the core power reduction controller module 111. Each module of the system can have one supply voltage or multiple supply voltages, and multiple modules can operate from a common supply voltage.

The processor module 120, the graphics processing unit 130, the modem module 140, and the core module 150 include performance sensors. In the example system of FIG. 1, the processor module 120 includes two performance sensors 121, 122; the graphics processing unit 130 includes a performance sensor 131; the modem module 140 includes a performance sensor 141; and the core module 150 includes two performance sensors 151, 152. Each performance sensor includes circuitry for measuring the speed of circuits. For example, a performance sensor can count the oscillations of a ring oscillator. Each performance sensor also includes an aging sensor. The aging sensor measures the effect of aging on circuit performance. The performance sensors measure the performance characteristics of the circuitry in the sensors. While the performance of circuitry in an integrated circuit may vary with position, temperature, voltage drop, and other parameters, the performance measured by a performance sensor can be used to estimate the performance of similar circuitry near the performance sensor. In an embodiment, the aging sensor uses the same circuitry to measure circuit speed in both aged and un-aged conditions.

The core power reduction controller module 111 controls the clock frequencies and supply voltages used by each module in the system. The core power reduction controller module 111 can control the frequencies and voltages, for example, based on an operating mode selected by the processor module 120. In an embodiment, the processor selects an operating frequency, and the core power reduction controller module 111 determines the supply voltage.
The core power reduction controller module 111 can determine the supply voltage based on performance measurements from the performance sensors in the corresponding module and based on aging measurements from the aging sensors. The core power reduction controller module 111 can determine the supply voltage such that it equals, or only slightly exceeds (e.g., by 10 mV), the minimum voltage required for the selected operating frequency. In other embodiments, the core power reduction controller module 111 may control only the clock frequency. Alternatively or additionally, the system can control other parameters that affect performance, such as substrate voltage. Example functions of the core power reduction controller module 111 are further described with reference to the process illustrated in FIG. 9.

Existing systems that do not include dynamic de-aging set the supply voltage to a value that exceeds the minimum required voltage by a substantial guard band. The guard band (e.g., 100 mV) is used to compensate for aging effects (whose magnitudes are unknown at any given time), among other factors. In existing systems, the amount of guard band for aging is fixed and is applied from the beginning of system operation, even when aging has not yet occurred. The guard band is applied in conjunction with other parameters, such as the clock frequency. The de-aging systems and methods described herein eliminate or reduce the performance loss due to guard bands.

FIG. 2 is a diagram illustrating a layout of an integrated circuit with dynamic de-aging in accordance with embodiments disclosed herein. The integrated circuit can be used to implement the electronic system of FIG. 1. The integrated circuit can be fabricated, for example, using a complementary metal-oxide-semiconductor (CMOS) process.

The integrated circuit of FIG. 2 includes four peripheral blocks 210 (210a, 210b, 210c, and 210d) located along the edges of the integrated circuit. The integrated circuit includes a processor module 220, a graphics processing module 230, and a modem module 240 as large blocks within the integrated circuit. Other functions of the integrated circuit, such as those provided by the core module 150 in the system of FIG. 1, may be spread throughout the remaining area 250 of the integrated circuit. The core power reduction controller module 111 of FIG. 1 can also be implemented in the remaining area 250 of the integrated circuit.

The integrated circuit also includes performance sensors 261 that are spaced across the area of the integrated circuit. Although FIG. 2 illustrates twenty performance sensors, an integrated circuit implementation can include hundreds of performance sensors. The performance sensors may, for example, be connected serially to the core power reduction controller module 111, or may be connected by a bus.

FIG. 3 is a functional block diagram of a performance sensor in accordance with embodiments disclosed herein. The performance sensor can be used to implement the performance sensors 121, 122, 131, 141, 151, 152 of FIG. 1 and the performance sensors 261 of FIG. 2.

The performance sensor of FIG. 3 includes a plurality of PVT sensors 311-319. Each of the PVT sensors 311-319 measures circuit performance, for example, by operating a ring oscillator to produce an output whose frequency indicates circuit performance. Different ones of the PVT sensors 311-319 can measure the performance of different types of circuits, such as circuits with different types of transistors.
The name PVT refers to process, voltage, and temperature, which are the main factors affecting circuit performance.

The performance sensor includes an aging sensor 330. The aging sensor 330 can measure the effects of circuit aging. The aging sensor 330 includes delay lines that can be controlled (e.g., by the core power reduction controller module 111) to be in an aging state, an aged oscillating state, or an un-aged oscillating state. In an example embodiment, in the aging state, the delay lines are held in a static powered state. The delay lines are powered by the same supply voltage used by the circuitry whose aging the aging sensor senses. In the aged oscillating state, the delay lines are coupled to produce a clock output that oscillates at a frequency based on the delays of aged circuitry. In the un-aged oscillating state, the delay lines are coupled to produce a clock output that oscillates at a frequency based on the delays of un-aged circuitry. The same transistors are used in both the aged oscillating state and the un-aged oscillating state.

The performance sensor includes a control module 320. The control module 320 provides an interface to other modules (e.g., to the core power reduction controller module 111) to communicate the sensed performance measurements. The control module 320 may also include a counter to count the oscillations of the PVT sensors 311-319 and the aging sensor 330. The counter can count over a known time interval to measure the frequency of an oscillator in the PVT sensors 311-319 or in the aging sensor 330. The control module 320 can cause the supply voltage to the PVT sensors 311-319 to be removed when the PVT sensors 311-319 are not performing measurements. The aging sensor 330, however, remains powered during the aging state.
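The count-over-an-interval measurement can be sketched as follows; read_ro_counter and wait_microseconds are hypothetical stand-ins for the counter and timing hardware in the control module 320.

#include <stdint.h>

static uint32_t read_ro_counter(void)      { return 0; /* hardware counter */ }
static void wait_microseconds(uint32_t us) { (void)us; /* known interval   */ }

/* Returns the oscillator frequency in Hz for a window given in microseconds. */
double measure_ro_frequency(uint32_t window_us)
{
    uint32_t start = read_ro_counter();
    wait_microseconds(window_us);
    uint32_t ticks = read_ro_counter() - start;   /* unsigned wrap is safe */
    return (double)ticks * 1.0e6 / (double)window_us;
}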
FIG. 4 is a schematic illustration of an aging sensor in accordance with embodiments disclosed herein. The aging sensor can implement the aging sensor 330 of FIG. 3, which can be used in the system of FIG. 1 and the integrated circuit of FIG. 2.

The aging sensor of FIG. 4 includes a first delay chain 411 and a second delay chain 412. The first delay chain 411 receives a first input AIN and produces a first output A8. The second delay chain 412 receives a second input BIN and produces a second output B8. Each delay chain includes a chain of delay elements (delay elements 450-458 in the first delay chain 411 and delay elements 470-478 in the second delay chain 412). In the illustrated embodiment, each delay chain includes nine delay elements, and the delay elements are inverters.

The aging sensor includes an aging sensor control module 425 that controls the functions of the aging sensor. The aging sensor control module 425 also generates a clock output (CLKOUT) that can indicate the performance of both aged circuitry and un-aged circuitry. The aging sensor control module 425 receives a run control input (RUN). When the run control input is low, the aging sensor is not running (the aging state) and the delay chains (also referred to as delay lines) are held in a particular state to age the delay elements. When the run control input is high, the delay chains are coupled to form a ring oscillator whose frequency is slowed by aging (the aged oscillating state), or coupled to form a ring oscillator whose frequency is not slowed by aging (the un-aged oscillating state). The choice of the aged or un-aged state is controlled by a MIN/MAX control input.

In the embodiment illustrated in FIG. 4, four multiplexers are used to place the delay chains in the aging state, the aged oscillating state, or the un-aged oscillating state. In the aging sensor of FIG. 4, the multiplexers invert from input to output. Other embodiments may use non-inverting multiplexers.

The multiplexer 441 selects between the output (A8) of the first delay chain (when in a running state) and a static low voltage (when not in a running state). The multiplexer 461 selects between the output (B8) of the second delay chain (when in a running state) and a static high voltage (when not in a running state).

The multiplexer 440 selects between the output (AOUT) of the multiplexer 441 and the output (BOUT) of the multiplexer 461 to supply the input (AIN) of the first delay chain 411. The multiplexer 460 selects between the output (AOUT) of the multiplexer 441 and the output (BOUT) of the multiplexer 461 to supply the input (BIN) of the second delay chain 412. The selection performed by the multiplexer 440 is controlled by a first control signal (INITA) supplied by the aging sensor control module 425, and the selection performed by the multiplexer 460 is controlled by a second control signal (INITB) supplied by the aging sensor control module 425.

In the aging state, the input of the first delay chain 411 has a first logic value, and the input of the second delay chain 412 has a second logic value that is the complement of the first logic value. In the embodiment of FIG. 4, the first logic value is high and the second logic value is low.

In the aging state, the multiplexer 441 selects the low-voltage input and AOUT is high, while the multiplexer 461 selects the high-voltage input and BOUT is low. The aging sensor control module 425 generates a first control signal (INITA) that is high. Thus, the multiplexer 440 selects BOUT (which is low), and the multiplexer output (AIN) is high. The aging sensor control module 425 generates a second control signal (INITB) that is low. Thus, the multiplexer 460 selects AOUT (which is high), and the multiplexer output (BIN) is low. This causes the first delay chain 411 and the second delay chain 412 to be held in complementary states in which alternating delay elements have complementary outputs. Specifically, in the first delay chain 411, the output (A0) of the first delay element 450 is low, the output (A1) of the second delay element 451 is high, the output (A2) of the third delay element 452 is low, and so on, until the output (A8) of the ninth delay element 458 is low. In the second delay chain 412, the output (B0) of the first delay element 470 is high, the output (B1) of the second delay element 471 is low, the output (B2) of the third delay element 472 is high, and so on, until the output (B8) of the ninth delay element 478 is high.

The static voltages on the delay elements tend to age the delay elements such that particular output transitions are slowed. For example, the output (A0) of the first delay element 450 is low during aging, and the falling transition on that output will be slowed by the aging. Similarly, the output (A1) of the second delay element 451 is high during aging, and the rising transition on that output will be slowed by the aging. Since rising and falling transitions alternate from delay element to delay element, and the transitions affected by aging likewise alternate, a single transition at the input of a delay chain propagates through the entire chain affected by aging.
The first delay chain 411 is slowed by aging for rising transitions on its input. Similarly, the second delay chain 412 is slowed by aging for falling transitions on its input.

In the aged oscillating state, the aging sensor control module 425 controls the first and second control signals such that the delay chains oscillate with a period that includes the delay of the first delay chain 411 for the rising transition on its input and the delay of the second delay chain 412 for the falling transition on its input. Operation in the aged oscillating state is illustrated in the waveform diagram of FIG. 7. At the beginning of the waveform, the run control input RUN is low and the delay chains are in the aging state, where the input (AIN) of the first delay chain is high and the input (BIN) of the second delay chain is low.

At time 701, the run control input switches high, and the MIN/MAX control input is high to place the aging sensor in the aged oscillating state. The first control signal (INITA) switches high, so that the multiplexer 440 switches and the input (AIN) of the first delay chain 411 switches low. The falling transition on the input of the first delay chain 411 propagates through the delay chain and, through the multiplexer 441, reaches AOUT, which falls at time 702. At this point, both the first and second control signals from the aging sensor control module 425 are low, so that AOUT is selected and the inputs of both delay chains rise (the fall of AOUT being inverted by the multiplexers 440 and 460).

The rising transition on the inputs of the delay chains propagates through the two delay chains concurrently. The delay of the first delay chain 411 for the rising transition on its input is slowed by aging. The delay of the second delay chain 412 for the rising transition on its input is not slowed by aging. At time 703, the rise on the input of the second delay chain 412 has propagated to its output, and at time 704, the rise on the input of the first delay chain 411 has propagated to its output. The difference between time 704 and time 703 is an effect of aging. In FIG. 7, the delay difference is exaggerated to illustrate the effect clearly.

Prior to time 703, the first and second control signals from the aging sensor control module 425 are set such that the multiplexer 440 and the multiplexer 460 select AOUT (from the delay chain affected by aging for rising inputs). Thus, after time 704, the inputs of both delay chains fall (the rise of AOUT being inverted by the multiplexer 440 and the multiplexer 460).

The falling transition on the inputs of the delay chains propagates through the two delay chains concurrently. The delay of the first delay chain 411 for the falling transition on its input is not slowed by aging. The delay of the second delay chain 412 for the falling transition on its input is slowed by aging. At time 705, the fall on the input of the first delay chain 411 has propagated to its output, and at time 706, the fall on the input of the second delay chain 412 has propagated to its output. The difference between time 706 and time 705 is an effect of aging.

Prior to time 705, the first and second control signals from the aging sensor control module 425 are set such that the multiplexer 440 and the multiplexer 460 select BOUT (from the delay chain affected by aging for falling inputs). Thus, the inputs of both delay chains rise, and one oscillation of the delay chains is complete. The signal transition sequence then repeats from time 702, as described.
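The net effect of the two modes is easiest to see numerically. The following sketch (with invented delay values) shows why the aged-mode period stretches while the un-aged-mode period does not.

#include <stdio.h>

/* Illustrative model of the two oscillation modes. In the aged mode the
 * period sums the two chain delays that aging slows (chain A for a rising
 * input, chain B for a falling input); in the un-aged mode it sums the two
 * delays that aging does not slow. The nanosecond values are invented. */
int main(void)
{
    double a_rise = 1.02, b_fall = 1.02;  /* ns, slowed ~2% by aging stress */
    double a_fall = 1.00, b_rise = 1.00;  /* ns, not slowed by aging        */

    double t_aged   = a_rise + b_fall;    /* 2.04 ns, ~490 MHz              */
    double t_unaged = a_fall + b_rise;    /* 2.00 ns,  500 MHz              */

    printf("aged %.2f ns, un-aged %.2f ns, slowdown %.1f%%\n",
           t_aged, t_unaged, (t_aged - t_unaged) / t_unaged * 100.0);
    return 0;
}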
At time 709, the run control input switches low and the aging sensor switches back to the aging state. The aged oscillating state in FIG. 7 lasts only a few oscillations, but in an integrated circuit the aged oscillating state can last, for example, hundreds or thousands of oscillations.

The aging sensor control module 425 can time the transitions of its control signals to the multiplexer 440 and the multiplexer 460 using signals from the midpoints of the delay chains. For example, the outputs (A3, B3) of the fourth delay element in each delay chain can be logically NANDed to produce the clock output CLKOUT. The clock output can then be used to generate the control signals (INITA, INITB).

In the aged oscillating state (from time 701 to time 709), the period of the clock output combines the delay of the first delay chain for the rising transition on its input with the delay of the second delay chain for the falling transition on its input. Each of these delays is slowed by aging, so the oscillation frequency can be used to measure the amount of aging that has occurred.

In the un-aged oscillating state, the aging sensor control module 425 controls the first and second control signals such that the delay chains oscillate with a period that includes the delay of the first delay chain 411 for the falling transition on its input and the delay of the second delay chain 412 for the rising transition on its input. Operation in the un-aged oscillating state is illustrated in the waveform diagram of FIG. 8. At the beginning of the waveform, the run control input RUN is low and the delay chains are in the aging state, where the input (AIN) of the first delay chain is high and the input (BIN) of the second delay chain is low.

At time 801, the run signal switches high, and the MIN/MAX control signal is low to place the aging sensor in the un-aged oscillating state. The first control signal (INITA) switches low, so that the multiplexer 440 switches and the input (AIN) of the first delay chain 411 switches low. The falling transition on the input of the first delay chain 411 propagates through the delay chain and, through the multiplexer 441, reaches AOUT, which falls at time 802. At this point, both the first and second control signals from the aging sensor control module 425 are low, so that AOUT is selected and the inputs of both delay chains rise (the fall of AOUT being inverted by the multiplexers 440 and 460).

The rising transition on the inputs of the delay chains propagates through the two delay chains concurrently. The delay of the first delay chain 411 for the rising transition on its input is slowed by aging. The delay of the second delay chain 412 for the rising transition on its input is not slowed by aging. At time 803, the rise on the input of the second delay chain 412 has propagated to its output, and at time 804, the rise on the input of the first delay chain 411 has propagated to its output. The difference between time 804 and time 803 is an effect of aging. In FIG. 8, the delay difference is exaggerated to illustrate the effect clearly.

Prior to time 803, the control signals from the aging sensor control module 425 are set such that the multiplexer 440 and the multiplexer 460 select BOUT (from the delay chain that is not affected by aging for rising inputs).
Thus, after time 803, the inputs of both delay chains fall (the rise of BOUT being inverted by the multiplexer 440 and the multiplexer 460).

The falling transition on the inputs of the delay chains propagates through the two delay chains concurrently. The delay of the first delay chain 411 for the falling transition on its input is not slowed by aging. The delay of the second delay chain 412 for the falling transition on its input is slowed by aging. At time 805, the fall on the input of the first delay chain 411 has propagated to its output, and at time 806, the fall on the input of the second delay chain 412 has propagated to its output. The difference between time 806 and time 805 is an effect of aging.

Prior to time 805, the control signals from the aging sensor control module 425 are set such that the multiplexer 440 and the multiplexer 460 select AOUT (from the delay chain that is not affected by aging for falling inputs). Thus, the inputs of both delay chains rise, and one oscillation of the delay chains is complete. The signal transition sequence then repeats from time 802, as described.

At time 809, the run control input switches low and the aging sensor switches back to the aging state. The un-aged oscillating state in FIG. 8 lasts only a few oscillations, but in an integrated circuit the un-aged oscillating state can last, for example, hundreds or thousands of oscillations.

The aging sensor control module 425 can time the transitions of its control signals to the multiplexer 440 and the multiplexer 460 using signals from the midpoints of the delay chains, as described for the aged oscillating state.

In the un-aged oscillating state (from time 801 to time 809), the period of the clock output combines the delay of the first delay chain for the falling transition on its input with the delay of the second delay chain for the rising transition on its input. Neither of these delays is slowed by aging, so the oscillation frequency can be used as a reference in determining the amount of aging that has occurred. In some cases, the effects of aging can even increase the frequency of oscillation in the un-aged oscillating state.

FIG. 5 is a schematic illustration of a delay element in accordance with embodiments disclosed herein. The delay element can be used to implement the delay elements in the delay chains of the aging sensor of FIG. 4. The delay element of FIG. 5 receives an input (IN) and produces an inverted output (OUT).

The delay element is an inverter comprising three p-channel transistors 511, 512, 513 whose sources and drains are connected in series between the supply voltage and the output. The gates of the p-channel transistors 511, 512, 513 are connected to the input. The delay element also includes three n-channel transistors 521, 522, 523 whose sources and drains are connected in series between the ground reference and the output. The gates of the n-channel transistors 521, 522, 523 are connected to the input. The use of transistors in series increases the delay of the delay element, so that the delay chains in the aging sensor can have fewer stages. Many other types of delay elements can be used, for example, depending on the particular aging effect of interest.

FIG. 6 is a schematic diagram of an aging sensor control module in accordance with embodiments disclosed herein. The aging sensor control module can be used to implement the aging sensor control module 425 of the aging sensor of FIG. 4.
The circuit illustrated in FIG. 6 is exemplary, and other circuits may be used to implement the same or similar functions.

The aging sensor control module uses NAND gate 611 and buffer 615 to generate the clock output from the delay chain midpoints (A3, B3) and the run control input (RUN). NAND gate 631 and NAND gate 632 form a set-reset latch that is initialized while the run control input is low and flips when the clock output rises. The output of NAND gate 631 is low while the run control input is low (in the aging state) and then transitions high on the first falling edge of the clock output.

The XOR gate 621 is used to toggle the control signals (INITA, INITB) based on the polarity of the clock output and the MIN/MAX control input. Transitions on the control signals (after the rise of the run control input) are enabled by NAND gate 622. The first control signal (INITA) is buffered by NAND gate 641, which also forces the value of the first control signal during the aging state (when the run control input is low). The second control signal (INITB) is buffered by an inverter 642.

FIG. 9 is a flow diagram of a process for dynamic de-aging in accordance with embodiments disclosed herein. The process can be performed, for example, by the core power reduction controller module 111 in the electronic system of FIG. 1.

The process uses an aging sensor, such as the aging sensor of FIG. 4. The oscillation frequency in the aged oscillating state (F_aged) and the oscillation frequency in the un-aged oscillating state (F_unaged) are measured and used to de-age (compensate for the aging of) the operation of the associated circuitry. The sensor may be referred to simply as a ring oscillator or RO. The process uses a previously determined relationship between the aging measured by the aging sensor and the aging of the operational circuitry, so that the aging measured in the aging sensor can be used to compensate for the aging of the operational circuitry. The process will be described in detail for one domain (an operational circuit module with a common supply voltage), but it should be understood that the process can be used for multiple domains, each of which can operate at multiple frequencies.

The relationship between the aging measured by the aging sensor and the aging of the operational circuitry can be determined by characterization testing of actual integrated circuits. For example, an integrated circuit can be operated at various temperatures, frequencies, and voltages while the performance of the aging sensors and the performance of the operational modules of the integrated circuit are measured over time.

The concepts and variables used in the dynamic de-aging process, or in the description of the process, are defined below.

The aging RO degradation (ARD) reflects the degradation, due to aging, of the ring oscillator in the aging sensor. The ARD expresses sensor aging as the percentage change in the frequency of the sensor oscillations due to aging. In one embodiment, ARD = (F_unaged - F_aged) / F_unaged + AED (in percent). F_unaged is the frequency of the aging sensor in the un-aged oscillating state; it is not sensitive to transistor aging. F_aged is the frequency of the aging sensor in the aged oscillating state; it is sensitive to aging and will gradually slow as the transistors degrade. Therefore, the ARD will gradually increase as the transistors age.
For domains with multiple aging sensors, the ARD is the largest measurement from all the aging sensors in the domain. The ARD should be greater than or equal to zero; the AED (defined below) can be used to offset negative values. Alternatively or additionally, the process can set negative ARD values to zero. The ARD can be voltage dependent: the ARD generally increases as the measurement voltage decreases.

The aging error distribution (AED) represents systematic and random variation in ARD measurements at time 0 (before aging). Ideally, the ARD at time = 0 would be zero, but the ARD can be a small random value with a distribution centered at zero. Since the ARD is the largest measured value from all the aging sensors in the domain, the ARD at time = 0 is more likely to be greater than zero than negative. An ARD >= 0 at time 0 is acceptable, but if the ARD is less than zero at time 0, the AED is used to guard-band the ARD. If the ARD of a domain is negative during product characterization at time = 0, the worst-case absolute value sets the AED.

The aging scale ratio (ASR) represents the relationship between sensor aging and the aging of the operating circuits in the associated domain. The aging of the operating circuits can be expressed as a change in the maximum operating frequency (Fmax) of those circuits. The process can set ASR = Fmax degradation / ARD, where the Fmax degradation is the amount of change in the maximum operating frequency of the circuits in a domain for a particular condition. Unit-level ASR values can be collected from product high temperature operating life (HTOL) test units, where the worst reading (selected from among the multiple readings made during the HTOL test) is used as the ASR value for the circuits in the given domain. Alternatively, multiple ASR values, for example from a degradation table, may be used.

The voltage-to-frequency scaling factor represents the relationship between the voltage of the operating circuits and the maximum operating frequency. The voltage-to-frequency scaling factor can be expressed as the voltage per Fmax percentage (VPF), which indicates the amount of voltage increase required to gain 1% of Fmax in the domain. The VPF can be determined from product characterization. The highest VPF value measured for a given domain should be used. The VPF can be voltage dependent. The voltage range can be divided into sub-ranges with multiple VPF values, or the highest VPF value can be used for all voltages.

The aging protection band (AGB) is the amount of voltage increase required to compensate for transistor degradation in order to maintain the Fmax of the domain's circuits. The process can set AGB = VPF * ASR * ARD. The AGB can be updated after each ARD measurement. The AGB can be voltage dependent. The process can use multiple AGB values for different voltage ranges or can scale one AGB value for use at other voltages.

The aging target adjustment (ATA) is a value converted from the AGB that is used to update a coefficient table indicating what performance sensor measurements are required to operate at each frequency with the associated operational module. The conversion maps the AGB value (which expresses the amount of aging compensation as a voltage) to target performance sensor values. The mapping can use, for example, the relationship between the supply voltage and the performance sensor measurements obtained from integrated circuit characterization. The ATA value updates the coefficient table values to compensate for the aging degradation.
For example, a coefficient table value indicating the performance sensor measurement required to operate at an associated frequency in an associated mode of operation may be increased. In systems that do not use the coefficient table described above, the conversion to an ATA value can be omitted or replaced by other calculations suitable for the system.

The process of Figure 9 illustrates how the above deaging information can be used to operate an integrated circuit. For clarity of description, the process is described for a single domain, but it should be understood that the process can be used to deage multiple domains.

In block 910, the integrated circuit is initialized with a safe voltage and frequency. This combination of voltage and frequency has sufficient guard band for reliable operation of the integrated circuit under all expected conditions. The expected conditions may include all conditions in which the integrated circuit is specified to operate. The safe voltage and frequency allow reliable operation of a worst-case aged integrated circuit.

In block 920, the process enables dynamic voltage and frequency scaling in the integrated circuit using the initial values in the coefficient table. The coefficient table contains a target performance sensor measurement for each operating frequency. An example dynamic voltage and frequency scaling operation includes measuring performance to obtain a performance sensor measurement, looking up the current operating frequency in the coefficient table to obtain the corresponding target performance sensor measurement, and conditionally adjusting the voltage based on the value of the performance sensor measurement relative to the target value. If, for example, the performance sensor measurement is less than the target value, the voltage can be raised to increase the circuit speed. The initial values in the coefficient table include sufficient guard band for end-of-life (EOL) aging of the integrated circuit. The initial values can be determined by characterization of the integrated circuit. The guard band for end-of-life aging can be effected by using an initial ATA value. The process then proceeds to perform deaging based on the sensed aging.

In block 930, the process measures the aging of the integrated circuit. Block 930 can include measuring the ARD based on ARD = (F_unaged − F_aged) / F_unaged + AED. In an embodiment, F_aged is measured before F_unaged. This can avoid or minimize the reversal of aging that can occur while the aging sensor oscillates to perform the measurements. The process can then calculate the AGB based on AGB = VPF * ASR * ARD. The AGB is calculated for the normal (non-standby) mode. The process can then calculate an ATA value to replace the initial (or current) ATA value. In an embodiment, the process limits the ATA value to a maximum end-of-life value, which can be determined by characterization of the integrated circuit. In various embodiments, the ARD can be measured at a fixed voltage or at the operating voltage currently applied to the aging sensor.

In block 940, the process updates the coefficient table based on the aging sensed in block 930 (a brief sketch of this measurement-and-update step is given below). The process can update the coefficient table for one frequency, all frequencies, or a range of frequencies. Alternatively, the process can update the coefficient table before enabling dynamic voltage and frequency scaling. In another alternative, the coefficient table is updated for the initial operating frequency, dynamic voltage and frequency scaling is enabled, and then the remaining coefficient table entries are updated.
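The calculation in blocks 930 and 940 can be summarized in a few lines of C. This is a minimal sketch under stated assumptions: the struct, function names, and percentage units are hypothetical stand-ins, and the conversion from AGB to an ATA / coefficient-table update is left abstract, since it depends on characterization data.

```c
/* Hypothetical per-domain deaging parameters. */
typedef struct {
    double aed;  /* aging error distribution, in percent            */
    double asr;  /* aging scale ratio: % Fmax degradation per % ARD */
    double vpf;  /* volts of supply increase needed to gain 1% Fmax */
} deage_params_t;

/* Block 930: compute the ARD from the two measured sensor frequencies.
 * F_aged is measured first to minimize deaging during measurement. */
static double compute_ard(double f_unaged, double f_aged, double aed)
{
    double ard = 100.0 * (f_unaged - f_aged) / f_unaged + aed;
    return (ard < 0.0) ? 0.0 : ard;  /* ARD should be >= 0 */
}

/* Blocks 930/940: derive the aging protection band, in volts. The caller
 * would then map the AGB to an ATA and update the coefficient table. */
static double compute_agb(const deage_params_t *p,
                          double f_unaged, double f_aged)
{
    double ard = compute_ard(f_unaged, f_aged, p->aed);
    return p->vpf * p->asr * ard;  /* AGB = VPF * ASR * ARD */
}
```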
In block 950, the integrated circuit operates using dynamic voltage and frequency scaling with the updated coefficient table from block 940.

Periodically, the process returns to blocks 930 and 940 to further update the coefficient table for aging effects. The process can update the coefficient table based on the expiration of a timer. The period between updates can be, for example, 1 minute, 10 minutes, or 1 hour. The period between updates can change over time, for example, updating less frequently as the integrated circuit ages. Additionally or alternatively, the process may update the coefficient table based on changes in the mode of operation of the integrated circuit or of the working modules of the integrated circuit. For example, the coefficient table can be updated when the integrated circuit switches from an operating mode to a standby mode, or vice versa.

The process for dynamic deaging can be modified, for example, by adding, omitting, reordering, or changing blocks. For example, the process can deage by adjusting the clock frequency (or another performance parameter) instead of the voltage; in such an embodiment, the process can skip the calculation that uses the voltage-to-frequency scaling factor. In addition, the blocks can be executed concurrently.

While various embodiments of the invention have been described above with respect to particular implementations, many variations of the invention are possible. For example, the number of individual components can be increased or decreased. The systems and methods can be modified depending on which specific aging effects are most important in the integrated circuit. The aging sensor can be customized to the specific manufacturing technology of the integrated circuit. An integrated circuit can include multiple aging sensors to measure multiple aging effects. Additionally, the features of the various embodiments may be combined in combinations that differ from those described above.

Those skilled in the art will appreciate that the various illustrative blocks and modules described in connection with the embodiments disclosed herein can be implemented in various forms. Some blocks and modules have been described above generally in terms of their functionality. How such functionality is implemented depends on the design constraints imposed on the overall system. Skilled persons can implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the invention. In addition, the grouping of functions within a module or block is for ease of description. Specific functions may be moved from one module or block to another, or distributed across modules or blocks, without departing from the invention.

The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein may be implemented or executed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium. An exemplary storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.

The above description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles described herein can be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the description and drawings presented herein represent presently preferred embodiments of the invention and are representative of the subject matter broadly contemplated by the invention. It is further understood that the scope of the invention is not limited to the illustrated embodiments but is to be defined by the appended claims.
Systems, methods, and other embodiments associated with rotating keys for a memory are described. According to one embodiment, a memory system comprises a memory controller configured to control access to a memory and to process memory access requests. Rekeying logic is configured to rotate a first key that was used to scramble data in the memory and re-scramble the data with a second key by: determining when the memory controller is in an idle cycle and performing a rekeying operation on a portion of the memory during the idle cycle, and pausing the rekeying operation when the memory controller is not in an idle cycle to allow memory access requests to be performed and resuming the rekeying operation during a next idle cycle.
CLAIMS

What is claimed is:

1. A memory system, comprising: a memory controller configured to control access to a memory and to process memory access requests; and rekeying logic configured to rotate a first key that was used to scramble data in the memory and re-scramble the data with a second key by: determining when the memory controller is in an idle cycle and performing a rekeying operation on a portion of the memory during the idle cycle; and pausing the rekeying operation when the memory controller is not in an idle cycle to allow memory access requests to be performed and resuming the rekeying operation during a next idle cycle.

2. The memory system of claim 1, further comprising a boundary address register wherein the rekeying logic is configured to store a boundary address in the boundary address register that identifies a location of the next rekeying operation.

3. The memory system of claim 1, wherein the memory controller is configured to send a rekeying notice to the rekeying logic that indicates that the memory controller is in the idle cycle.

4. The memory system of claim 1, further comprising selector logic configured to select the first key or the second key to be used for processing a memory access request based, at least in part, on a boundary address that indicates which key was used to scramble the memory at a location specified in the memory access request.

5. The memory system of claim 1, wherein the rekeying logic is configured to intermittently perform the rekeying operations in between memory access requests.

6. The memory system of claim 1, wherein the memory controller is configured to perform a read access request for a requested address by: determining if the requested address is in a first portion or a second portion of the memory based on a boundary address that indicates a position of the rekeying operations, wherein the first portion has been scrambled using the first key and the second portion has been scrambled by the second key; selecting the first key or the second key based on the determination; and descrambling data from the requested address using the selected key and returning the descrambled data.

7. The memory system of claim 1, wherein the memory controller is configured to perform a write access request that indicates data to be written at a requested address by: determining if the requested address is in a first portion or a second portion of the memory based on a boundary address that indicates a position of the rekeying operations, wherein the first portion has been scrambled using the first key and the second portion has been scrambled by the second key; selecting the first key or the second key based on the determination; and scrambling the data from the write access request using the selected key and writing the scrambled data to the requested address in the memory.

8. The memory system of claim 1, further comprising a scrambling register for storing the data from a boundary address when an access request to access the memory is received before the data at the boundary address is rekeyed by the rekeying logic.

9. The memory system of claim 1, further comprising function logic to generate functions for rekeying the data at a boundary address, the functions including a first function for descrambling the data with the first key and a second function for re-scrambling the data with the second key.
10. The memory system of claim 1, further comprising a key storage for storing and controlling access to the first key and the second key.

11. A method, comprising: setting a boundary address in a memory for a rekeying operation that rotates keys for data in the memory; in response to receiving memory access requests, processing, by a memory controller, the memory access requests to the memory; determining when the memory controller is in an idle cycle and in response to being in the idle cycle, the method comprises: performing the rekeying operation on a portion of the memory at the boundary address during the idle cycle; incrementing the boundary address and repeating the rekeying operation; and pausing the rekeying operation when the memory controller is not in an idle cycle to allow the memory access requests to be performed and resuming the rekeying operation during a next idle cycle.

12. The method of claim 11, wherein performing the rekeying operation includes descrambling data stored at the boundary address using a first key that was used to scramble the data, re-scrambling the data with a second key, and storing the re-scrambled data at the boundary address in the memory.

13. The method of claim 11, wherein determining when the memory controller is in an idle cycle includes receiving a rekeying notice that indicates that the memory controller is in the idle cycle.

14. The method of claim 11, further comprising selecting a first key or a second key to be used for processing a memory access request based, at least in part, on the boundary address.

15. The method of claim 11, wherein the rekeying operations are performed intermittently in between the processing of the memory access requests.

16. The method of claim 11, wherein for a memory access request that is a read access request for a requested address, the method further comprises: determining if the requested address is in a first portion or a second portion of the memory based on the boundary address, wherein the first portion has been scrambled using a first key and the second portion has been scrambled by a second key; selecting the first key or the second key based on the determination; and descrambling data from the requested address using the selected key and returning the descrambled data.

17. A method comprising: setting a boundary address in a memory for a rekeying operation that rotates encryption keys for data in the memory, wherein the boundary address is a boundary between a first portion of the memory that is encrypted with a first key and a second portion of the memory that is encrypted with a second key; in response to receiving an access request for a requested address in the memory: pausing the rekeying operation; comparing the requested address to the boundary address to determine if the requested address is in the first portion or the second portion of the memory; selecting the first key or the second key based on the comparison; and processing the access request using the selected key.

18. The method of claim 17, further comprising: determining an idle cycle when there are no access requests pending to be completed; and in response to the idle cycle, resuming the rekeying operation to rotate the encryption keys.

19. The method of claim 17, further comprising performing the rekeying operations intermittently in between the processing of the access requests.
20. The method of claim 17, wherein the rekeying operation is initiated based at least in part on a key rotation interval and the rekeying operation performs rekeying of the data in the memory during idle cycles of the memory.
KEY ROTATION FOR A MEMORY CONTROLLER

BACKGROUND

[0001] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventor(s), to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

[0002] Hackers exploit weaknesses in computer systems or networks to obtain data from a memory. To reduce the likelihood that a hacker would be able to read ill-gotten data, the data is scrambled (i.e., encrypted) according to a mathematical scrambling function. The hacker then has to discover the scrambling function that was used to scramble the data in order to unscramble (i.e., decrypt) the scrambled data. To discover the scrambling function, the hacker may attack a component of a computer system, such as a system on chip (SOC), and make requests to write input values to a memory. By analyzing the output values of the memory to the input values, the hacker can try to reverse engineer the operations of the scrambling function. The more time a hacker has to analyze the output, the greater the likelihood that the hacker will be able to determine the scrambling function and use the function to unscramble the scrambled data from the memory.

[0003] In some encryption techniques, the scrambling function is based on a key. The key is a piece of information (e.g., a value, parameter) that is known only to individuals that have been authorized to read the scrambled data. To increase security of the data, the key is changed periodically in order to change the scrambling function and consequently, change how the data is scrambled and unscrambled. Because the scrambling function is dependent on the key, changing the key periodically increases the security of the memory since a hacker would have less time to discover the scrambling function before the scrambling function changes. Typically, a key is changed during the initial boot sequence of a computer system when the computer system is turned on after having been powered down. However, weeks or even months may elapse before the computer system is powered down and turned back on, which leaves the data more vulnerable to being unscrambled. Therefore, revoking the key and replacing the revoked key with a new key, referred to as key rotation, reduces this vulnerability. However, typically, key rotation requires processing time and significant resources, and thus, is done infrequently despite increased vulnerability.

SUMMARY

[0004] In general, in one aspect this specification discloses a memory system that comprises a memory controller configured to control access to a memory and to process memory access requests. Rekeying logic is configured to rotate a first key that was used to scramble data in the memory and re-scramble the data with a second key by: determining when the memory controller is in an idle cycle and performing a rekeying operation on a portion of the memory during the idle cycle; and pausing the rekeying operation when the memory controller is not in an idle cycle to allow memory access requests to be performed and resuming the rekeying operation during a next idle cycle.
[0005] In another aspect, the memory system further comprises a boundary address register wherein the rekeying logic is configured to store a boundary address in the boundary address register that identifies a location of the next rekeying operation. [0006] In another aspect, the memory controller is configured to send a rekeying notice to the rekeying logic that indicates that the memory controller is in the idle cycle. [0007] In another aspect, the memory system further comprises selector logic configured to select the first key or the second key to be used for processing a memory access request based, at least in part, on a boundary address that indicates which key was used to scramble the memory at a location specified in the memory access request. [0008] In another aspect of the memory system, the rekeying logic is configured to intermittently perform the rekeying operations in between memory access requests. [0009] In another aspect, the memory controller is configured to perform a read access request for a requested address by: determining if the requested address is in a first portion or a second portion of the memory based on a boundary address that indicates a position of the rekeying operations, wherein the first portion has been scrambled using the first key and the second portion has been scrambled by the second key; selecting the first key or the second key based on the determination; and descrambling data from the requested address using the selected key and returning the descrambled data. [0010] In another aspect of the memory system, the memory controller is configured to perform a write access request that indicates data to be written at a requested address by: determining if the requested address is in a first portion or a second portion of the memory based on a boundary address that indicates a position of the rekeying operations, wherein the first portion has been scrambled using the first key and the second portion has been scrambled by the second key; selecting the first key or the second key based on the determination; and scrambling the data from the write access request using the selected key and writing the scrambled data to the requested address in the memory. [0011] In another aspect, the memory system further comprises a scrambling register for storing the data from a boundary address when an access request to access the memory is received before the data at the boundary address is rekeyed by the rekeying logic. [0012] In another aspect, the memory system further comprises function logic to generate functions for rekeying the data at a boundary address, the functions including a first function for descrambling the data with the first key and a second function for re-scrambling the data with the second key. [0013] In another aspect, the memory system further comprises a key storage for storing and controlling access to the first key and the second key. [0014] In general, in another aspect, this specification discloses a method that comprises setting a boundary address in a memory for a rekeying operation that rotates keys for data in the memory. In response to receiving memory access requests, processing, by a memory controller, the memory access requests to the memory. 
The method determines when the memory controller is in an idle cycle and in response to being in the idle cycle, the method comprises: performing the rekeying operation on a portion of the memory at the boundary address during the idle cycle; incrementing the boundary address and repeating the rekeying operation; and pausing the rekeying operation when the memory controller is not in an idle cycle to allow the memory access requests to be performed and resuming the rekeying operation during a next idle cycle. [0015] In another aspect of the method, performing the rekeying operation includes descrambling data stored at the boundary address using a first key that was used to scramble the data, re-scrambling the data with a second key, and storing the re-scrambled data at the boundary address in the memory. [0016] In another aspect, determining when the memory controller is in an idle cycle includes receiving a rekeying notice that indicates that the memory controller is in the idle cycle. [0017] In another aspect, the method further comprises selecting a first key or a second key to be used for processing a memory access request based, at least in part, on the boundary address. [0018] In another aspect of the method, the rekeying operations are performed intermittently in between the processing of the memory access requests. [0019] In another aspect of the method, for a memory access request that is a read access request for a requested address, the method further comprises: determining if the requested address is in a first portion or a second portion of the memory based on the boundary address, wherein the first portion has been scrambled using a first key and the second portion has been scrambled by a second key; selecting the first key or the second key based on the determination; and descrambling data from the requested address using the selected key and returning the descrambled data. [0020] In general, in another aspect, this specification discloses a method that comprises setting a boundary address in a memory for a rekeying operation that rotates encryption keys for data in the memory, wherein the boundary address is a boundary between a first portion of the memory that is encrypted with a first key and a second portion of the memory that is encrypted with a second key. In response to receiving an access request for a requested address in the memory, the method comprises: pausing the rekeying operation; comparing the requested address to the boundary address to determine if the requested address is in the first portion or the second portion of the memory; selecting the first key or the second key based on the comparison; and processing the access request using the selected key. [0021] In another aspect, the method further comprises: determining an idle cycle when there are no access requests pending to be completed; and in response to the idle cycle, resuming the rekeying operation to rotate the encryption keys. [0022] In another aspect, the method further comprises performing the rekeying operations intermittently in between the processing of the access requests. [0023] In another aspect, the rekeying operation is initiated based at least in part on a key rotation interval and the rekeying operation performs rekeying of the data in the memory during idle cycles of the memory. 
BRIEF DESCRIPTION OF THE DRAWINGS

[0024] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate various systems, methods, and other embodiments of the disclosure. Illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. In some examples one element may be designed as multiple elements or multiple elements may be designed as one element. In some examples, an element shown as an internal component of another element may be implemented as an external component and vice versa.

[0025] FIG. 1 illustrates one embodiment of a system associated with key rotation for a memory controller.

[0026] FIG. 2 illustrates one embodiment of a system associated with key rotation for a memory controller and a key generator.

[0027] FIG. 3 illustrates one embodiment of a system associated with key rotation for a memory controller and a scrambling register.

[0028] FIG. 4 illustrates one embodiment of a system associated with key rotation for a memory controller and a function logic.

[0029] FIG. 5 illustrates one embodiment of a system associated with key rotation for a memory controller and key storage.

[0030] FIG. 6 illustrates one embodiment of a method associated with key rotation for rekeying data.

[0031] FIG. 7 illustrates one embodiment of a method associated with processing a memory request that is received while data rekeying is not complete.

DETAILED DESCRIPTION

[0032] In data encryption, key rotation is the process of revoking a first encryption key that was used to scramble a set of data and replacing the first key with a different second key. Key rotation further includes rekeying the data according to the second key. To rekey data, the data is descrambled with the first key and re-scrambled according to the second key. For example, to rekey the contents of a dynamic random access memory (DRAM), the entire contents of the DRAM are descrambled with the first key and re-scrambled according to the second key. The process of rotating the keys and consequently rekeying the data may be lengthy, require a significant number of resources, and interrupt and/or delay normal operations of the memory.

[0033] Described herein are examples of systems, methods, and other embodiments associated with key rotation for a memory controller. In one embodiment, data is rekeyed during idle cycles of the memory controller to reduce the impact of rekeying data on the normal operation of the memory. For example, suppose that the memory controller controls a memory that stores scrambled data. In an idle cycle, the memory controller is not processing access requests to the memory. Conversely, in an active cycle, the memory controller is processing access requests to read data from the memory and/or write data to the memory. To avoid interrupting the access requests to the memory during an active cycle, the data is descrambled and re-scrambled during the idle cycles of the memory controller. Thus rekeying operations are performed in between memory operations to allow for normal memory access as well as rekeying. In one embodiment, the rekeying process may run as a background process in the memory controller without delaying or interrupting access requests.

[0034] The disclosed systems and methods provide the ability to re-scramble data more frequently, which is not dependent upon a system reset or powering down of the computer.
The re-scrambling and re-keying may be performed in portions of memory that allow for continued operation of the memory.

[0035] FIG. 1 illustrates one embodiment of a memory system that includes rekeying logic 100 for rotating encryption keys associated with a memory 120 and a memory controller 130. The rekeying logic 100 may be implemented in a computing system or computer readable medium, and may be implemented as part of the memory controller 130 or as a separate component that operates with the memory controller 130. For purposes of explanation, devices 110a, 110b, 110c, through 110n are configured to access the memory 120 via the memory controller 130. The memory controller 130 is configured to control access to the memory 120 and process memory access requests (e.g., read requests or write requests) that are received from the devices 110a-110n.

[0036] For example, a write access request is completed when data is written to the memory 120, and a read access request is completed when the requested data is read from the memory 120 and returned to the requesting device. In one embodiment, the memory controller 130 is implemented as a hardware component with integrated circuits configured to perform the disclosed functions, may be a processor configured to perform memory requests, and/or may include firmware or executable instructions for performing the functions disclosed.

[0037] Accordingly, the memory controller 130 manages the data flow between the devices 110a-110n and the memory 120. An example device 110a-110n may be a central processing unit (CPU), a hardware device, a system-on-chip (SOC) device, or another component that is part of the same computing system as the memory controller 130 (e.g., in a single computing device). The memory 120 is a device used to store data in or for a computing device including, but not limited to, non-volatile memory (e.g., flash memory, read-only memory, programmable read-only memory) and volatile memory (e.g., random access memory, dynamic random access memory, static random access memory). The memory controller 130 may be integrated into a chip, such as an SOC, with the memory 120, or configured as a chip separate from the memory 120, and may include firmware for performing certain functions.

[0038] When the memory controller 130 is not managing data flow between the devices 110a-110n and the memory 120, the memory controller 130 is idle. When idle, the memory controller 130 is available to rekey data stored in the memory 120. In one embodiment, the rekeying logic 100 is configured to identify the idle periods of the memory controller 130 and control the rekeying process of the data in the memory 120 during the idle periods.

[0039] For example, the rekeying logic 100 is configured to determine that the memory controller 130 is idle in one or more ways. In one embodiment, the rekeying logic 100 monitors the activity of the memory controller 130 to determine when the memory controller is not processing memory requests and is thus idle. The monitoring may include checking a queue that holds memory requests, and if the queue is empty, the rekeying logic 100 determines that the controller 130 is idle. In another embodiment, the memory controller 130 is configured to send a rekeying notice that is received by the rekeying logic 100. The rekeying notice indicates that the memory controller 130 is idle and available to rekey data. In this embodiment, the rekeying logic 100 initiates the rekeying process (or restarts an incomplete rekeying process) upon receiving the rekeying notice. A minimal sketch of the idle check follows.
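To make the two detection mechanisms of paragraph [0039] concrete, the following C sketch assumes a hypothetical controller state struct; neither the struct nor the field names come from the embodiments:

```c
#include <stdbool.h>

/* Hypothetical controller state, for illustration only. */
typedef struct {
    int  pending_requests;  /* depth of the memory request queue       */
    bool rekeying_notice;   /* set by the controller when it goes idle */
} mem_ctrl_t;

/* The rekeying logic treats the controller as idle when the request
 * queue is empty or when the controller has sent a rekeying notice. */
static bool controller_is_idle(const mem_ctrl_t *ctrl)
{
    return ctrl->pending_requests == 0 || ctrl->rekeying_notice;
}
```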
[0040] The rekeying process may be in two different states. One state is where all data in the memory has been previously rekeyed (with an old key) and a new rekeying process is beginning (with a new key). Another state is where the rekeying process has begun but has not yet completed. In the latter state, a portion of the data in the memory 120 has been rekeyed with a new key and a portion of the data has not been rekeyed (meaning the data is currently keyed with an old key). In one embodiment, since the rekeying process is performed during idle periods of the controller 130, the rekeying process will typically start and stop multiple times before all data in the memory 120 is completely rekeyed.

[0041] To rekey the data, the memory controller 130 uses information about where in the memory 120 to start rekeying and which key to use to rekey the data. Initially, the rekeying starts at a starting address in the memory 120, such as address 0, and continues sequentially from that location to an ending address. Of course, other starting locations may be used and/or the order may not be sequential. After the rekeying process starts, the rekeying logic 100 tracks the progress of the rekeying process by the memory address that was last rekeyed and records the next address to be rekeyed in a register.

[0042] For example, the rekeying logic 100 sets and stores a boundary address B in a boundary register 140 to indicate the next address in the memory 120 to be rekeyed. As data in the memory 120 is rekeyed, the boundary address B is advanced to reflect the next location to be rekeyed. When the rekeying process is paused/stopped due to the memory controller 130 no longer being in an idle state, the rekeying logic 100 uses the boundary address B as a restart point to continue the rekeying process during the next idle state of the controller 130.

[0043] The boundary address B is also used to differentiate the portion of the memory 120 that has been rekeyed from the remaining portion of the memory 120 that has not yet been rekeyed. Addresses on one side of the boundary B will use a first key and addresses on the other side of the boundary B will use a second key. For example, as seen in FIG. 1 for purposes of explanation, suppose the data in memory portion 121 has been rekeyed with a new key and the data in memory portion 122 has been keyed with an old key and thus still needs to be rekeyed. Differentiating the different portions of memory allows the memory controller 130 to continue processing memory requests before the rekeying process has rekeyed the entire memory. Memory operations are affected by the incomplete rekeying process because some data is scrambled with the new key (portion 121) and other data is scrambled with the old key (portion 122).

[0044] Thus, when a memory request is processed, the memory controller 130 compares the address from the memory request to the boundary address B from the boundary register. The comparison indicates whether the memory request is requesting an address that is in memory portion 121 or 122. Depending on which portion the address is in, the memory controller 130 selects and uses the appropriate key that is associated with that memory portion (e.g., the new key or the old key). The memory request is then processed with the appropriate key to scramble/descramble the data from the associated portion of memory 120.
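The comparison in paragraph [0044] reduces to a one-line selection. In this sketch, the opaque key type and the convention that addresses below the boundary have already been rekeyed (portion 121) are assumptions for illustration:

```c
#include <stdint.h>

typedef struct scramble_key scramble_key_t;  /* opaque key (hypothetical) */

/* Select the key for a memory request: addresses below the boundary B
 * (portion 121) were rekeyed with the new key; addresses at or above B
 * (portion 122) are still scrambled with the old key. */
static const scramble_key_t *select_key(uint64_t req_addr,
                                        uint64_t boundary_b,
                                        const scramble_key_t *new_key,
                                        const scramble_key_t *old_key)
{
    return (req_addr < boundary_b) ? new_key : old_key;
}
```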
[0045] For example, suppose that none of the data in the memory 120 has been rekeyed; the boundary address B would be set by the rekeying logic 100 to the first address of the memory 120 (e.g., memory address 0). The memory controller 130 stores the boundary address B in the boundary register 140 to indicate that rekeying should begin with the data stored at the first address of the memory 120. During an idle cycle of the memory controller 130, the rekeying logic 100 retrieves the boundary address B from the boundary register 140 and provides the boundary address to the memory controller 130. The memory controller 130 can then retrieve the data at the boundary address B, or wait to retrieve the data at the boundary address B until the memory controller 130 receives a key for rekeying.

[0046] In another example, consider that a first portion 121 of the memory 120 has been rekeyed and a second portion 122 has not been rekeyed. The boundary address B is set to the first address in the second portion 122 to indicate that this address should be rekeyed during the next idle cycle of the memory controller 130. While the memory controller 130 is idle, the memory controller 130 will incrementally rekey the data stored in the memory 120, beginning with the data stored at the boundary address B.

[0047] In one embodiment, selector logic 150 is configured to store the multiple keys that are used with the memory 120, select which keys to use in order to rekey the data, and provide the selected keys to the memory controller 130. In one embodiment, the selector logic 150 is implemented as a function of the rekeying logic 100. The memory controller 130 uses the keys selected by the selector logic 150 and the boundary address B to rekey the data.

[0048] For example, as discussed above, to rekey the data at the boundary address B, the selector logic 150 provides a first key 151 (e.g., the old key) to descramble the data at that address in the memory 120. The first key 151 is the key that was used to originally scramble the data. The memory controller 130 then receives the second key 152 (e.g., the new key) to re-scramble the data at the boundary address B. In one embodiment, the memory controller 130 may request the second key 152 from the selector logic 150 when the memory controller 130 determines that the data from the address has been descrambled. Accordingly, the data at the boundary address is rekeyed, and the boundary address B is incremented to the next address in memory. A sketch of this step is shown below.

[0049] The memory controller 130 then stores the new boundary address B in the boundary register 140. The rekeying process is repeated for the data stored at the new boundary address, and the process continues iteratively through the data in the memory 120 until all the data is rekeyed. But, as previously stated, the rekeying process is performed during idle periods of the memory controller 130. The rekeying process is paused/stopped when the memory controller 130 receives a memory request and thus is no longer idle. Accordingly, the rekeying process may start and pause multiple times before completion.
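The descramble-then-re-scramble step of paragraph [0048] can be sketched as follows. The toy backing store and XOR-based scrambling are hypothetical stand-ins for the memory 120 and the controller's scrambling functions; they keep the sketch self-contained but are not a secure design:

```c
#include <stdint.h>

/* Toy backing store; a stand-in for the memory 120. */
static uint64_t memory[1024];

/* XOR stand-ins for the controller's scramble/descramble functions. */
static uint64_t descramble(uint64_t data, uint64_t key) { return data ^ key; }
static uint64_t scramble(uint64_t data, uint64_t key)   { return data ^ key; }

/* Rekey the word at the boundary address B: descramble with the old
 * key 151, re-scramble with the new key 152, write the result back,
 * and return the incremented boundary (stored in boundary register 140). */
static uint64_t rekey_at_boundary(uint64_t b, uint64_t old_key,
                                  uint64_t new_key)
{
    memory[b] = scramble(descramble(memory[b], old_key), new_key);
    return b + 1;
}
```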
This technique allows memory requests to be processed during the paused rekeying process without delaying or interrupting the memory requests. The technique also allows the data in memory 120 to be rekeyed while the memory system is in an operational state. Thus the system (rekeying logic 100 and memory controller 130) is configured to cycle or otherwise alternate between processing memory operations and performing rekeying operations based on the state or condition of the system.

[0051] Therefore, the key rotation, including rekeying of the data, does not require additional resources. Instead, the key rotation can occur as a background process, thereby reducing delays and interruption to the normal operation of the memory controller 130 and memory 120. Also, as opposed to the situation where a computer system is powered down and turned back on to effect key rotation, here the data stored in the memory 120 is still accessible during the rekeying process. Moreover, the contents of the memory 120 do not have to be rekeyed all at once in the described embodiments. Instead, access requests can be handled by selecting the key associated with the portion of the memory 120 being accessed.

[0052] FIG. 2 illustrates another embodiment of the rekeying logic 100 associated with key rotation for the memory controller 130, where the rekeying logic 100 includes key generator logic 210 for generating new keys. The rekeying logic 100 operates in a similar manner as described with reference to FIG. 1 but with additional functionality. For simplicity of explanation, two keys (the first key 151 and the second key 152) were described in FIG. 1. However, more keys may be used. In the embodiment of FIG. 2, three keys are shown: the first key 151, the second key 152, and a third key 153, which may be used to scramble the data in the memory 120 in three different ways. Accordingly, the rekeying logic 100 is configured to store multiple boundary addresses (e.g., B1 and B2) in the boundary register 140 to identify which portions of memory 120 correspond to which key.

[0053] In one embodiment, the key generator logic 210 is configured to generate new keys so that older keys can be revoked. For example, at some point the rekeying of the memory 120 will have progressed such that there is no longer data stored in the memory 120 that is keyed according to the first key 151. Rather than reusing the first key 151, the rekeying logic 100 may decide to revoke (e.g., discard) the first key 151 and use the key generator logic 210 to generate a new key to replace the first key 151. The key generator logic 210 may include a random number generator or algorithmic generator to generate keys. A key may be revoked and replaced based on a predefined time schedule or other condition as desired.

[0054] As discussed above, the rekeying logic 100 controls the rekeying process based on a state or condition of the system. For example, the rekeying logic 100 may use the idle cycles of the memory controller 130 to rekey data, use a time schedule to initiate rekeying, or automatically rekey data once a predetermined percentage of the data in the memory 120 has been rekeyed. In another embodiment, the rekeying logic 100 may rotate the keys when a predetermined amount of time elapses from the previous rekeying process.

[0055] For example, the rekeying logic 100 may be configured to rotate the keys based on a key rotation interval and may use different keys for different portions of memory.
For example, if the interval is fifteen (15) seconds, then data is rekeyed according to the first key 151 in a first 15-second interval, a next portion of data is rekeyed according to the second key 152 in a second 15-second interval, and the data in a following portion is rekeyed according to the third key 153 in a third 15-second interval. In one embodiment, the key rotation interval may be determined by the number of idle cycles observed from the memory controller 130 and the total size of the memory 120 being rekeyed. To increase the security level, the key rotation interval may be shortened.

[0056] Suppose that 15 seconds is not enough time to rekey the entire contents of the memory 120. To demarcate the portions of the memory 120 that have been rekeyed with a specific key, the rekeying logic 100 tracks the rekeying progress for each key by setting multiple boundary addresses in the boundary register 140. For instance, a first portion 221 of the memory 120 is defined as preceding the boundary address B1 (e.g., from address 0 to B1). In the example given above, the first portion 221 is keyed according to the first key 151. Likewise, a second portion 222 is defined as preceding boundary address B2 (e.g., from address B1+1 to B2) and is keyed according to the second key 152. A third portion 223 is defined as following boundary address B2 (e.g., from address B2+1 to the last address) and is keyed according to the third key 153. Thus, multiple boundary addresses can be used to define borders in the memory 120 when three or more keys are used (see the selection sketch after this discussion).

[0057] In another embodiment, the rekeying logic 100 may be configured to rotate keys in response to a predetermined number of memory access requests to the memory 120 being performed (e.g., setting a threshold). For example, the rekeying logic 100 may rotate the keys once the memory controller 130 has received 100 memory requests. The key rotation may also be initiated based on the components associated with a computer system and how they are used. In one embodiment, the rekeying logic 100 rotates the keys based on the number of idle cycles that elapse for the memory controller 130 and/or the size of the memory 120. Alternatively, the rekeying logic 100 may rotate the keys based, in part, on the number of keys available. For example, the more keys that are available, the more often the rekeying logic 100 may rotate the keys and re-scramble the memory 120.

[0058] In another embodiment, the rekeying logic 100 may rotate the available keys continuously in response to a rekeying process completing. For example, when all of the data of the memory 120 has been rekeyed according to the first key 151, a new rekeying process is initiated and the data is then rekeyed according to the second key 152. Once all of the data has been rekeyed according to the second key 152, the data is then rekeyed according to the third key 153, and then the data is rekeyed with the first key 151 again. Thus, if there are three available keys, the process cycles through the three keys and repeats. Therefore, the available keys may be used repeatedly.
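With two boundary addresses in play, the key selection of the earlier sketch generalizes to a short cascade. The key indices and the inclusive boundary convention below are illustrative assumptions, not taken from the embodiments:

```c
#include <stdint.h>

/* With three keys, two boundary addresses (B1, B2) split the memory
 * into portions 221, 222, and 223; each portion maps to the key it
 * was most recently rekeyed with. */
static int select_key_index(uint64_t addr, uint64_t b1, uint64_t b2)
{
    if (addr <= b1) return 1;  /* portion 221: first key 151  */
    if (addr <= b2) return 2;  /* portion 222: second key 152 */
    return 3;                  /* portion 223: third key 153  */
}
```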
[0059] FIG. 3 illustrates another embodiment of the memory system, similar to FIG. 1, that includes the rekeying logic 100 and also includes a scrambling register 360. The rekeying logic 100 and the memory controller 130 operate in a similar manner as described above with respect to FIG. 1. The scrambling register 360 is used to hold and process data during the rekeying process. For example, the scrambling register 360 is used by the memory controller 130 to store data that has been read from the memory 120 but has not yet been scrambled according to the new key that is being applied to the memory 120. Data may be held in the scrambling register 360 when the rekeying process is interrupted or paused prior to completion.

[0060] This situation may occur when an idle cycle of the memory controller 130 is not long enough to descramble and re-scramble the data stored at the address in the memory 120 currently being rekeyed. As previously stated, at some point during the rekeying process the memory controller 130 may receive a memory request and thus change from an idle cycle to an active cycle. This event causes the rekeying operation to be paused before the data has been descrambled and/or re-scrambled. Accordingly, the data read from the memory 120 and waiting to be rekeyed is stored in the scrambling register 360. If the rekeying process is interrupted, the data stored in the scrambling register 360 can be rekeyed in a subsequent idle cycle once the rekeying process is restarted. Once the data has been rekeyed, the memory controller 130 restores the re-scrambled data in the memory 120 at the address from which it was read. Alternatively, the memory controller 130 may store the data from the boundary address B in the scrambling register 360 in response to the memory controller 130 receiving a memory access request.

[0061] FIG. 4 illustrates another embodiment of the memory controller 130 that includes function logic 400 that implements scrambling/descrambling functions. The memory controller 130 operates in a similar manner as described with reference to FIG. 1. As previously described, the rekeying logic 100 provides the memory controller 130 with a boundary address B at which to begin rekeying, and the selector logic 150 provides the memory controller 130 with the key needed to rekey the data at the boundary address B.

[0062] In one embodiment, the function logic 400 of the memory controller 130 is configured to use the boundary address B and the key provided to the memory controller 130 by the rekeying logic 100 to generate a scrambling function that can scramble and/or unscramble data. Consider an example where the data of the memory 120 is being rekeyed according to the second key 152 but was previously scrambled with the first key 151. The data of the memory 120 is thus descrambled according to the first key 151 and then re-scrambled according to the second key 152.

[0063] In one embodiment, function A 410 is configured to combine a key with a boundary address and use the combination to generate function B 420. Function B 420 scrambles the data at the boundary address B according to the second key 152. Function A 410 is also used to generate function C 430. In one embodiment, function C 430 is an inverse function of function B 420. Function C 430 is used to descramble data at the boundary address B based on the key provided to function A 410. Of course, other types of scrambling/descrambling functions may be implemented.
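One way to picture the function A/B/C structure of paragraph [0063] is an address-tweaked XOR mask. This is a deliberately weak sketch chosen only because XOR is self-inverting; a real design would use a proper cipher, and the mixing constant is an arbitrary illustrative choice:

```c
#include <stdint.h>

/* Function A (hypothetical form): combine the key and the boundary
 * address into a per-address mask. Not a secure scrambling function. */
static uint64_t function_a(uint64_t key, uint64_t boundary_b)
{
    return key ^ (boundary_b * 0x9E3779B97F4A7C15ull);  /* mix the address */
}

/* Function B: scramble the data at the boundary address with the key. */
static uint64_t function_b(uint64_t data, uint64_t key, uint64_t b)
{
    return data ^ function_a(key, b);
}

/* Function C: the inverse of function B (XOR is its own inverse). */
static uint64_t function_c(uint64_t data, uint64_t key, uint64_t b)
{
    return data ^ function_a(key, b);
}
```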
[0064] FIG. 5 illustrates one embodiment of a rekeying system that includes the rekeying logic 100, the memory controller 130, and a key storage 510. The rekeying logic 100 operates in a similar manner as described above with respect to FIG. 1 and/or the other figures. However, unlike the embodiment described in FIG. 1, the scrambling keys (e.g., first key 151 and second key 152) are not stored in conjunction with the selector logic 150. Instead, the key storage 510 stores and controls access to the keys. In one embodiment, the key storage 510 may be a secure processor or memory responsible for storing and loading keys when requested.

[0065] As discussed above, the selector logic 150 selects the appropriate key to be used for descrambling and/or re-scrambling the data at the boundary address B. With the keys being stored in the key storage 510, the selector logic 150 is configured to request the selected key from the key storage 510 on behalf of the memory controller 130. If the request is approved, the key storage 510 loads the key to the memory controller 130. When the memory controller 130 receives the key (e.g., the first key 151) from the key storage 510, the memory controller 130 descrambles data read from the boundary address B and continues the rekeying process. Likewise, other available keys may be requested and loaded to the memory controller from the key storage 510. In this manner, the key storage 510 provides an additional layer of security.

[0066] FIG. 6 illustrates one embodiment of a method 600 associated with key rotation and rekeying data as described previously. Method 600 is a computer implemented process that performs rekeying of data during idle cycles of memory operations, such that the rekeying process is interrupted and paused to allow memory operations to be performed. The rekeying process is then resumed during the next idle cycle. In other words, normal memory access is performed in the computer, and the rekeying operations are embedded or inserted in between memory operations during idle cycles. Thus, the rekeying process is not limited to being performed during a system reset or power-up, where the entire memory is rekeyed while memory operations are not performed.

[0067] Initially, at 610, a boundary address is set at which the rekeying will start. For example, the rekeying starts at the lowest memory address (e.g., 0) and progresses to the highest memory address. After rekeying begins and is subsequently paused, the boundary address indicates where the rekeying will resume.

[0068] At 620, the method determines if the memory controller is idle. As previously explained, the memory controller is idle when there are no memory access requests pending. In one embodiment, a notice from the memory controller may be received that indicates an idle cycle, or a memory request queue may be checked to see if the queue is empty. If the controller is idle, then the method goes to block 630, where the rekeying process starts or resumes, and the data at the boundary address is read from the memory.

[0069] At 640, the appropriate key (a first key) is selected from a set of available keys to descramble the data. The first key is the key that was originally used to scramble the data and is now used to descramble the data to its original form. At 650, a new scrambling key (a second key) is selected and the data is re-scrambled using the second key. In general, scrambling and descrambling use one or more functions in combination with the key to encrypt or decrypt data. The re-scrambled data is then stored back to its address in memory.

[0070] At 660, the boundary address is incremented to the next memory address. The rekeying process determines whether or not the entire memory has been re-scrambled. This can be determined by comparing the current boundary address to the last memory address. If the entire memory has been re-scrambled, then the old key (the first key) is no longer needed and is discarded. The new key (the second key) is set as the new master key for the memory until it is replaced by a new key. The rekeying process then ends and waits until the next rekeying interval or condition occurs to once again rekey the memory.

[0071] If at 660 the rekeying process is incomplete, then the method returns to 620 and repeats if the memory controller is still idle. If the controller is not idle, then the method goes to 670, where the rekeying process is paused, and at 680, one or more memory access operations are performed. The process then returns to 620 and repeats. The overall structure is sketched below.
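Method 600 can be condensed into a loop. The stub functions below are trivial stand-ins so the sketch is self-contained; their names and the one-address-per-iteration granularity are assumptions for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Trivial stand-ins; a real system would wire these to the memory
 * controller and the rekeying logic. */
static bool controller_is_idle(void)          { return true; }
static void service_memory_requests(void)     { /* blocks 670/680 */ }
static uint64_t rekey_at_boundary(uint64_t b) { return b + 1; /* 630-650 */ }

/* Method 600 as a loop: rekey one address per idle cycle, pause to
 * service memory requests, and stop once the boundary passes the end. */
static void method_600(uint64_t first_addr, uint64_t last_addr)
{
    uint64_t boundary_b = first_addr;            /* block 610 */
    while (boundary_b <= last_addr) {            /* block 660 check */
        if (controller_is_idle())                /* block 620 */
            boundary_b = rekey_at_boundary(boundary_b);
        else
            service_memory_requests();           /* rekeying paused */
    }
    /* rekeying complete: discard the old key, promote the new one */
}
```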
[0070] At 660, the boundary address is incremented to the next memory address. The rekeying process determines whether the entire memory has been re-scrambled. This can be determined by comparing the current boundary address to the last memory address. If the entire memory has been re-scrambled, then the old key (the first key) is no longer needed and is discarded. The new key (the second key) is set as the new master key for the memory until it is replaced by a subsequent key. The rekeying process ends and waits until the next rekeying interval or condition occurs to once again rekey the memory. [0071] If at 660 the rekeying process is incomplete, then the method returns to 620 and repeats if the memory controller is still idle. If the controller is not idle, then the method goes to 670 where the rekeying process is paused and, at 680, one or more memory access operations are performed. The process then returns to 620 and repeats. [0072] In one embodiment, the memory access operations are given priority over rekeying operations such that the rekeying is interrupted/paused in order to process access requests. In another embodiment, thresholds may be set to allow a certain number of access requests to process and then a certain number of rekeying operations to process. In either case, since the memory is not rekeyed in its entirety all at once, the memory will have two areas of data that are scrambled with different keys. The areas are defined by the boundary address. As such, memory access operations are dependent upon the area that is being accessed and the corresponding key. This has been described previously with reference to system diagrams and, after the sketch below, is described with reference to the flow chart shown in FIG. 7.
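The idle-cycle loop of method 600 (blocks 610-680) can be sketched as follows. This is a minimal, hypothetical Python rendering that reuses the function_b/function_c helpers from the earlier sketch; the controller interface (is_idle(), serve_one_request()) and the representation of the memory 120 as a word array are assumptions made for illustration only.

```python
def rekey_during_idle_cycles(memory, controller, first_key, second_key):
    boundary = 0                                    # 610: start at the lowest address
    while boundary < len(memory):                   # 660: until the whole memory is done
        if not controller.is_idle():                # 620: idle check
            controller.serve_one_request()          # 670/680: pause rekeying, serve access
            continue
        word = memory[boundary]                     # 630: read data at the boundary
        plain = function_c(first_key, boundary, word)               # 640: descramble (old key)
        memory[boundary] = function_b(second_key, boundary, plain)  # 650: re-scramble (new key)
        boundary += 1                               # 660: advance the boundary address
    # Entire memory rekeyed: the first key may be discarded and the
    # second key becomes the master key.
    return second_key
```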
[0073] FIG. 7 illustrates one embodiment of a method 700 associated with processing memory requests in a memory system that implements the key rotation techniques described previously (for example, as part of block 680 of FIG. 6). Since rekeying of data may be interrupted and incomplete as previously described, different portions of the memory may be scrambled with different keys. Method 700 will be described with reference to the memory having two portions that are scrambled with two different keys due to the rekeying process not being entirely completed. Accordingly, memory access requests are processed based on the location of the address in the memory and the corresponding key. The data at the address of the access request needs to be descrambled with the key that was used to scramble it in order to correctly retrieve the original data. [0074] At 710, the method initiates by receiving a memory access request (e.g., a read request) for data at an address in the memory. To provide accurate data, the data at the address needs to be descrambled before being returned as the result of the access request. The appropriate key is determined for the portion of memory. At 720, the boundary address is retrieved from, for example, the boundary register. As previously explained, the boundary address indicates the current location of the rekeying process and thus indicates the boundary between a first portion of the memory (the rekeyed portion with the new key) and a second portion (the portion that still needs to be rekeyed, where the data remains scrambled with the old key). Accordingly, the boundary address can be used to determine whether the address in the access request is in the first portion of the memory or the second portion of the memory. Thus, the boundary address is used to identify which key is used to descramble the data in the memory. [0075] At 730, it is determined whether the address is in the first portion of the memory. If the requested address is in the first portion of the memory (e.g., below the boundary address), the method goes to block 740 where the first key is selected to descramble the data. At 750, the first key is provided to the memory controller to descramble the data stored at the requested address. At 760, the data at the requested address is descrambled with the provided key and the data is returned to the requestor. [0076] Alternatively, if at 730 it is determined that the address is in the second portion of the memory (e.g., equal to or higher than the boundary address), the method goes to 770 where the second key is selected to descramble the data. At 780, the second key is provided to the memory controller. Then at 760, the data is descrambled with the second key, and the data is returned to the requestor. The process is repeated for the next memory request. [0077] If the memory access request is a write request, the process is slightly different since the data to be written is first scrambled with the appropriate key depending on which portion of memory the data is being written to. For example, the method determines which portion of memory the requested write address is in based on the boundary address (e.g., by comparing the requested write address to the boundary address). If the requested write address is in the first portion, then the data to be written is scrambled with the first key; otherwise, the data is scrambled with the second key. Then the scrambled data is written to the memory location at the requested write address. In one embodiment, the memory controller is configured to perform these functions. [0078] If a rekeying process of the memory is still pending during the process of method 700, then when the memory controller becomes idle and there are no access requests to process, the method 600 is invoked and the rekeying process is resumed for the next data at the boundary address.
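A minimal sketch of the boundary-based key selection of method 700 (blocks 710-780), again using the hypothetical helpers from the earlier sketches. Note that, in method 700's labeling, the "first key" is the key protecting the first (already rekeyed) portion below the boundary; the mapping of keys to portions below follows that labeling and is an illustrative assumption.

```python
def read_word(memory, boundary, key_1, key_2, address):
    """Read path: compare to the boundary register (720/730), pick the key
    for the portion, descramble, and return the data (740-760)."""
    key = key_1 if address < boundary else key_2
    return function_c(key, address, memory[address])

def write_word(memory, boundary, key_1, key_2, address, plaintext):
    """Write path: scramble with the key of the portion being written,
    then store the scrambled data at the requested address."""
    key = key_1 if address < boundary else key_2
    memory[address] = function_b(key, address, plaintext)
```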
[0079] With the disclosed techniques, data stored in a memory does not have to be completely rekeyed before the data of the memory can be accessed. Instead, the data stored in the memory is intermittently rekeyed in portions while memory operations are permitted to continue during the rekeying process. Therefore, the key rotation does not shut down the memory, nor does it need to be performed only at system reboot/reset. [0080] The following includes definitions of selected terms employed herein. The definitions include various examples and/or forms of components that fall within the scope of a term and that may be used for implementation. The examples are not intended to be limiting. Both singular and plural forms of terms may be within the definitions. [0081] References to "one embodiment", "an embodiment", "one example", "an example", and so on, indicate that the embodiment(s) or example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase "in one embodiment" does not necessarily refer to the same embodiment, though it may. [0082] "Computer storage medium" as used herein is a non-transitory medium that stores instructions configured to perform any of the disclosed functions, and/or data in combination therewith. A computer storage medium may take forms, including, but not limited to, non-volatile media and volatile media. Non-volatile media may include, for example, optical disks, magnetic disks, and so on. Volatile media may include, for example, semiconductor memories, dynamic memory, and so on. Common forms of a computer storage medium may include, but are not limited to, a floppy disk, a flexible disk, a hard disk, a magnetic tape, other magnetic medium, an ASIC, a compact disk, other optical medium, a RAM, a ROM, a memory chip or card, a memory stick, and other electronic media that can store computer instructions and/or data. [0083] "Logic" as used herein includes a computer or electrical hardware component(s), firmware, a non-transitory computer storage medium that stores instructions, and/or combinations of these components configured to perform any of the functions or actions disclosed, and/or to cause a function or action from another logic, method, and/or system. Logic may include a microprocessor controlled by an algorithm configured to perform any of the disclosed functions, a discrete logic (e.g., ASIC), an analog circuit, a digital circuit, a programmed logic device, a memory device containing instructions that when executed perform an algorithm, and so on. Logic may include one or more gates, combinations of gates, or other circuit components. Where multiple logics are described, it may be possible to incorporate the multiple logics into one physical logic component. Similarly, where a single logic unit is described, it may be possible to distribute that single logic unit between multiple physical logic components. [0084] While, for purposes of simplicity of explanation, the illustrated methodologies are shown and described as a series of blocks, the methodologies are not limited by the order of the blocks, as some blocks can occur in different orders and/or concurrently with other blocks relative to what is shown and described. Moreover, less than all the illustrated blocks may be used to implement an example methodology. Blocks may be combined or separated into multiple components. Furthermore, additional and/or alternative methodologies can employ additional actions that are not illustrated in blocks. [0085] To the extent that the term "includes" or "including" is employed in the detailed description or the claims, it is intended to be inclusive in a manner similar to the term "comprising" as that term is interpreted when employed as a transitional word in a claim. [0086] While example systems, methods, and so on have been illustrated by describing examples, and while the examples have been described in considerable detail, it is not the intention of the applicants to restrict or in any way limit the scope of the appended claims to such detail. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the systems, methods, and so on described herein. Therefore, the disclosure is not limited to the specific details, the representative apparatus, and illustrative examples shown and described. Thus, this disclosure is intended to embrace alterations, modifications, and variations that fall within the scope of the appended claims.
Technologies for computing context replay include a computing device having a persistent memory and a volatile memory. The computing device creates multiple snapshots that are each indicative of a user's computing context at a corresponding sync point. The snapshots may include metadata created in response to system events, memory snapshots stored in a virtual machine, and/or video data corresponding to the computing context. At least a portion of the snapshots is stored in the persistent memory. The computing device presents a timeline user interface based on the snapshots. The timeline includes multiple elements that are associated with corresponding sync points. The timeline elements may visually indicate a salience value that has been determined for each corresponding sync point. In response to a user selection of a sync point, the computing device activates a computing context corresponding to the snapshot for the selected sync point. Other embodiments are described and claimed.
WHAT IS CLAIMED IS:
1. A compute device for user compute context replay, the compute device comprising:
snapshot generator circuitry to create a plurality of snapshots, wherein each of the snapshots is indicative of a user compute context at a corresponding point in time;
snapshot browser circuitry to (i) present a timeline user interface based on the plurality of snapshots, wherein the timeline user interface includes a plurality of elements, and wherein each element is associated with a corresponding sync point, wherein each sync point corresponds to a point in time and (ii) receive a user selection indicative of a first selected sync point in response to a presentation of the timeline user interface, wherein the first selected sync point corresponds to a first selected snapshot of the plurality of snapshots; and
timeline coordinator circuitry to activate a first user compute context that corresponds to the first selected snapshot in response to a receipt of the user selection.
2. The compute device of claim 1, wherein:
to create the plurality of snapshots comprises to log metadata associated with user-interaction events of the user compute context; and
to activate the user compute context comprises to replay the metadata of the first selected snapshot.
3. The compute device of claim 1, wherein to create the plurality of snapshots comprises to receive metadata indicative of the user compute context from a remote compute device.
4. The compute device of claim 1, wherein:
to create the plurality of snapshots comprises to create a virtual machine memory snapshot that corresponds to each of the plurality of snapshots; and
to activate the user compute context comprises to load the virtual machine memory snapshot.
5. The compute device of claim 1, wherein:
to create the plurality of snapshots comprises to capture video data indicative of the user compute context; and
to activate the user compute context comprises to display the video data corresponding to the first selected snapshot.
6. The compute device of claim 1, wherein to create the plurality of snapshots comprises to apply security restrictions to the user compute context.
7. The compute device of claim 1, further comprising a persistent memory and a volatile memory, wherein:
the snapshot generator circuitry is further to store at least a part of the plurality of snapshots in the persistent memory; and
to activate the first user compute context that corresponds to the first selected snapshot comprises to load the first user compute context from the persistent memory into the volatile memory.
8. The compute device of claim 7, wherein the timeline coordinator circuitry is further to:
select a second selected snapshot of the plurality of snapshots based on a relationship between the second selected snapshot and the first selected snapshot; and
load a second user compute context that corresponds to the second selected snapshot from the persistent memory into the volatile memory in response to an activation of the first user compute context.
9. The compute device of claim 1, wherein to activate the first user compute context comprises to transfer data from the first user compute context to a current user compute context.
10. The compute device of any of claims 1-9, wherein each element of the timeline user interface comprises a visual representation of a corresponding snapshot of the plurality of snapshots.
11. The compute device of claim 10, wherein the snapshot browser circuitry is further to:
analyze the plurality of snapshots to determine a salience value associated with the sync point corresponding to each of the plurality of snapshots, wherein to analyze the plurality of snapshots to determine the salience value associated with the sync point that corresponds to each of the plurality of snapshots comprises to analyze the visual representation of the corresponding snapshot to determine visual distinctiveness of the visual representation;
wherein to present the timeline user interface comprises to display a visual indication of the salience value associated with the sync point that corresponds to each of the plurality of elements of the timeline user interface.
12. The compute device of claim 10, wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a characteristic of a document of the corresponding snapshot, wherein the characteristic of the document comprises an associated application, visual distinctiveness of elements within the document, or a related topic to the document.
13. The compute device of claim 10, wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a usage factor of the corresponding snapshot, wherein the usage factor comprises an elapsed edit time, an elapsed time with window focus, or a document share attribute.
14. The compute device of claim 10, wherein the snapshot browser circuitry is further to:
receive a visual search term in response to the presentation of the timeline user interface; and
perform a visual search of the plurality of snapshots based on the visual search term.
15. The compute device of any of claims 1-9, wherein to receive the user selection indicative of the first selected sync point comprises to receive a transport control command.
16. A method for user computing context replay, the method comprising:
creating, by a computing device, a plurality of snapshots, wherein each of the snapshots is indicative of a user computing context at a corresponding point in time;
presenting, by the computing device, a timeline user interface based on the plurality of snapshots, wherein the timeline user interface includes a plurality of elements, and wherein each element is associated with a corresponding sync point, wherein each sync point corresponds to a point in time;
receiving, by the computing device, a user selection indicative of a first selected sync point in response to presenting the timeline user interface, wherein the first selected sync point corresponds to a first selected snapshot of the plurality of snapshots; and
activating, by the computing device, a first user computing context corresponding to the first selected snapshot in response to receiving the user selection.
17. The method of claim 16, wherein:
creating the plurality of snapshots comprises logging metadata associated with user-interaction events of the user computing context; and
activating the user computing context comprises replaying the metadata of the first selected snapshot.
18. The method of claim 16, wherein creating the plurality of snapshots comprises creating a virtual machine memory snapshot corresponding to each of the plurality of snapshots.
19. The method of claim 16, further comprising:
storing, by the computing device, at least a part of the plurality of snapshots in a persistent memory of the computing device;
wherein activating the first user computing context corresponding to the first selected snapshot comprises loading the first user computing context from the persistent memory into a volatile memory of the computing device.
20. The method of claim 16, wherein activating the first user computing context comprises transferring data from the first user computing context to a current user computing context.
21. The method of claim 16, wherein each element of the timeline user interface comprises a visual representation of a corresponding snapshot of the plurality of snapshots.
22. The method of claim 21, further comprising:
analyzing, by the computing device, the plurality of snapshots to determine a salience value associated with the sync point corresponding to each of the plurality of snapshots;
wherein presenting the timeline user interface comprises displaying a visual indication of the salience value associated with the sync point corresponding to each of the plurality of elements of the timeline user interface.
23. A computing device comprising:
a processor; and
a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of claims 16-22.
24. One or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of claims 16-22.
25. A computing device comprising means for performing the method of any of claims 16-22.
TECHNOLOGIES FOR COMPUTING CONTEXT REPLAY WITH VISUAL SEARCHING
CROSS-REFERENCE TO RELATED U.S. PATENT APPLICATION
[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 14/866,179, entitled "TECHNOLOGIES FOR COMPUTING CONTEXT REPLAY WITH VISUAL SEARCHING," which was filed on September 25, 2015.
BACKGROUND
[0002] Typical computing systems include volatile memory such as random-access memory (RAM) coupled to persistent data storage such as hard disk drives or solid-state drives. Volatile memory requires a power source for operation; the contents of volatile memory may be lost when the power supply to a computing system is turned off. Persistent, or nonvolatile, storage retains its contents while power to the computing system is turned off.
[0004] Some computing systems include persistent memory, which may be byte-addressable, high-performance, nonvolatile memory. Persistent memory may provide performance comparable to traditional volatile random access memory (RAM) while also providing data persistence. Computing systems may use persistent memory for program execution and data storage.
[0005] Current backup and versioning systems may allow a user to recover or revert to previous versions of a file. Typically, to restore data, the user launches a specialized restore program and then selects the data to be restored (e.g., a file, a directory, a previous system configuration, etc.). Restoring the data may be time-consuming or intrusive, and may require rebooting the system. For example, Microsoft® Windows™ allows the user to restore a previous system configuration (e.g., the operating system, OS drivers, and settings) by selecting from one or more saved checkpoints in a specialized restore tool. As another example, Apple® Time Machine® provides a visual navigation system allowing a user to browse and select previous versions of a file that were saved to disk. Lifestreaming services such as the former Jaiku service recorded a messaging stream across multiple web applications. Jaiku did not store local interactions with applications, documents, or files.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
[0007] FIG. 1 is a simplified block diagram of at least one embodiment of a computing device for user computing context replay;
[0008] FIG. 2 is a simplified block diagram of at least one embodiment of an environment of the computing device of FIG. 1;
[0009] FIG. 3 is a simplified flow diagram of at least one embodiment of a method for user computing context replay that may be executed by the computing device of FIGS. 1 and 2;
[0010] FIG. 4 is a diagram illustrating at least one embodiment of a user interface of the computing device of FIGS. 1-2;
[0011] FIG. 5 is a schematic diagram illustrating memory structures that may be established by the computing device of FIGS. 1 and 2; and
[0012] FIG. 6 is a schematic diagram illustrating additional memory structures that may be established by the computing device of FIGS. 1 and 2.
DETAILED DESCRIPTION OF THE DRAWINGS
[0013] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
[0014] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
[0015] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or nonvolatile memory, a media disc, or other media device).
[0016] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
[0017] Referring now to FIG. 1, in one embodiment, a system 100 includes a computing device 102 having both volatile and persistent memory. The system 100 may include additional mobile computing devices 104, which may be in communication with the computing device 102 over a network 106.
In use, as discussed in more detail below, the computing device 102 creates multiple snapshots of a user's computing context as the user performs computing tasks. The computing context may include application state (e.g., document contents, application events, window positions, etc.) as well as other contextual data such as input device data, sensor data, network resources, and other data. The computing context may include contextual data generated by the computing device 102 as well as contextual data generated by other devices in addition to the computing device 102, such as the mobile computing devices 104. The computing device 102 presents a visual timeline representation of the stored snapshots, and the user may use the visual timeline to search, filter, navigate, and otherwise select previous snapshots. The computing device 102 may restore a previous computing context from a snapshot in response to a user selection. Thus, the system 100 may allow a user to revert to previous computing contexts simply and completely, without being limited to reverting file versions. Additionally, browsing and restoration of previous data may be integrated with the user's normal workflow. Also, by using a visual timeline with visual searching and indexing, the system 100 may take advantage of human visual memory. [0018] The computing device 102 may be embodied as any type of computing device capable of performing the functions described herein, including, without limitation, a computer, a laptop computer, a notebook computer, a tablet computer, a smartphone, a mobile computing device, a wearable computing device, a multiprocessor system, a server, a rack-mounted server, a blade server, a network appliance, a web appliance, a distributed computing system, a processor-based system, and/or a consumer electronic device. As shown in FIG. 1, the computing device 102 includes a processor 120, an input/output subsystem 122, a memory 124, a data storage device 130, and communication circuitry 132. Of course, the computing device 102 may include other or additional components, such as those commonly found in a computer (e.g., various input/output devices), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, the memory 124, or portions thereof, may be incorporated in one or more processors 120 in some embodiments. [0019] The processor 120 may be embodied as any type of processor capable of performing the functions described herein. For example, the processor 120 may be embodied as a single- or multi-core processor(s), digital signal processor, microcontroller, or other processor or processing/controlling circuit. Similarly, the memory 124 may be embodied as any type of volatile or nonvolatile memory or data storage capable of performing the functions described herein. In operation, the memory 124 may store various data and software used during operation of the computing device 102 such as operating systems, applications, programs, libraries, and drivers. [0020] The memory 124 further includes volatile memory 126 and persistent memory 128. The volatile memory 126 may be embodied as traditional RAM, meaning that any data contained in the volatile memory 126 is lost when power is removed from the computing device 102 and/or the volatile memory 126. The persistent memory 128 may be embodied as any byte-addressable, high-performance, nonvolatile memory.
For example, the persistent memory 128 may be embodied as battery-backed RAM, phase-change memory, spin-transfer torque RAM, resistive RAM, memristor-based memory, or other types of persistent memory. The persistent memory 128 may include programs and data similar to the volatile memory 126; however, the contents of the persistent memory 128 are retained for at least some period of time when power is removed from the computing device 102 and/or the persistent memory 128. In some embodiments, the memory 124 may include only persistent memory 128; however, in those embodiments a portion of the persistent memory 128 may be used to store volatile data similar to volatile memory 126.[0021] The memory 124 is communicatively coupled to the processor 120 via the I/O subsystem 122, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 120, the memory 124, and other components of the computing device 102. For example, the I/O subsystem 122 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 122 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with the processor 120, the memory 124, and other components of the computing device 102, on a single integrated circuit chip.[0022] The data storage device 130 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Access to the data storage device 130 may be much slower than to the persistent memory 128. Additionally, the data storage device 130 may be accessed through a block device, file system, or other non-byte-addressable interface.[0023] The communication circuitry 132 of the computing device 102 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the computing device 102, the mobile computing devices 104 and/or other remote devices over the network 106. The communication circuitry 132 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.[0024] In some embodiments, the computing device 102 may also include a display 134 and one or more peripheral devices 136. The display 134 may be embodied as any type of display capable of displaying digital information such as a liquid crystal display (LCD), a light emitting diode (LED), a plasma display, a cathode ray tube (CRT), or other type of display device. As described below, the display 134 may be used to display a user interface for browsing, searching, and otherwise navigating stored snapshots or to display other information to the user of the computing device 102.[0025] The peripheral devices 136 may include any number of additional input/output devices, interface devices, and/or other peripheral devices. 
For example, in some embodiments, the peripheral devices 136 may include a touch screen, graphics circuitry, keyboard, mouse, speaker system, microphone, network interface, and/or other input/output devices, interface devices, and/or peripheral devices. [0026] Each of the mobile computing devices 104 may be configured to provide user computing context data to the computing device 102 as described further below. Each mobile computing device 104 may be embodied as any type of computation or computer device capable of performing the functions described herein, including, without limitation, a computer, a smart phone, an embedded computing device, a tablet computer, a laptop computer, a notebook computer, a wearable computing device, an in-vehicle infotainment system, a multiprocessor system, a network appliance, a web appliance, a distributed computing system, a processor-based system, and/or a consumer electronic device. Thus, the mobile computing device 104 includes components and devices commonly found in a smart phone or similar computing device, such as a processor, an I/O subsystem, a memory, a data storage device, and/or communication circuitry. Those individual components of the mobile computing device 104 may be similar to the corresponding components of the computing device 102, the description of which is applicable to the corresponding components of the mobile computing device 104 and is not repeated herein so as not to obscure the present disclosure. Additionally, although illustrated in FIG. 1 as being in communication with multiple mobile computing devices 104, it should be appreciated that the computing device 102 may also communicate with other computing devices such as servers, desktop computers, workstations, or other primarily stationary devices. [0027] As discussed in more detail below, the computing device 102 and the mobile computing devices 104 may be configured to transmit and receive data with each other and/or other devices of the system 100 over the network 106. The network 106 may be embodied as any number of various wired and/or wireless networks. For example, the network 106 may be embodied as, or otherwise include, a wired or wireless local area network (LAN), a wired or wireless wide area network (WAN), a cellular network, and/or a publicly-accessible, global network such as the Internet. As such, the network 106 may include any number of additional devices, such as additional computers, routers, and switches, to facilitate communications among the devices of the system 100. [0028] Referring now to FIG. 2, in an illustrative embodiment, the computing device 102 establishes an environment 200 during operation. The illustrative environment 200 includes a user computing context 202, a snapshot generator module 204, a snapshot browser module 208, and a timeline coordinator module 212. The various modules of the environment 200 may be embodied as hardware, firmware, software, or a combination thereof. For example, one or more of the modules, logic, and other components of the environment 200 may form a portion of, or otherwise be established or executed by, the processor 120 or other hardware components of the computing device 102.
As such, in some embodiments, any one or more of the modules of the environment 200 may be embodied as a circuit or collection of electrical devices (e.g., a snapshot generator circuit 204, a snapshot browser circuit 208, etc.). [0029] The user computing context 202 may be embodied as any data indicative of the current computing environment of the user, including open applications, documents, windows, and other user interface elements. The user computing context 202 may include data relating to applications and content accessed by the user on the computing device 102, as well as on remote devices such as the mobile computing devices 104. [0030] The snapshot generator module 204 is configured to create multiple snapshots 206. Each of the snapshots 206 is indicative of the user computing context 202 at a corresponding sync point (that is, a corresponding point in time). The snapshot generator module 204 is further configured to store at least a part of the snapshots 206 in the persistent memory 128. The snapshot generator module 204 may be configured to create snapshots 206 by logging metadata associated with user-interaction events of the user computing context 202, creating virtual machine memory snapshots, or capturing video data indicative of the user computing context 202. The temporal resolution of the snapshots 206 may decrease for snapshots 206 that are further back in time. [0031] The snapshot browser module 208 is configured to present a timeline user interface based on the snapshots 206. The timeline user interface includes several user interface elements 210 such as timeline scrubbers, thumbnails, date pickers, or other elements. Each user interface element 210 is associated with a corresponding sync point and thus with a corresponding snapshot 206. Each user interface element 210 of the timeline user interface may include a visual representation such as a thumbnail image or video of the corresponding snapshot 206. The snapshot browser module 208 is further configured to receive a user selection that is indicative of a selected sync point. In some embodiments, the snapshot browser module 208 may be configured to analyze the snapshots 206 to determine salience values associated with the sync points corresponding to the snapshots 206. In those embodiments, the timeline user interface may display a visual indication of the salience value associated with each sync point. In some embodiments, the snapshot browser module 208 may be configured to stream or pre-load snapshots 206 based on user navigation commands. The snapshot browser module 208 may be configured to select a second snapshot 206 based on the selected snapshot 206 (such as an adjacent snapshot 206) and then load the second snapshot 206 from the persistent memory 128 into the volatile memory 126. [0032] The timeline coordinator module 212 is configured to activate the user computing context 202 that corresponds to the snapshot 206 selected by the user. Activating the user computing context 202 may include loading the computing context 202 from the persistent memory 128 into the volatile memory 126. The timeline coordinator module 212 may be configured to activate the user computing context 202 by replaying the metadata stored in the selected snapshot 206, loading the virtual machine memory snapshot of the selected snapshot 206, or displaying the video data corresponding to the selected snapshot 206. In some embodiments, the timeline coordinator module 212 may be configured to transfer data from the selected user computing context 202 to a currently executing user computing context 202.
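The snapshot forms described above suggest a simple record layout. The following is a minimal, hypothetical Python sketch of one way a snapshot 206 and its sync point could be represented; all field names are illustrative assumptions rather than structures from the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Snapshot:
    """One snapshot 206: the user computing context at one sync point."""
    sync_point: datetime                         # the point in time this snapshot represents
    events: list = field(default_factory=list)   # logged user-interaction metadata, if any
    vm_image: Optional[bytes] = None             # serialized VM memory snapshot, if any
    video: Optional[bytes] = None                # screencast/screenshot data, if any
    salience: float = 0.0                        # weighted importance, computed later
```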
[0033] Referring now to FIG. 3, in use, the computing device 102 may execute a method 300 for user computing context replay. The method 300 begins in block 302, in which the computing device 102 creates a snapshot 206 of the current user computing context 202 and stores at least a part of the snapshot 206 in the persistent memory 128. Each snapshot 206 corresponds to the user computing context 202 at a particular sync point (i.e., a particular date and time). The user computing context 202 may include any state data, input data, sensor data, or other data indicative of the current computing environment of the user, including open applications, documents, windows, and other user interface elements. Thus, the user computing context 202 may include the applications currently in use by the user, sensor data received by the computing device 102, location data, motion sensing data, document and file versions, operating system document or window focus, and other contextual information. The computing device 102 may create snapshots 206 periodically, continually, or responsively. For example, the computing device 102 may create snapshots 206 in response to certain system events or on any other appropriate schedule. The temporal resolution of the snapshots 206 may decrease for sync points further back in time. Thus, recent sync points could be fine-grained (e.g., occurring seconds apart), while, as the timeline gets older (or the events less interesting), the sync points may be separated by larger amounts of time (e.g., hours or days apart). The computing device 102 may store the snapshot 206 using any appropriate format. Additionally, although in the illustrative embodiment at least a part of the snapshot 206 is stored in the persistent memory 128, it should be understood that in some embodiments the snapshot 206 may be stored in the volatile memory 126, the data storage device 130, or another storage area of the computing device 102. [0034] In some embodiments, in block 304 the computing device 102 may store the snapshot 206 by logging metadata corresponding to events and/or actions generated in the user's computing environment. For example, the computing device 102 may log metadata in response to application events, file system events, or other user-interaction events. The events may be generated and/or monitored using one or more application programming interfaces (APIs) provided by an operating system or other operating environment of the computing device 102. For example, in some embodiments the computing device 102 may monitor for certain Windows™ API calls and store metadata in response to those API calls. The logged metadata may describe attributes of the user's computing environment including open documents, window placement, window focus, document versions, location, extracted text from individual documents (including the use of optical character recognition to generate text), extracted images from individual documents, or other attributes. As described further below, to recreate a particular user computing context 202, the logged metadata may be replayed or otherwise applied to a known previous snapshot 206. The logged metadata may be stored in a database, which may reside in the persistent memory 128, in the volatile memory 126, and/or across both the persistent memory 128 and the volatile memory 126.
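The metadata logging of block 304 and the replay described above can be sketched briefly. This is a minimal, hypothetical Python rendering: the event schema and the apply_event() helper are illustrative assumptions, not APIs from the disclosure.

```python
import time

event_log = []  # could reside in persistent memory or a database

def log_event(kind: str, **attrs) -> None:
    """Record one user-interaction event with a timestamp (block 304)."""
    event_log.append({"t": time.time(), "kind": kind, **attrs})

def replay(baseline_context: dict, until: float) -> dict:
    """Recreate the user computing context at time `until` by replaying
    logged events on top of a known baseline snapshot (block 334)."""
    context = dict(baseline_context)
    for event in event_log:
        if event["t"] > until:
            break
        apply_event(context, event)
    return context

def apply_event(context: dict, event: dict) -> None:
    # Illustrative only: track window focus and open documents.
    if event["kind"] == "focus":
        context["focused_window"] = event["window"]
    elif event["kind"] == "open":
        context.setdefault("open_documents", []).append(event["path"])
```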
[0035] In some embodiments, in block 306 the computing device 102 may receive computing context 202 information from one or more remote devices, such as a mobile computing device 104. The remote device context information may be indicative of events or user actions performed by the user with the remote device, including particular interactions with applications, documents or other content, messaging actions, as well as other factors such as device location, sensor data, connected Bluetooth devices, and other peripherals or input methods. The remote device events may correspond to one or more local applications of the computing device 102 that are capable of accessing the same data. For example, the computing device 102 may receive context information from a mobile computing device 104 indicating that the user has interacted with a particular image or other content, generated a message, or performed another interactive action. As another example, the computing device 102 may receive context information from an embedded connected device such as an Amazon™ Dash Button™, an Internet of Things (IoT) controller, or other embedded device. As still another example, the computing device 102 may receive sensor data from a remote device such as a fitness tracker, eye tracker, smart watch, or other sensor-enabled remote device. [0036] In some embodiments, in block 308 the computing device 102 may store the snapshot 206 by creating a snapshot of the contents of the volatile memory 126 and storing the snapshot 206 in a virtual machine. The snapshot 206 thus may represent the entire contents of memory associated with the user computing environment and/or one or more applications of the user computing environment. The snapshot 206 may be created using a technique similar to the technique used by hypervisors to live-migrate virtual machines between hosts. For example, the computing device 102 may shim available virtual machine memory with write barriers to mark the memory as dirty. When the memory region has been marked dirty, its contents may be incrementally stored to the snapshot 206. Additionally, in some embodiments the snapshot 206 may also store a snapshot of disk contents. The snapshot 206 and/or containing virtual machine may be stored in the persistent memory 128. As described further below, to recreate a particular user computing context 202, a hypervisor, virtual machine monitor, or other supervisory component of the computing device 102 may restore the contents of memory from the snapshot 206 to the volatile memory 126. [0037] In some embodiments, in block 310, the computing device 102 may capture video or other image data of the user computing context 202 in the snapshot 206. For example, the snapshot 206 may be embodied as or otherwise include a visual representation of the user's computing environment, such as a video screencast, still image screenshot, or other image data. [0038] It should be understood that in some embodiments, the computing device 102 may store the snapshot 206 using any combination of metadata, memory snapshots, and video as described above in connection with blocks 304 through 310. For example, in some embodiments, the computing device 102 may store metadata describing incremental changes from a baseline memory snapshot. When the changes from the baseline snapshot become too large, the computing device 102 may create a new baseline memory snapshot.
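The write-barrier/dirty-page capture of block 308 resembles the pre-copy phase of virtual machine live migration. The following is a minimal, hypothetical Python sketch of that idea; the page size, data structures, and restore helper are illustrative assumptions.

```python
PAGE_SIZE = 4096  # assumed page granularity for dirty tracking

class TrackedMemory:
    """Memory whose writes pass through a barrier that marks pages dirty."""
    def __init__(self, num_pages: int):
        self.pages = [bytes(PAGE_SIZE) for _ in range(num_pages)]
        self.dirty = set()

    def write(self, page_index: int, data: bytes) -> None:
        """Write barrier: record the page as dirty before updating it."""
        self.dirty.add(page_index)
        self.pages[page_index] = data

    def incremental_snapshot(self) -> dict:
        """Copy only pages written since the last snapshot (block 308)."""
        delta = {i: self.pages[i] for i in self.dirty}
        self.dirty.clear()
        return delta

def restore(memory: TrackedMemory, baseline: list, deltas: list) -> None:
    """Rebuild memory contents from a baseline plus incremental deltas."""
    memory.pages = list(baseline)
    for delta in deltas:
        for i, data in delta.items():
            memory.pages[i] = data
```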
[0039] In some embodiments, in block 312 the computing device 102 may cache one or more network resources such as web pages, files, or other remote data. The cached network resources may be stored for backup purposes, for example to be used if network resources cannot be accessed by the computing device 102 at a later time. [0040] In some embodiments, in block 314 the computing device 102 may apply security restrictions to the user computing context 202 and/or the corresponding snapshot 206 data. The computing device 102 may prevent sensitive data such as passwords, secure websites, or digital rights management (DRM)-protected content from being stored in the snapshot 206. For example, in some embodiments, the computing device 102 may identify applications and/or windows containing sensitive data and exclude those windows or applications from the snapshot 206. In some embodiments, the computing device 102 may use a protected audio-video path or other protected output to prevent sensitive data from being stored in the snapshot 206. [0041] In block 316, after storing a snapshot 206, the computing device 102 analyzes the snapshot(s) 206 to determine the salience of the sync points corresponding to each of the snapshots 206. Salience may be a measure of the weighted importance of a particular sync point to the user, determined by characteristics of documents and usage factors. In some embodiments, salience may be based on visual analysis of the snapshots 206. For example, the computing device 102 may identify the snapshots 206 that include visual milestones, such as visually distinctive elements of applications and/or documents (e.g., large and/or distinctive images as compared to large blocks of text), as particularly salient. As another example, salience may be based on usage data, such as changes in the application receiving operating system focus, document operations, or other events. [0042] In block 318, the computing device 102 presents a timeline user interface that allows a user to browse, search, or otherwise navigate through the snapshots 206 stored by the computing device 102. The timeline user interface includes several user interface elements 210. Each user interface element 210 corresponds to a particular sync point (i.e., a date and time) and thus may also correspond to a particular snapshot 206 stored for that sync point. In some embodiments, the computing device 102 may present the timeline interface in response to a user interface command, such as a particular button press, menu item selection, touch gesture, or other interface command. The computing device 102 may present the timeline user interface using any user interface modality. For example, the computing device 102 may present the timeline user interface as a graphical user interface using the display 134.
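The salience analysis of block 316 combines visual distinctiveness with usage factors. The following minimal Python sketch shows one plausible scoring scheme; the weights, the pixel-based distinctiveness proxy, and the snapshot fields are purely illustrative assumptions, as the disclosure does not specify a formula.

```python
def visual_distinctiveness(image_pixels) -> float:
    """Crude proxy: the fraction of pixels far from the mean intensity.
    Large, distinctive images score higher than uniform blocks of text."""
    if not image_pixels:
        return 0.0
    mean = sum(image_pixels) / len(image_pixels)
    return sum(1 for p in image_pixels if abs(p - mean) > 32) / len(image_pixels)

def salience(snapshot: dict) -> float:
    """Weighted importance of a sync point (weights are assumptions)."""
    score = 0.5 * visual_distinctiveness(snapshot.get("thumbnail", []))
    score += 0.2 * snapshot.get("focus_changes", 0)       # app focus switches
    score += 0.2 * snapshot.get("edit_seconds", 0) / 60   # time spent editing
    score += 0.1 * (1.0 if snapshot.get("shared") else 0.0)
    return score
```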
[0043] In some embodiments, in block 320 the computing device 102 may display visual representations corresponding to one or more of the stored snapshots 206. For example, each user interface element 210 may display one or more visual representations of the stored snapshots 206. The visual representation may be embodied as, for example, a full-sized or thumbnail image or video indicative of the user computing context 202 stored by the snapshot 206, an icon or other symbol indicative of an application, document, or other content of the user computing context 202, or another visual indicator of the user computing context 202. In some embodiments, in block 322, the computing device 102 may visually weight timeline user interface elements 210 to indicate the salience of the associated sync point. For example, the computing device 102 may render more-salient user interface elements 210 in a larger size. In addition to salience, the computing device 102 may also use the user interface elements 210 to indicate other information associated with the sync points. For example, the computing device 102 may visually indicate a current application or current document characteristics of a sync point, such as versions of a document or topics related to the document. As another example, the computing device 102 may visually indicate usage factors relating to the use of the computing device 102 at the sync point, such as whether a document has been edited, the time spent by the user editing a document, the time spent with the operating system focus on each window, the nature of sharing of a document (e.g., which persons a document has been shared with, when a document is shared, versions that are shared), or other usage factors. [0044] In block 324, the computing device 102 detects a user timeline command or other user selection. The user may interact with the timeline user interface to browse available snapshots 206, search or filter available snapshots 206, select previous snapshots 206 to activate, and/or otherwise navigate through the stored snapshots 206. As described further below, the user may navigate through the snapshots 206 by selecting a user interface element 210 associated with a particular snapshot 206, by selecting a particular date and time, by entering one or more search terms or other search commands, by selecting one or more media transport controls (e.g., play, pause, rewind, fast-forward, video scrubbing, or other controls), or by selecting any other appropriate user interface control. In block 326, the computing device 102 determines whether a user timeline command has been received. If not, the method 300 loops back to block 302 to continue creating snapshots 206 of the user computing context 202. Thus, the computing device 102 may continually, periodically, responsively, or otherwise repeatedly create and store snapshots 206 of the user's computing context 202. Referring back to block 326, if a user timeline command has been received, the method 300 advances to block 328. [0045] In block 328, the computing device 102 loads a selected snapshot 206 from the persistent memory 128 into the volatile memory 126 (if necessary). The snapshot 206 may be selected, for example, in response to a user selection of a sync point associated with the snapshot 206, a search command that matches the snapshot 206, or a media transport command that reaches the snapshot 206 (e.g., rewind, fast-forward, or timeline scrubbing). Once the snapshot 206 is loaded into the volatile memory 126, the user computing context 202 associated with that snapshot may be activated as described below. As described above, the snapshots 206 may be stored partially or completely in the persistent memory 128.
Because the persistent memory 128 is typically much faster than the data storage device 130, storing part or all of the snapshot 206 data in the persistent memory 128 may improve performance as compared to storing the snapshot 206 data in the data storage device 130. Of course, in some embodiments the computing device 102 may additionally or alternatively store part or all of the snapshot 206 data in the data storage device 130. The particular type and amount of snapshot 206 data stored in the volatile memory 126, the persistent memory 128, and/or the data storage device 130 may vary between embodiments. For example, data that is relatively small but requires low latency may preferably be stored in the volatile memory 126. [0046] In some embodiments, in block 330, the computing device 102 may also load one or more adjacent snapshots 206 from the persistent memory 128 into the volatile memory 126. For example, the computing device 102 may load snapshots 206 associated with sync points that are close in time to a currently selected snapshot 206. As another example, the computing device 102 may load snapshots 206 that are likely to be loaded in the future based on media transport controls activated by the user (e.g., a rewind control or a timeline scrubber control). By loading adjacent snapshots 206, the computing device 102 may reduce latency and otherwise improve performance associated with accessing the snapshots 206. [0047] In block 332, the computing device 102 activates the computing context 202 stored by the selected snapshot 206. Activating the selected snapshot 206 allows the user to view, edit, or otherwise interact with the applications, documents, and other content of the user computing context 202 associated with the selected snapshot 206. The particular interactions available with a computing context 202 may depend on the contents or format of the snapshot 206. The computing device 102 may use the selected snapshot 206 as the active computing environment of the computing device 102, for example by replacing the user's current desktop. As another example, the computing device 102 may replicate the applications, windows, and window arrangement of the snapshot 206, but the contents of associated documents may reflect the most recent version of those documents. For example, the timeline user interface may include a "switch to present" command to bring all viewed documents (including Web links) up to their current state while retaining the relative arrangement or other layout of the documents. In some embodiments, the active snapshot 206 may be presented with the timeline user interface, for example as a subwindow, frame, or other embedded view within the timeline user interface. The computing device 102 may record an additional snapshot 206 for the current computing context 202 prior to activating the selected snapshot 206, to allow the user to return to the current computing context 202. The computing device 102 may use any appropriate technique to activate the snapshot 206, based on the format of the snapshot 206. [0048] In some embodiments, in block 334 the computing device 102 may replay logged metadata associated with the selected snapshot 206. The computing device 102 may start from a known previous snapshot 206 of the user's computing context 202, for example stored as memory contents in a virtual machine, and then generate application events, file system events, or other events based on the logged metadata. In some embodiments, in block 336 the computing device 102 may load a memory snapshot 206 stored in a virtual machine. As described above, the computing device 102 may load the memory snapshot 206 using a process similar to live-migrating virtual machines between hosts. After loading the memory snapshot 206, the applications, documents, and other programs of the user computing context 202 of the selected snapshot 206 may begin to execute.
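Blocks 328 and 330 amount to a small caching and prefetching policy. The following minimal Python sketch illustrates one way it could work; the dictionary-based stores and the prefetch radius are illustrative assumptions.

```python
volatile_cache = {}  # sync_point -> snapshot data resident in volatile memory

def load_snapshot(persistent_store: dict, sync_point):
    """Load the selected snapshot into volatile memory if not already there (block 328)."""
    if sync_point not in volatile_cache:
        volatile_cache[sync_point] = persistent_store[sync_point]
    return volatile_cache[sync_point]

def prefetch_adjacent(persistent_store: dict, sync_point, radius: int = 1):
    """Pre-load snapshots whose sync points neighbor the selected one (block 330),
    reducing latency for rewind/scrub navigation."""
    ordered = sorted(persistent_store)
    i = ordered.index(sync_point)
    for j in range(max(0, i - radius), min(len(ordered), i + radius + 1)):
        load_snapshot(persistent_store, ordered[j])
```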
In some embodiments, in block 336 the computing device 102 may load a memory snapshot 206 stored in a virtual machine. As described above, the computing device 102 may load the memory snapshot 206 using a process similar to live-migrating virtual machines between hosts. After loading the memory snapshot 206, the applications, documents, and other programs of the user computing context 202 of the selected snapshot 206 may begin to execute.[0049] In some embodiments, in block 338 the computing device 102 may transfer data from the user computing context 202 of the selected snapshot 206 into the current computing context 202 of the computing device 102. The transferred data may include documents, applications, or other content of the selected user computing context 202. For example, the user may browse or otherwise view content of the selected user computing context 202. As another example, the user may transfer a previous version of a document to the current context of the computing device 102 in order to recover the contents of the previous version. The computing device 102 may use any technique to transfer the data. For example, the computing device 102 may allow the user to cut and paste data using a system clipboard. As another example, the computing device 102 may allow the user to transfer files using a network drive or other specialized volume. As still another example, the computing device 102 may perform optical character recognition on the visual representation (e.g., on recorded screencast video) to recover textual contents of the selected user computing context 202. Additionally, in some embodiments the user may share the contents of the selected user computing context 202 with another user, for example by sharing a virtual machine snapshot 206 to provide the other user with the same computing context 202.[0050] In some embodiments, in block 340 the computing device 102 may create a new timeline branch starting with the selected snapshot 206. For example, the computing device 102 may create new versions for one or more documents associated with the selected snapshot 206. Additionally or alternatively, the computing device 102 may prompt or otherwise query the user if the user attempts to edit a prior version of a document. After activating the selected snapshot 206, the method 300 loops back to block 302 to continue monitoring the user computing context 202. Additionally or alternatively, it should be understood that in some embodiments the computing device 102 may activate or otherwise display additional snapshots 206 prior to looping back to block 302 and creating a new snapshot 206. For example, the computing device 102 may activate multiple snapshots 206 while the user rewinds or scrubs through the timeline user interface. [0051] Referring now to FIG. 4, diagram 400 illustrates one potential embodiment of a timeline user interface that may be established by the computing device 102. In the illustrative embodiment, a window 402 contains the timeline user interface. The window 402 may be embodied as, for example, a native application window, a web browser window, an embedded web view, or any other user interface window. In some embodiments, the window 402 may occupy the user's entire desktop environment.[0052] The illustrative window 402 includes a keyframe timeline 404. The keyframe timeline 404 includes a series of thumbnails 406, 408, 412, 414, 418, 420, 424 that correspond to particular sync points and include visual representations of the associated snapshots 206. 
Each thumbnail also displays the associated time and/or date of the corresponding sync point. Each of the thumbnails in the keyframe timeline 404 corresponds to a sync point that may be important to the user, and may be selected using the salience value associated with each sync point. For example, each thumbnail may represent when the user has switched to a different application, edited a new document, or performed similar events. Additionally, the salience value of each sync point may be further shown by the relative sizes of the thumbnails. In the illustrative embodiment, the thumbnails 414, 418 are illustrated as being larger than the other thumbnails and thus have a higher associated salience value. Additionally, as shown, the thumbnail 418 is visually highlighted, indicating that the corresponding sync point is currently activated. The thumbnails 412, 418, 424 are decorated with icons 410, 416, 422, which indicate that in those sync points the same application (which may be represented by the icons 410, 416, 422) is focused and/or active. As shown, the keyframe timeline 404 further includes a date change indicator 426, which may visually indicate changes between dates.[0053] The window 402 also includes time navigation controls, including a time selector 428, a today navigation button 430, and a calendar button 432. The time selector 428 allows the user to directly specify particular times. Upon selecting a particular time, the computing device 102 may update the keyframe timeline 404 based upon the selected time. In some embodiments, the computing device 102 may directly activate a sync point corresponding to the selected time. Similarly, the today navigation button 430 and the calendar button 432 may be used to select particular sync points based on date.[0054] The window 402 includes a computing context view 434. The computing context view 434 may include an interactive or non-interactive view of a selected user computing context 202 of the computing device 102. For example, in the illustrative embodiment, the computing context view 434 includes three application windows 436, 438, 440. The computing context view 434 may display the user computing context 202 of the computing device 102 while the user is operating the computing device 102, for example by displaying the current contents of the user's desktop environment. The computing context view 434 may also display a user computing context 202 loaded from a snapshot 206, for example in response to a user selection of a sync point or in response to the user rewinding, scrubbing, or otherwise navigating through the timeline user interface. In some embodiments, the computing context view 434 may occupy the user's entire desktop environment and/or entire display 134. In those embodiments, the computing context view 434 may shrink, move, be obscured, or otherwise be modified in response to a user command to access the timeline user interface.[0055] The window 402 includes a filter control 442. The filter control 442 may allow the user to filter and otherwise select what sync points appear in the keyframe timeline 404. For example, the user may specify particular filter criteria and only matching sync points will be displayed in the keyframe timeline 404. Illustratively, the filter control 442 includes an eye tracking filter 444, an activity filter 446, a document name filter 448, a current application filter 450, and a productivity filter 452. Of course, additional or alternative filters may be available in other embodiments.
The eye tracking filter 444 may, for example, restrict retrieved context data to only items that the user has looked at for a minimum period of time. The productivity filter 452 may, for example, retrieve context data related to work documents in order to avoid review of extraneous information for a given search.[0056] The window 402 further includes a timeline scrubber 454. The timeline scrubber 454 may allow the user to browse through available sync points (with corresponding snapshots 206) by dragging, swiping, or otherwise selecting various points on the timeline scrubber 454. In response to selections on the timeline scrubber, the computing device 102 may update the keyframe timeline 404 and/or the computing context view 434. In some embodiments, the timeline scrubber 454 may provide visual representations (e.g., thumbnails) and allow access to all sync points in a particular time period. Unlike the keyframe timeline 404, the timeline scrubber 454 may allow access to all sync points without regard to the associated salience value of the sync point. For example, in the illustrative embodiment the timeline scrubber 454 allows the user to scrub through one entire day's worth of sync points (and corresponding snapshots 206).[0057] The window 402 includes a date/time display 456. The date/time display 456 displays the date and time of the currently selected sync point and changes dynamically in response to timeline navigation. For example, as shown, the date/time display 456 matches the date and time of the currently selected thumbnail 418 of the keyframe timeline 404. [0058] The window 402 includes media transport controls 458, which are illustratively embodied as a rewind button, a play/pause button, and a fast-forward button. The user may select the media transport controls to browse through available sync points. In response to a selection of a media transport control, the computing device 102 may activate a series of sync points (and their corresponding snapshots 206) and dynamically update the keyframe timeline 404, the computing context view 434, the timeline scrubber 454, and/or the date/time display 456. For example, in response to selection of the rewind button, the computing device 102 may display snapshots 206 in the computing context view 434 moving backward through time and update the other timeline controls 404, 454, 456 appropriately.[0059] Lastly, the window 402 also includes a search control 460. The search control 460 allows the user to enter one or more search terms which may be used to search for particular sync points and/or associated documents. The computing device 102 may search content data, keywords, metadata, tag data, or other data associated with the saved snapshots 206. After searching, the keyframe timeline 404 or other components of the timeline user interface may be limited or focused to matching sync points. As shown, the illustrative search control 460 includes a textual search field and a tags button. However, it should be understood that other search modalities may be used in some embodiments. For example, in some embodiments, the computing device 102 may allow speech input for search terms. As another example, in some embodiments, the computing device 102 may provide for visual search. In that example, the user may provide an image, and the computing device 102 searches for snapshots 206 that match or are similar to that image, for example by analyzing stored video data of the snapshots 206.[0060] Referring now to FIG.
5, schematic diagram 500 illustrates at least one potential embodiment of memory structures that may be established by the computing device 102. As shown, in the illustrative embodiment, the snapshots 206 are stored in a database that is partially stored in the persistent memory 128 and partially stored in the volatile memory 126. For example, the snapshot database 206 may store metadata associated with application and/or user events, as described above in connection with block 304 of FIG. 3. As shown, when a snapshot 206 is selected and activated, the contents of the snapshot database 206 are loaded into an application context of the user computing context 202, which is contained in the volatile memory 126. Snapshot data may be moved within the snapshot database 206 from the persistent memory 128 to the volatile memory 126 to allow low-latency access to snapshots 206. For example, in some embodiments the computing device 102 may monitor the user's progress through the timeline user interface and load snapshots 206 from the slower persistent memory 128 to the faster volatile memory 126 as the user approaches a given time period.[0061] Referring now to FIG. 6, a schematic diagram 600 illustrates at least one potential embodiment of memory structures that may be established by the computing device 102. As shown, the snapshots 206 are each stored in virtual machines 602 that are stored in the persistent memory 128. When a snapshot 206 is selected and activated, a hypervisor 604 and/or host operating system 606 may load the contents of the virtual machine 602 from the persistent memory 128 into the volatile memory 126 and activate the snapshot 206 as the current user computing context 202. For example, in the illustrative embodiment, the computing device 102 has loaded virtual machine 1 into the volatile memory 126, and the contents of snapshot 1 have been activated as the current user computing context 202. In some embodiments, the computing device 102 (e.g., by the hypervisor 604 and/or the host operating system 606) may dynamically allocate space in the volatile memory 126 for a given virtual machine 602 as the user approaches that sync point in the user timeline interface. Thus, virtual machines 602 may be allocated and de-allocated dynamically as the user moves through the timeline. 
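For illustration, one possible anticipatory-loading policy is sketched below in Python: snapshots whose sync points fall within a time window around the user's current timeline position are kept resident in volatile memory, and the rest are evicted. The window size and the load and evict callbacks (standing in for the hypervisor or host operating system operations) are assumptions of the sketch.

    def prefetch_window(sync_points, current_time, window_seconds, loaded, load, evict):
        """Keep snapshots near current_time resident; evict distant ones.

        sync_points: iterable of objects with a .time attribute (seconds);
        loaded: set of sync points currently resident in volatile memory.
        """
        wanted = {sp for sp in sync_points
                  if abs(sp.time - current_time) <= window_seconds}
        for sp in wanted - loaded:
            load(sp)    # e.g., load the snapshot's virtual machine into RAM
        for sp in loaded - wanted:
            evict(sp)   # de-allocate volatile memory for distant snapshots
        return wanted   # the caller treats this as the new resident set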
As described above, the snapshots 206 associated with the virtual machines 602 may be used standalone or in combination with file system or other event logging to recreate the full user computing context 202.
EXAMPLES
[0062] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0063] Example 1 includes a computing device for user computing context replay, the computing device comprising snapshot generator circuitry to create a plurality of snapshots, wherein each of the snapshots is indicative of a user computing context at a corresponding point in time; snapshot browser circuitry to (i) present a timeline user interface based on the plurality of snapshots, wherein the timeline user interface includes a plurality of elements, and wherein each element is associated with a corresponding sync point, wherein each sync point corresponds to a point in time and (ii) receive a user selection indicative of a first selected sync point in response to a presentation of the timeline user interface, wherein the first selected sync point corresponds to a first selected snapshot of the plurality of snapshots; and timeline coordinator circuitry to activate a first user computing context that corresponds to the first selected snapshot in response to a receipt of the user selection. [0064] Example 2 includes the subject matter of Example 1, and wherein the user computing context comprises application state data associated with one or more user applications.[0065] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to create the plurality of snapshots comprises to log metadata associated with user-interaction events of the user computing context; and to activate the user computing context comprises to replay the metadata of the first selected snapshot.[0066] Example 4 includes the subject matter of any of Examples 1-3, and wherein to create the plurality of snapshots comprises to receive metadata indicative of the user computing context from a remote computing device.[0067] Example 5 includes the subject matter of any of Examples 1-4, and wherein to create the plurality of snapshots comprises to create a virtual machine memory snapshot that corresponds to each of the plurality of snapshots; and to activate the user computing context comprises to load the virtual machine memory snapshot.[0068] Example 6 includes the subject matter of any of Examples 1-5, and wherein to create the plurality of snapshots comprises to capture video data indicative of the user computing context; and to activate the user computing context comprises to display the video data corresponding to the first selected snapshot.[0069] Example 7 includes the subject matter of any of Examples 1-6, and wherein to create the plurality of snapshots comprises to cache network resources associated with the user computing context.[0070] Example 8 includes the subject matter of any of Examples 1-7, and wherein to create the plurality of snapshots comprises to apply security restrictions to the user computing context.[0071] Example 9 includes the subject matter of any of Examples 1-8, and further including a persistent memory and a volatile memory, wherein the snapshot generator circuitry is further to store at least a part of the plurality of snapshots in the persistent memory.[0072] Example 10 includes the subject matter of any of Examples 1-9, and wherein to activate the first user
computing context that corresponds to the first selected snapshot comprises to load the first user computing context from the persistent memory into the volatile memory.[0073] Example 11 includes the subject matter of any of Examples 1-10, and wherein the timeline coordinator circuitry is further to select a second selected snapshot of the plurality of snapshots based on a relationship between the second selected snapshot and the first selected snapshot; and load a second user computing context that corresponds to the second selected snapshot from the persistent memory into the volatile memory in response to an activation of the first user computing context.[0074] Example 12 includes the subject matter of any of Examples 1-11, and wherein to activate the first user computing context comprises to transfer data from the first user computing context to a current user computing context.[0075] Example 13 includes the subject matter of any of Examples 1-12, and wherein to activate the first user computing context comprises to create a timeline branch that starts at the first user computing context.[0076] Example 14 includes the subject matter of any of Examples 1-13, and wherein each element of the timeline user interface comprises a visual representation of a corresponding snapshot of the plurality of snapshots.[0077] Example 15 includes the subject matter of any of Examples 1-14, and wherein the snapshot browser circuitry is further to analyze the plurality of snapshots to determine a salience value associated with the sync point corresponding to each of the plurality of snapshots; wherein to present the timeline user interface comprises to display a visual indication of the salience value associated with the sync point that corresponds to each of the plurality of elements of the timeline user interface.[0078] Example 16 includes the subject matter of any of Examples 1-15, and wherein to analyze the plurality of snapshots to determine the salience value associated with the sync point that corresponds to each of the plurality of snapshots comprises to analyze the visual representation of the corresponding snapshot to determine visual distinctiveness of the visual representation.[0079] Example 17 includes the subject matter of any of Examples 1-16, and wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a characteristic of a document of the corresponding snapshot.[0080] Example 18 includes the subject matter of any of Examples 1-17, and wherein the characteristic of the document comprises an associated application, visual distinctiveness of elements within the document, or a related topic to the document.[0081] Example 19 includes the subject matter of any of Examples 1-18, and wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a usage factor of the corresponding snapshot. 
[0082] Example 20 includes the subject matter of any of Examples 1-19, and wherein the usage factor comprises an elapsed time editing, an elapsed time with window focus, or a document sharing attribute.[0083] Example 21 includes the subject matter of any of Examples 1-20, and wherein the snapshot browser circuitry is further to receive a visual search term in response to the presentation of the timeline user interface; and perform a visual search of the plurality of snapshots based on the visual search term.[0084] Example 22 includes the subject matter of any of Examples 1-21, and wherein to receive the user selection indicative of the first selected sync point comprises to receive a transport control command.[0085] Example 23 includes the subject matter of any of Examples 1-22, and wherein to receive the user selection indicative of the first selected sync point comprises to receive a search command.[0086] Example 24 includes a method for user computing context replay, the method comprising creating, by a computing device, a plurality of snapshots, wherein each of the snapshots is indicative of a user computing context at a corresponding point in time; presenting, by the computing device, a timeline user interface based on the plurality of snapshots, wherein the timeline user interface includes a plurality of elements, and wherein each element is associated with a corresponding sync point, wherein each sync point corresponds to a point in time; receiving, by the computing device, a user selection indicative of a first selected sync point in response to presenting the timeline user interface, wherein the first selected sync point corresponds to a first selected snapshot of the plurality of snapshots; and activating, by the computing device, a first user computing context corresponding to the first selected snapshot in response to receiving the user selection.[0087] Example 25 includes the subject matter of Example 24, and wherein the user computing context comprises application state data associated with one or more user applications.[0088] Example 26 includes the subject matter of any of Examples 24 and 25, and wherein creating the plurality of snapshots comprises logging metadata associated with user-interaction events of the user computing context; and activating the user computing context comprises replaying the metadata of the first selected snapshot.[0089] Example 27 includes the subject matter of any of Examples 24-26, and wherein creating the plurality of snapshots comprises receiving metadata indicative of the user computing context from a remote computing device.
[0090] Example 28 includes the subject matter of any of Examples 24-27, and wherein creating the plurality of snapshots comprises creating a virtual machine memory snapshot corresponding to each of the plurality of snapshots.[0091] Example 29 includes the subject matter of any of Examples 24-28, and wherein creating the plurality of snapshots comprises capturing video data indicative of the user computing context; and activating the user computing context comprises displaying the video data corresponding to the first selected snapshot.[0092] Example 30 includes the subject matter of any of Examples 24-29, and wherein creating the plurality of snapshots comprises caching network resources associated with the user computing context.[0093] Example 31 includes the subject matter of any of Examples 24-30, and wherein creating the plurality of snapshots comprises applying security restrictions to the user computing context.[0094] Example 32 includes the subject matter of any of Examples 24-31, and further including storing, by the computing device, at least a part of the plurality of snapshots in a persistent memory of the computing device.[0095] Example 33 includes the subject matter of any of Examples 24-32, and wherein activating the first user computing context corresponding to the first selected snapshot comprises loading the first user computing context from the persistent memory into a volatile memory of the computing device.[0096] Example 34 includes the subject matter of any of Examples 24-33, and further including selecting, by the computing device, a second selected snapshot of the plurality of snapshots based on a relationship between the second selected snapshot and the first selected snapshot; and loading, by the computing device, a second user computing context corresponding to the second selected snapshot from the persistent memory into the volatile memory in response to activating the first user computing context.[0097] Example 35 includes the subject matter of any of Examples 24-34, and wherein activating the first user computing context comprises transferring data from the first user computing context to a current user computing context.[0098] Example 36 includes the subject matter of any of Examples 24-35, and wherein activating the first user computing context comprises creating a timeline branch starting at the first user computing context. 
[0099] Example 37 includes the subject matter of any of Examples 24-36, and wherein each element of the timeline user interface comprises a visual representation of a corresponding snapshot of the plurality of snapshots.[00100] Example 38 includes the subject matter of any of Examples 24-37, and further including analyzing, by the computing device, the plurality of snapshots to determine a salience value associated with the sync point corresponding to each of the plurality of snapshots; wherein presenting the timeline user interface comprises displaying a visual indication of the salience value associated with the sync point corresponding to each of the plurality of elements of the timeline user interface.[00101] Example 39 includes the subject matter of any of Examples 24-38, and wherein analyzing the plurality of snapshots to determine the salience value associated with the sync point corresponding to each of the plurality of snapshots comprises analyzing the visual representation of the corresponding snapshot to determine visual distinctiveness of the visual representation.[0100] Example 40 includes the subject matter of any of Examples 24-39, and wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a characteristic of a document of the corresponding snapshot.[0101] Example 41 includes the subject matter of any of Examples 24-40, and wherein the characteristic of the document comprises an associated application, visual distinctiveness of elements within the document, or a related topic to the document.[0102] Example 42 includes the subject matter of any of Examples 24-41, and wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a usage factor of the corresponding snapshot.[0103] Example 43 includes the subject matter of any of Examples 24-42, and wherein the usage factor comprises an elapsed time editing, an elapsed time with window focus, or a document sharing attribute.[0104] Example 44 includes the subject matter of any of Examples 24-43, and further including receiving, by the computing device, a visual search term in response to presenting the timeline user interface; and performing, by the computing device, a visual search of the plurality of snapshots based on the visual search term.[0105] Example 45 includes the subject matter of any of Examples 24-44, and wherein receiving the user selection indicative of the first selected sync point comprises receiving a transport control command. 
[0106] Example 46 includes the subject matter of any of Examples 24-45, and wherein receiving the user selection indicative of the first selected sync point comprises receiving a search command.[0107] Example 47 includes a computing device comprising a processor; and a memory having stored therein a plurality of instructions that when executed by the processor cause the computing device to perform the method of any of Examples 24-46.[0108] Example 48 includes one or more machine readable storage media comprising a plurality of instructions stored thereon that in response to being executed result in a computing device performing the method of any of Examples 24-46.[0109] Example 49 includes a computing device comprising means for performing the method of any of Examples 24-46.[0110] Example 50 includes a computing device for user computing context replay, the computing device comprising means for creating a plurality of snapshots, wherein each of the snapshots is indicative of a user computing context at a corresponding point in time; means for presenting a timeline user interface based on the plurality of snapshots, wherein the timeline user interface includes a plurality of elements, and wherein each element is associated with a corresponding sync point, wherein each sync point corresponds to a point in time; means for receiving a user selection indicative of a first selected sync point in response to presenting the timeline user interface, wherein the first selected sync point corresponds to a first selected snapshot of the plurality of snapshots; and means for activating a first user computing context corresponding to the first selected snapshot in response to receiving the user selection.[0111] Example 51 includes the subject matter of Example 50, and wherein the user computing context comprises application state data associated with one or more user applications.[0112] Example 52 includes the subject matter of any of Examples 50 and 51, and wherein the means for creating the plurality of snapshots comprises means for logging metadata associated with user-interaction events of the user computing context; and the means for activating the user computing context comprises means for replaying the metadata of the first selected snapshot.[0113] Example 53 includes the subject matter of any of Examples 50-52, and wherein the means for creating the plurality of snapshots comprises means for receiving metadata indicative of the user computing context from a remote computing device. 
[0114] Example 54 includes the subject matter of any of Examples 50-53, and wherein the means for creating the plurality of snapshots comprises means for creating a virtual machine memory snapshot corresponding to each of the plurality of snapshots.[0115] Example 55 includes the subject matter of any of Examples 50-54, and wherein the means for creating the plurality of snapshots comprises means for capturing video data indicative of the user computing context; and the means for activating the user computing context comprises means for displaying the video data corresponding to the first selected snapshot.[0116] Example 56 includes the subject matter of any of Examples 50-55, and wherein the means for creating the plurality of snapshots comprises means for caching network resources associated with the user computing context.[0117] Example 57 includes the subject matter of any of Examples 50-56, and wherein the means for creating the plurality of snapshots comprises means for applying security restrictions to the user computing context.[0118] Example 58 includes the subject matter of any of Examples 50-57, and further including means for storing at least a part of the plurality of snapshots in a persistent memory of the computing device.[0119] Example 59 includes the subject matter of any of Examples 50-58, and wherein the means for activating the first user computing context corresponding to the first selected snapshot comprises means for loading the first user computing context from the persistent memory into a volatile memory of the computing device.[0120] Example 60 includes the subject matter of any of Examples 50-59, and further including means for selecting a second selected snapshot of the plurality of snapshots based on a relationship between the second selected snapshot and the first selected snapshot; and means for loading a second user computing context corresponding to the second selected snapshot from the persistent memory into the volatile memory in response to activating the first user computing context.[0121] Example 61 includes the subject matter of any of Examples 50-60, and wherein the means for activating the first user computing context comprises means for transferring data from the first user computing context to a current user computing context.[0122] Example 62 includes the subject matter of any of Examples 50-61, and wherein the means for activating the first user computing context comprises means for creating a timeline branch starting at the first user computing context. 
[0123] Example 63 includes the subject matter of any of Examples 50-62, and wherein each element of the timeline user interface comprises a visual representation of a corresponding snapshot of the plurality of snapshots.[0124] Example 64 includes the subject matter of any of Examples 50-63, and further including means for analyzing the plurality of snapshots to determine a salience value associated with the sync point corresponding to each of the plurality of snapshots; wherein the means for presenting the timeline user interface comprises means for displaying a visual indication of the salience value associated with the sync point corresponding to each of the plurality of elements of the timeline user interface.[0125] Example 65 includes the subject matter of any of Examples 50-64, and wherein the means for analyzing the plurality of snapshots to determine the salience value associated with the sync point corresponding to each of the plurality of snapshots comprises means for analyzing the visual representation of the corresponding snapshot to determine visual distinctiveness of the visual representation.[0126] Example 66 includes the subject matter of any of Examples 50-65, and wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a characteristic of a document of the corresponding snapshot.[0127] Example 67 includes the subject matter of any of Examples 50-66, and wherein the characteristic of the document comprises an associated application, visual distinctiveness of elements within the document, or a related topic to the document.[0128] Example 68 includes the subject matter of any of Examples 50-67, and wherein the visual representation of the corresponding snapshot of the plurality of snapshots comprises a visual indication of a usage factor of the corresponding snapshot.[0129] Example 69 includes the subject matter of any of Examples 50-68, and wherein the usage factor comprises an elapsed time editing, an elapsed time with window focus, or a document sharing attribute.[0130] Example 70 includes the subject matter of any of Examples 50-69, and further including means for receiving a visual search term in response to presenting the timeline user interface; and means for performing a visual search of the plurality of snapshots based on the visual search term.[0131] Example 71 includes the subject matter of any of Examples 50-70, and wherein the means for receiving the user selection indicative of the first selected sync point comprises means for receiving a transport control command. [0132] Example 72 includes the subject matter of any of Examples 50-71, and wherein the means for receiving the user selection indicative of the first selected sync point comprises means for receiving a search command.
A processor including a processing core to execute an instruction prior to executing a memory allocation call; one or more last branch record (LBR) registers to store one or more recently retired branch instructions; a performance monitoring unit (PMU) comprising a logic circuit to: retrieve the one or more recently retired branch instructions from the one or more LBR registers; identify, based on the retired branch instructions, a signature of the memory allocation call; provide the signature to software to determine a memory tier to allocate memory for the memory allocation call.
1. A processor comprising:
a processing core to execute a memory allocation call to allocate memory in a memory device;
a last branch record (LBR) register to store information indicative of a recently retired branch instruction; and
a performance monitoring unit (PMU) coupled to the LBR register, the PMU comprising a logic circuit to:
retrieve the information from the LBR register prior to a memory allocation call being received by the processing core;
identify, based on the information, a signature of the memory allocation call; and
provide the signature to the processing core.
2. The processor of claim 1, wherein the processing core is to execute software, wherein the software is to determine a memory tier to allocate memory for the memory allocation call using the signature prior to execution of the memory allocation call, and wherein the signature identifies an allocation path of instructions executed prior to execution of the memory allocation call.
3. The processor of any of claims 1-2, wherein the logic circuit comprises a hash circuit to directly hash the recently retired branch instruction to the signature.
4. The processor of any of claims 2-3, wherein the signature is a scalar value that is associated with an allocation path, and wherein the signature is associated with one or more memory buffers in memory previously allocated through the allocation path.
5. The processor of any of claims 2-4, wherein in response to executing the memory allocation call, the processing core is to allocate a memory buffer in the memory tier, and wherein the PMU is to:
collect an address and a size of the memory buffer from the memory allocation call; and
collect data associated with accesses of the memory buffer.
6. A system comprising:
a memory; and
a processing device, operatively coupled to the memory, the processing device to:
retrieve last branch records (LBRs) from LBR registers prior to executing a memory allocation call;
identify, based on the LBRs, an execution context of the memory allocation call; and
determine a memory tier to allocate memory for the memory allocation call based on the execution context.
7. The system of claim 6, wherein the LBRs retrieved from the LBR registers comprise an LBR vector, representing information regarding retired branch instructions, stored in the LBR registers, and wherein the processing device, to identify the execution context of the memory allocation call, is to apply a hash function to the LBR vector to identify a signature associated with the execution context.
8. The system of claim 7, wherein the processing device is further to store a virtual address pointer generated for the memory allocation call with the signature in an associative data structure.
9. The system of claim 7, wherein the signature is a scalar value that is associated with a memory buffer, wherein the memory buffer is an allocation of the memory that was assigned to the signature when a previous memory allocation call was executed.
10. The system of any of claims 6-9, wherein the execution context comprises an allocation path, the allocation path being a series of retired branch instructions leading up to the memory allocation call.
11. The system of any of claims 6-10, the processing device further to:
allocate a memory buffer in response to executing the memory allocation call;
collect an address of the memory buffer and a size of the memory buffer; and
collect data associated with accesses of memory at the address of the memory buffer.
12. A method comprising:
retrieving a last branch record (LBR) vector from a stack of LBR registers prior to a memory allocation call;
determining a unique signature using the LBR vector, the signature representing an allocation path of the memory allocation call;
selecting a tier of memory from a plurality of tiers of memory based on the signature; and
assigning a memory buffer for the memory allocation call in the tier of memory.
13. The method of claim 12, further comprising:
collecting a size of the memory buffer and a virtual address of the memory buffer;
monitoring accesses of memory at the virtual address of the memory buffer; and
updating information associated with the signature based on the accesses of memory at the virtual address of the memory buffer.
14. The method of any of claims 12-13, wherein selecting the tier of memory comprises:
identifying access information associated with the signature; and
determining, based on the access information, a cost of the memory buffer; and
wherein the method further comprises aggregating memory access information for a plurality of memory buffers associated with the signature.
15. The method of any of claims 12-14, wherein selecting the tier of memory comprises:
determining access rates of memory buffers associated with the signature; and
determining a total amount of memory allocated to memory buffers associated with the signature; and
wherein the method further comprises:
determining an access density associated with the signature based on the access rates and the total amount of memory allocated to memory buffers associated with the signature; and
categorizing the signature into one of a plurality of categories based on the access density associated with the signature.
Technical Field
This disclosure generally relates to computer technology; in particular, the disclosure relates to memory allocation in tiered memory systems.
Background
Heterogeneous memory is memory that includes multiple tiers which are composed of different types of storage hardware. Memory allocation in multi-tiered heterogeneous memory can be controlled by hardware under directly-mapped association between lower and upper tiers. Memory allocation can also be controlled by software to assign data to an appropriate tier of memory.
Brief Description of the Drawings
Figure 1 is a system block diagram of a computing device for dynamic allocation of memory based on an execution context of a memory allocation request.
Figure 2A is a block diagram illustrating example allocation pathways leading to a memory allocation call according to an implementation.
Figure 2B is a block diagram illustrating a number of different example allocation pathways associated with a signature.
Figure 2C is a table illustrating properties associated with each signature for an allocation pathway.
Figure 3 is a block diagram of example memory allocations to different tiers of memory based on properties of the allocation paths.
Figure 4 is a flow diagram of an example method for dynamic memory tier selection using a signature assigned to an execution context.
Figure 5 is a flow diagram of an example method for dynamic memory allocation to a memory tier using last branch records to determine an allocation pathway.
Figure 6 is a flow diagram of an example method for monitoring memory allocations to collect information relevant for dynamic memory tier selection.
Figure 7A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline according to one implementation.
Figure 7B is a block diagram illustrating a micro-architecture for a processor or an integrated circuit that may implement hardware support for a multi-key cryptographic engine, according to an implementation of the disclosure.
Figure 8 illustrates a block diagram of the micro-architecture for a processor or an integrated circuit that implements hardware support for a multi-key cryptographic engine, according to an implementation of the disclosure.
Figure 9 is a block diagram of a computer system according to one implementation.
Figure 10 is a block diagram of a computer system according to another implementation.
Figure 11 is a block diagram of a system-on-a-chip according to one implementation.
Figure 12 illustrates another implementation of a block diagram for a computing system.
Figure 13 illustrates another implementation of a block diagram for a computing system.
Detailed Description
Determining which tier of a heterogeneous memory system to allocate memory in may be difficult. Computer applications are generally coded for modularity so that parts of code can easily be reused. Because of code modularity, the logical purpose and/or context of a memory allocation may be obscured. For example, the logical purpose may be located in one part of an application while the section of code that makes a call to an allocator is located in a different part of the application. Profiling an application's execution using traditional methods may be insufficient to tie the frequency with which a memory buffer (i.e., an instance of allocated memory) is accessed to a particular instance of a call to allocate that buffer.
Traditional profiling is unable to accurately predict future use of a buffer and is unable to select the most efficient allocation of memory upon a memory allocation call. Generally, a higher tier in a memory system comprises memory types with lower access latency but less capacity, while lower memory tiers comprise memory types with higher access latency but higher capacity. Therefore, the manner in which memory is allocated to memory tiers can substantially affect performance of the memory system.
Static allocation mechanisms may be used to select a memory tier for a memory allocation. Static methods may select which memory tier to assign a memory allocation to based on specific memory allocation calls within code. However, these methods do not allow for consideration of the logical purpose or context of the memory allocation. A specific memory allocation call may be used for many different purposes in the code of an application. For example, a memory allocation may be defined within a function that is called throughout an application in a number of different contexts. The different purposes for which the memory allocation is called may provide for different uses of the allocated memory buffer. One allocation context may use the buffer often, yet in a different context the buffer may be used very little. Software memory tier selection methods may provide for more flexible memory allocation. However, there is a large overhead associated with application profiling and data collection, which can significantly impact application performance. Thus, it may be necessary to identify and use the context in which a memory allocation call is made to provide for efficient use of memory tiers.
Embodiments described herein may address the above deficiencies. An instruction may be executed by a processor before a memory allocation call is executed to retrieve a number of previously retired instructions leading up to the memory allocation (referred to herein as an "allocation path") and associate that allocation path with an identifier. A performance monitoring unit (PMU) may collect information about a memory buffer allocated by the memory allocation call. A memory allocation call is referred to herein as a malloc call. However, any other type of memory allocation call or library may be used. The PMU may continuously collect information about the memory buffers. The information may indicate how the memory buffer associated with the allocation path is used (e.g., how often the buffer is accessed). Software may then use the identifier of the allocation path to select an optimal memory tier for subsequent memory allocations based on the past use of the memory buffer associated with the identifier.
In one embodiment, the PMU may associate the allocation path with an identifier, referred to herein as a "signature," to identify the allocation path. The signature may be a scalar value with which information collected about an allocation path may be associated and identified. The same signature may be associated with every allocated buffer that has the same, or similar, allocation path. The collection of buffers with the same signature may be referred to as a "signature domain." Therefore, information about memory buffers of a signature domain may be collected and aggregated. For example, an access density for the signature may be calculated by dividing the total number of accesses to buffers associated with the signature by the total amount of memory allocated to buffers associated with the signature.
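In code form, that access-density calculation might look like the following Python sketch; the per-buffer (access count, size) records are an assumed bookkeeping format rather than a structure defined by the disclosure.

    def access_density(buffers):
        """buffers: list of (access_count, size_in_bytes) pairs for one signature domain."""
        total_accesses = sum(count for count, _ in buffers)
        total_bytes = sum(size for _, size in buffers)
        return total_accesses / total_bytes if total_bytes else 0.0

    # Example: three buffers allocated through the same allocation path.
    domain = [(120, 4096), (30, 4096), (50, 8192)]
    density = access_density(domain)  # 200 accesses over 16384 bytes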
Accordingly, a memory tier may be selected for a memory allocation based at least in part on the access density of the signature (e.g., the higher the access density, the higher the tier that is selected). The memory tier selection may additionally be based at least in part on the size of buffers associated with the signature and the storage capacity of each of the memory tiers. Information about memory buffers may be continuously collected to provide for dynamic selection of memory tiers for memory allocation based on runtime behavior.
Therefore, embodiments described herein may automate and simplify memory tier selection in heterogeneous memory systems, reducing the burden of software maintenance of memory allocation. Optimal memory tiers may be selected with lower overhead, and system performance may be increased across a large variety of hardware configurations. Additionally, embodiments may support the use of large-capacity memory tiers, high-bandwidth memory tiers, and other variations of memory tier usage.
Figure 1 is a block diagram illustrating a system 100 comprising a processor 110 and a memory 150, according to one implementation. Processor 110 may include one or more processing cores 120. The processing core 120 may include a performance monitoring unit (PMU) 130 and a plurality of last branch record (LBR) registers 140. PMU 130 may further include a translation circuit 132 to translate LBRs collected in the LBR registers 140 into an identifier of an execution context. Memory 150 may include multiple tiers of memory 152, 154, and 156, each tier including a different type of memory.
LBR registers 140 may store LBRs including information about recently executed branch instructions. For example, the information may include the source address and the destination address of each branch instruction along with additional metadata. The LBRs stored in the LBR registers 140 may represent a control flow of an executing program. An LBR snapshot instruction may retrieve the LBRs from the LBR registers 140 and reconstruct an execution pathway leading up to the current instruction (i.e., the malloc call). In one example, the LBR snapshot instruction may retrieve the LBRs when a malloc call is about to be executed. The LBR snapshot instruction may be added to code by a compiler front-end, or through a wrapper around malloc. The LBR snapshot instruction may use PMU 130 hardware, as described below, to identify a signature associated with an allocation path using the LBRs retrieved from the LBR registers 140.
PMU 130 may be part of an execution unit and may monitor a number of performance characteristics of the processing core 120, including memory access rates, memory allocations, time accounting, etc. The PMU 130 may include a translation circuit 132 to receive and translate the LBR information into a signature identifying an allocation path of a malloc call in response to execution of the LBR snapshot instruction described above. The translation circuit 132 may hash the LBR vector into a 32-bit, 64-bit, or other scalar value. The translation circuit may be a hash circuit or any other associative hardware structure. In one example, the translation circuit may hash a subset of the LBRs in the LBR registers 140 at the time the LBR snapshot instruction is executed. The subset may be, for example, all LBRs associated with function calls and returns, or all LBRs that include non-function branches, etc.
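Functionally, the translation circuit reduces the (optionally filtered) LBR vector to a single scalar. The Python sketch below models that behavior in software; because the disclosure leaves the exact hash open, the choice of BLAKE2b, the 64-bit truncation, and the filtering predicate are all assumptions.

    import hashlib

    def lbr_signature(lbr_vector, keep=lambda rec: rec.get("is_call_or_ret", True)):
        """Hash a subset of LBR (source, destination) pairs to a 64-bit signature."""
        h = hashlib.blake2b(digest_size=8)  # 8 bytes -> 64-bit scalar value
        for rec in lbr_vector:
            if keep(rec):  # e.g., keep only function calls and returns
                h.update(rec["src"].to_bytes(8, "little"))
                h.update(rec["dst"].to_bytes(8, "little"))
        return int.from_bytes(h.digest(), "little")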
The translation circuit 132 may store the identified signature in a hardware register, a cache, a scratchpad area, or the like, to be used by software. Software may then be used to associate a linear address returned by the malloc call and a size of the memory buffer to be allocated by the malloc call with the identified signature. Thus, the signature may be used by software to identify and monitor buffers allocated from a specific allocation path.
Many different memory buffers may be allocated from a single allocation path. Each memory buffer allocated from the allocation path may be associated with a single signature corresponding to the allocation path. The memory buffers associated with a signature may be referred to as a signature domain. For each signature domain, the PMU 130 may be used to collect information about the buffers of the signature domain. Various types of information may be collected for the buffers. In one example, the PMU 130 may collect memory access data for each buffer. The memory access data may be used to determine memory access frequency and in turn memory access density (i.e., the number of accesses divided by total memory allocated) of the signature domain. The lifespan of a buffer may also be collected upon freeing the memory buffer from memory. When a memory buffer is freed, the signature may also be dissociated from that area of memory. Collection of memory access data is described in more detail below with respect to Figures 5 and 6.
After the signature for the malloc call is determined by the translation circuit 132 of the PMU 130, software may use the identified signature to select a memory tier to allocate a memory buffer from the malloc call. As described above, a variety of information may be collected for each signature domain. The collected information may be used by software to select the memory tier. For example, a memory access density of the signature domain for the signature identified for a malloc call may be used to select an appropriate memory tier. In one example, the larger the access density, the higher the tier selected. In another example, the size, or the time-space product (i.e., the amount of memory allocated multiplied by the time the memory is allocated), associated with the signature may also be used, at least in part, to select a memory tier. For example, the larger the size and/or the time-space product, the lower the memory tier selected. The selection may also account for the memory capacity of each memory tier (both total capacity and available capacity at the time of the malloc call). Memory tier selection is described in further detail below with respect to Figure 3.
Memory 150 may include one or more memory tiers 152, 154, and 156. Each memory tier may include a different type of memory storage hardware, such as random access memory (RAM), dynamic RAM (DRAM), non-volatile RAM, solid-state drives (SSDs), hard-disk drives, etc. For example, tier 152 may include DRAM, tier 154 may include a solid-state drive, and tier 156 may include a hard-disk drive. Each tier may also include a combination of different types of storage hardware. Memory tiers may be determined based on total access latency, read access latency, and/or write access latency. In one example, memory 150 may be tiered based on storage device proximity to the CPU.
For example, in a NUMA system, a node that is more distant may be designated as a lower tier of memory because latency may be higher.
Figure 2A depicts a number of possible allocation paths that may lead to a memory allocation, such as a malloc. For each allocation path depicted, the LBR snapshot instruction may be executed between the execution of E (instruction 230) and malloc (instruction 240) to hash each of the different allocation paths to a unique signature. One example path may begin with execution of F (instruction 210), then C (instruction 215), and then E (instruction 230) followed by the malloc call (instruction 240). Another path may begin with the execution of F (instruction 210), then D (instruction 225), and then E (instruction 230) followed by the malloc call (instruction 240). A third path begins with the execution of G (instruction 220), then C (instruction 215), and then E (instruction 230), followed by the malloc call (instruction 240). Finally, a fourth allocation path depicted may begin with execution of G (instruction 220), then D (instruction 225), and then E (instruction 230), followed by the malloc call (instruction 240). Thus, in the depicted example a single malloc call may be preceded by four different allocation paths. However, it should be noted that the disclosure is not limited to the allocation paths depicted, or the number of instructions depicted in each path. Any number of allocation paths may be taken to a single malloc call and any number of previously executed instructions may be used to identify a signature for an allocation path.
Figure 2B illustrates example associations of allocation paths with signatures. Each allocation path may be a unique sequence of instructions executed before a malloc call. The unique sequence of instructions may therefore be associated with a unique signature that may be used to identify the allocation path. In one example, as depicted, allocation path 250 may comprise the execution sequence F, D, E followed by malloc. The allocation path 250 may be associated with signature 255, denoted by the value "Z94." Allocation path 260 comprising the execution sequence F, C, E followed by malloc may be associated with signature 265, denoted by the value "Q35." Allocation path 270 comprising the execution sequence G, D, E followed by malloc may be associated with signature 275, denoted by the value "K51." Finally, allocation path 280 comprising the execution sequence G, C, E followed by malloc may be associated with signature 285, denoted by the value "W82." A hash function may be applied to an allocation path to determine the signature to be assigned to the allocation path. The hash function may be a low-overhead hash that hashes the LBR vector (allocation path) directly to a scalar value (e.g., a 32-bit or 64-bit signature).
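To make the path-to-signature association concrete, the following Python sketch tags each allocated buffer with the signature of its allocation path and aggregates the raw counts that would back per-signature attributes such as those discussed next for Figure 2C; the wrapper and record layout are illustrative assumptions.

    from collections import defaultdict

    signature_stats = defaultdict(lambda: {"accesses": 0, "bytes": 0, "buffers": 0})
    buffer_to_signature = {}  # virtual address -> signature

    def tagged_malloc(size, signature, raw_malloc):
        addr = raw_malloc(size)            # stand-in for the real allocator
        buffer_to_signature[addr] = signature
        stats = signature_stats[signature]
        stats["bytes"] += size             # feeds the footprint attribute
        stats["buffers"] += 1
        return addr

    def record_access(addr):
        sig = buffer_to_signature.get(addr)
        if sig is not None:
            signature_stats[sig]["accesses"] += 1  # feeds the temperature attribute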
Figure 3 depicts example placement of mallocs for several different signatures in memory tiers based on a "heat factor" (access density) and a "relative size" (size or time-space product). Access density may be the total number of accesses of a signature domain divided by the total memory allocated in the signature domain. The relative size associated with each signature may be an average size of all buffers in the signature domain. The size of a buffer may refer to the amount of memory occupied by the buffer, or may refer to a time-space product (i.e., the amount of memory occupied multiplied by the lifespan of the buffer). Each signature may be categorized by its associated access density. For example, as depicted in Figure 3, the categories may be represented as a heat factor. The heat factor may include three possible categories: HOT, WARM, and COLD. The HOT category may represent the highest range of access densities, WARM may represent a middle range of access densities, and COLD may represent the lowest range of access densities. More categories or fewer categories may be used. The classifications may be updated periodically to ensure proper selection of memory tiers for mallocs. The memory tiers 152, 154, and 156 may be the same as, or similar to, memory tiers 152, 154, and 156, respectively, as described with respect to Figure 1.

In one example, signatures P, Q, R, S, T, and U may each represent a unique allocation path. Each signature may be placed in a memory tier according to its heat factor and its relative size; a sketch of such a placement policy follows this discussion. In general, hotter signatures (i.e., signatures with higher access densities) and signatures with a smaller size will be placed in higher tiers than colder and larger signatures. Signatures P and Q are both HOT and therefore will be placed in higher tiers than WARM or COLD signatures. However, because memory tier 152 has limited capacity, signature P will be placed in memory tier 152, since it is 4x smaller than signature Q. Signature Q will then be placed in memory tier 154. Similarly, signatures R and S are both WARM and thus will be placed in a higher memory tier than COLD signatures. However, signature S is much larger in size than signature R, and therefore signature S will be placed in memory tier 156 and signature R will be placed in memory tier 154. Signatures T and U are both COLD. Signatures that are COLD may be automatically placed in the lowest memory tier. Therefore, signatures T and U may be automatically placed in memory tier 156. As shown in Figure 3, placement of each signature may depend on the total capacity of each of the memory tiers as well as the remaining capacity of each tier at any given time. The policy used for memory tier assignment may be dynamic and may be set or updated during program runtime.
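A placement policy like the one Figure 3 illustrates could be sketched as follows. The thresholds, the capacity check, and the mapping of tier indices 0-2 onto tiers 152, 154, and 156 are all assumptions of this sketch rather than a definitive implementation.

```c
#include <stdint.h>

enum heat_factor { HOT, WARM, COLD };

/* Pick a tier for a signature: hotter and smaller domains prefer higher
 * (faster) tiers; a domain spills downward when a tier lacks remaining
 * capacity; COLD domains go straight to the lowest tier. */
static int select_tier(enum heat_factor heat, uint64_t relative_size,
                       const uint64_t free_bytes[3])
{
    int preferred = (heat == HOT) ? 0 : (heat == WARM) ? 1 : 2;
    for (int t = preferred; t < 2; t++)
        if (relative_size <= free_bytes[t])
            return t;                /* first tier at or below preferred with room */
    return 2;                        /* lowest tier as the fallback */
}
```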
Figure 4 depicts a flow diagram illustrating an example method 400 of dynamic tier allocation based on the execution context of a malloc call. At block 402, an LBR vector may be retrieved from a plurality of LBR registers prior to a memory allocation call. An LBRSNAP instruction may be executed immediately before the memory allocation call is executed. The LBR vector may include one or more LBRs which represent an execution context, or allocation path, leading up to the execution of the memory allocation call. The LBRs of the LBR vector may comprise information regarding one or more recently executed branch instructions. For example, the LBRs may include the last thirty-two branch instructions executed and retired by the processor core before the memory allocation call.

At block 404, a signature may be determined from the LBR vector. The signature may represent an allocation path of the memory allocation call. The LBRs retrieved at block 402 may be hashed to a unique scalar value referred to as a signature. The signature may be determined for every malloc call executed by the system. The signature may remain associated with each allocated memory buffer during its lifespan so that a memory access profile may be generated for the signature. Additional information from previous memory allocations with the same allocation path may be collected and associated with the signature to generate the memory access profile.

At block 406, a memory tier may be selected for the memory allocation call based on the signature that was determined from the LBR vector at block 404. A signature domain of the signature may include each buffer associated with the signature. Access statistics of the buffers of a signature domain may be used to determine the memory tier to be selected. For example, an average access density of the buffers in the signature domain may be used to select the memory tier. The cost of a memory allocation, such as its size and lifespan, may also be used in the selection. Other considerations in selecting the memory tier may include determining access densities for writes and for reads to buffers of a signature domain separately. The access densities for writes and reads may be weighted differently in determining a memory tier to select. For example, a buffer with a higher read access density may be placed in a higher memory tier than a buffer with a higher write access density. An application may also indicate which memory allocations to prioritize in higher tiers of memory.

Figure 5 depicts a flow diagram illustrating an example method 500 of dynamic memory tier allocation and collection of memory buffer size, frequency of access, and lifespan. At block 502, an execution unit may receive a malloc call. A malloc call may be a request to allocate a memory buffer in memory of the system. The memory of the system may be heterogeneous and therefore consist of multiple tiers of memory. At block 504, an instruction may be executed prior to the malloc call to retrieve a snapshot of the LBRs collected in the LBR registers (i.e., the LBR vector) of a processing core. The number of LBRs retrieved at one time may be all of the LBRs stored in the LBR registers (e.g., all 32 LBRs in some systems). Each LBR may include information pertaining to a retired branch instruction and the execution thereof.
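Blocks 402-406 (and the corresponding blocks 502-506 of method 500, described next) could be wrapped around an allocation roughly as in the following sketch. Here `lbrsnap`, `select_tier_for_signature`, and `tier_alloc` are hypothetical hooks standing in for the LBR snapshot instruction and the software policy; they are not real APIs.

```c
#include <stddef.h>
#include <stdint.h>

struct lbr_entry { uint64_t from; uint64_t to; };

/* Hypothetical hooks; names and signatures are assumptions. */
extern size_t   lbrsnap(struct lbr_entry *out, size_t max);      /* block 402 */
extern uint64_t lbr_signature(const struct lbr_entry *, size_t); /* block 404 */
extern int      select_tier_for_signature(uint64_t sig);         /* block 406 */
extern void    *tier_alloc(int tier, size_t size);

void *profiled_malloc(size_t size)
{
    struct lbr_entry lbr[32];                    /* e.g., all 32 LBRs       */
    size_t n = lbrsnap(lbr, 32);                 /* snapshot the LBR vector */
    uint64_t sig = lbr_signature(lbr, n);        /* hash path to signature  */
    int tier = select_tier_for_signature(sig);   /* policy over profile data */
    return tier_alloc(tier, size);               /* allocate in chosen tier */
}
```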
At block 506, the PMU may determine a signature of the LBR vector for the malloc call. The signature may be a scalar value representing a unique allocation path. Every memory allocation call with the same allocation path may have the same signature. To determine a signature, a low-overhead hash may be applied to the LBR vector. Thus, the identification of the signature may be done quickly, with low impact on performance. Each memory buffer allocated may continue to be associated with the signature to which it was initially hashed. The association may be stored in a data structure that maps all buffers associated with a signature to the signature (i.e., many-to-one). The virtual addresses of the memory buffers may be collected along with the sizes of the memory buffers. The virtual address may be used to associate a memory buffer with the signature in the data structure. The virtual address may then be used to track accesses to the memory buffer.

At block 508, a memory allocation may be assigned to a memory tier based on profiling data associated with the signature of the memory allocation call. As indicated above, the signature may remain associated with the memory buffers allocated in memory (i.e., a signature domain). Profiling data about the buffers of a signature domain may be collected and aggregated. The aggregated profiling data for the signature domain may include information such as the frequency of accesses to the buffers of the signature domain, the total memory allocated to memory buffers of the domain, the average time that buffers of the signature domain persist in memory, etc. Using the collected profiling data, the allocation call may be allocated to an appropriate memory tier. For example, an access density and a time-space product may be used to select the tier in which to allocate the memory buffer. Any other collected profiling data that relates to the memory buffers of a signature may be used to select a memory tier as well. Additionally, an application may specify which memory allocation calls to prioritize in higher memory tiers.

At block 510, it may be determined whether an instruction to free a memory buffer is received. If an instruction to free is not received, then the process continues to collect access data and to dynamically allocate memory buffers to optimal memory tiers. If an instruction to free a memory buffer is received, the memory buffer to be freed may be removed from memory to free up the memory space for other memory allocations. At block 512, the PMU may log the lifetime of the memory buffer to be freed and store the information with the signature of the freed memory buffer. To log the lifetime, the PMU may record a time-stamp at the time of allocation of the memory buffer and a time-stamp at the time of the free instruction. The difference between the time-stamps may represent the lifetime of the memory buffer. The information associated with the signature, in particular the time-space product, may then be updated to dynamically track the cost of memory allocations for the signature. At block 514, the memory buffer may be freed, and the process may continue to allocate memory based on the collected profiling information of signature domains, continuously collecting and updating the profiling information to provide for dynamic memory tier selection.
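The lifetime logging at blocks 510-514 might look like the following sketch. The record layout and the `clock_gettime`-based time-stamps are assumptions of this sketch, not the disclosed mechanism.

```c
#include <stdint.h>
#include <time.h>

/* Per-buffer record kept from allocation until free; names are assumed. */
struct buf_record {
    uintptr_t vaddr;      /* virtual address of the buffer     */
    uint64_t  sig;        /* signature domain of the buffer    */
    uint64_t  size;       /* bytes allocated                   */
    uint64_t  alloc_ns;   /* time-stamp recorded at allocation */
};

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* On free: the difference between time-stamps is the lifetime, and the
 * time-space product (size * lifetime) is accumulated for the signature. */
static void log_lifetime(const struct buf_record *r,
                         uint64_t *domain_time_space)
{
    uint64_t lifetime_ns = now_ns() - r->alloc_ns;
    *domain_time_space += r->size * lifetime_ns;
}
```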
Figure 6 is a flow diagram of an example method for collecting access profile data associated with a memory buffer. At block 602, the PMU may monitor for the occurrence of a precise event. Precise event based sampling (PEBS) may collect samples for specified events at various intervals configured by an application. For example, a PEBS sample may be collected when a memory access misses the last level cache (LLC), when a translation lookaside buffer miss occurs, or upon other PEBS events providing indications of memory accesses. At block 604, it may be determined whether a triggered memory event occurred. If not, then the monitoring continues until a triggered memory event occurs, such as a cache miss. If a triggered memory event occurs, then at block 606 the event data is collected in a data buffer. The event data may include the virtual address of the memory access and the source of the data of the cache line, depending on the type of event being captured. The source of the data may be used to determine whether NUMA placement for memory accesses can be optimized (e.g., selecting NUMA nodes similarly to selecting memory tiers). At block 608, the collected data may be inserted into a table. The table may be indexed by data addresses so that the collected data can be identified with a signature. The process may repeat, collecting memory access data from the triggered PEBS events and adding up the access counts for each signature domain. In this manner, a relative access rate associated with the memory buffers of a signature domain may be determined on a running basis and used to dynamically select a memory tier (or NUMA node) for a memory allocation call.
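A handler for the per-sample work at blocks 606-608 might look like the following sketch. Here `find_buffer` and `domain_of` are hypothetical lookups into the address-indexed table and the signature-domain statistics from the earlier sketches, not real APIs, and configuring PEBS itself is left to the platform's performance-monitoring interface.

```c
#include <stdint.h>

struct sig_domain_stats { uint64_t total_accesses; uint64_t total_bytes; };
struct buf_record { uintptr_t vaddr; uint64_t sig; uint64_t size; };

/* Hypothetical lookups; both names are assumptions of this sketch. */
extern struct buf_record       *find_buffer(uintptr_t vaddr);
extern struct sig_domain_stats *domain_of(uint64_t sig);

/* Attribute one sampled access (e.g., an LLC-miss PEBS record) to the
 * signature domain that owns the sampled virtual address. */
static void on_pebs_sample(uintptr_t sampled_vaddr)
{
    struct buf_record *b = find_buffer(sampled_vaddr);
    if (!b)
        return;                              /* address not tracked  */
    domain_of(b->sig)->total_accesses++;     /* running access count */
}
```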
Figure 7A is a block diagram illustrating a micro-architecture for a processor 700 that implements hardware support for dynamic memory allocation in heterogeneous memory systems. Specifically, processor 700 depicts an in-order architecture core and register renaming logic and out-of-order issue/execution logic to be included in a processor according to at least one implementation of the disclosure.

Processor 700 includes a front end unit 730 coupled to an execution engine unit 750, and both are coupled to a memory unit 770. The processor 700 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, processor 700 may include a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like. In one implementation, processor 700 may be a multi-core processor or may be part of a multiprocessor system.

The front end unit 730 includes a branch prediction unit 732 coupled to an instruction cache unit 734, which is coupled to an instruction translation lookaside buffer (TLB) 736, which is coupled to an instruction fetch unit 738, which is coupled to a decode unit 740. The decode unit 740 (also known as a decoder) may decode instructions and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decoder 740 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. The instruction cache unit 734 is further coupled to the memory unit 770. The decode unit 740 is coupled to a rename/allocator unit 752 in the execution engine unit 750.

The execution engine unit 750 includes the rename/allocator unit 752 coupled to a retirement unit 754 and a set of one or more scheduler unit(s) 756. The scheduler unit(s) 756 represents any number of different scheduler circuits, including reservation stations (RS), central instruction window, etc. The scheduler unit(s) 756 is coupled to the physical register set(s) unit(s) 758. Each of the physical register set(s) units 758 represents one or more physical register sets, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. The physical register set(s) unit(s) 758 is overlapped by the retirement unit 754 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register set(s); using a future file(s), a history buffer(s), and a retirement register set(s); using register maps and a pool of registers; etc.).

Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective. The registers are not limited to any known particular type of circuit. Various different types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. The retirement unit 754 and the physical register set(s) unit(s) 758 are coupled to the execution cluster(s) 760. The execution cluster(s) 760 includes a set of one or more execution units 762 and a set of one or more memory access units 764. The execution units 762 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and operate on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).

While some implementations may include a number of execution units dedicated to specific functions or sets of functions, other implementations may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 756, physical register set(s) unit(s) 758, and execution cluster(s) 760 are shown as being possibly plural because certain implementations create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register set(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain implementations are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 764).
It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 764 is coupled to the memory unit 770, which may include a data prefetcher 780, a data TLB unit 772, a data cache unit (DCU) 774, and a level 2 (L2) cache unit 776, to name a few examples. In some implementations, DCU 774 is also known as a first level data cache (L1 cache). The DCU 774 may handle multiple outstanding cache misses and continue to service incoming stores and loads. It also supports maintaining cache coherency. The data TLB unit 772 is a cache used to improve virtual address translation speed by mapping virtual and physical address spaces. In one exemplary implementation, the memory access units 764 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 772 in the memory unit 770. The L2 cache unit 776 may be coupled to one or more other levels of cache and eventually to a main memory.

In one implementation, the data prefetcher 780 speculatively loads/prefetches data to the DCU 774 by automatically predicting which data a program is about to consume. Prefetching may refer to transferring data stored in one memory location (e.g., position) of a memory hierarchy (e.g., lower level caches or memory) to a higher-level memory location that is closer (e.g., yields lower access latency) to the processor before the data is actually demanded by the processor. More specifically, prefetching may refer to the early retrieval of data from one of the lower level caches/memory to a data cache and/or prefetch buffer before the processor issues a demand for the specific data being returned.

The processor 700 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of Imagination Technologies of Kings Langley, Hertfordshire, UK; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA).

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated implementation of the processor also includes separate instruction and data cache units and a shared L2 cache unit, alternative implementations may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some implementations, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor.
Alternatively, all of the cache may be external to the core and/or the processor.

Figure 7B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented by processor 700 of Figure 7A according to some implementations of the disclosure. The solid lined boxes in Figure 7B illustrate an in-order pipeline 701, while the dashed lined boxes illustrate a register renaming, out-of-order issue/execution pipeline 703. In Figure 7B, the pipelines 701 and 703 include a fetch stage 702, a length decode stage 704, a decode stage 706, an allocation stage 708, a renaming stage 710, a scheduling (also known as a dispatch or issue) stage 712, a register read/memory read stage 714, an execute stage 716, a write back/memory write stage 718, an exception handling stage 722, and a commit stage 724. In some implementations, the ordering of stages 702-724 may be different than illustrated and is not limited to the specific ordering shown in Figure 7B.

Figure 8 illustrates a block diagram of the micro-architecture for a processor 800 that includes logic circuits of a processor or an integrated circuit that implements hardware support for dynamic memory allocation in heterogeneous memory systems, according to an implementation of the disclosure. In some implementations, an instruction in accordance with one implementation can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes such as single and double precision integer and floating point datatypes. In one implementation, the in-order front end 801 is the part of the processor 800 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. The implementations of the dynamic memory allocation support described herein can be implemented in processor 800.

The front end 801 may include several units. In one implementation, the instruction prefetcher 816 fetches instructions from memory and feeds them to an instruction decoder 818, which in turn decodes or interprets them. For example, in one implementation, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro ops or uops) that the machine can execute. In other implementations, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one implementation. In one implementation, the trace cache 830 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 834 for execution. When the trace cache 830 encounters a complex instruction, microcode ROM (or RAM) 832 provides the uops needed to complete the operation.

Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one implementation, if more than four micro-ops are needed to complete an instruction, the decoder 818 accesses the microcode ROM 832 to complete the instruction. For one implementation, an instruction can be decoded into a small number of micro ops for processing at the instruction decoder 818. In another implementation, an instruction can be stored within the microcode ROM 832 should a number of micro-ops be needed to accomplish the operation.
The trace cache 830 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions, in accordance with one implementation, from the micro-code ROM 832. After the microcode ROM 832 finishes sequencing micro-ops for an instruction, the front end 801 of the machine resumes fetching micro-ops from the trace cache 830.

The out-of-order execution engine 803 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and reorder the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register set. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 802, slow/general floating point scheduler 804, and simple floating point scheduler 806. The uop schedulers 802, 804, 806 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 802 of one implementation can schedule on each half of the main clock cycle, while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

Register sets 808, 810 sit between the schedulers 802, 804, 806 and the execution units 812, 814, 816, 818, 820, 822, 824 in the execution block 811. There is a separate register set 808, 810 for integer and floating point operations, respectively. Each register set 808, 810 of one implementation also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register set to new dependent uops. The integer register set 808 and the floating point register set 810 are also capable of communicating data with each other. For one implementation, the integer register set 808 is split into two separate register sets: one register set for the low order 32 bits of data and a second register set for the high order 32 bits of data. The floating point register set 810 of one implementation has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.

The execution block 811 contains the execution units 812, 814, 816, 818, 820, 822, 824, where the instructions are actually executed. This section includes the register sets 808, 810 that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 800 of one implementation is comprised of a number of execution units: address generation unit (AGU) 812, AGU 814, fast ALU 816, fast ALU 818, slow ALU 820, floating point ALU 822, and floating point move unit 824. For one implementation, the floating point execution blocks 822, 824 execute floating point, MMX, SIMD, SSE, or other operations. The floating point ALU 822 of one implementation includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops.
For implementations of the disclosure, instructions involving a floating point value may be handled with the floating point hardware.

In one implementation, the ALU operations go to the high-speed ALU execution units 816, 818. The fast ALUs 816, 818 of one implementation can execute fast operations with an effective latency of half a clock cycle. For one implementation, most complex integer operations go to the slow ALU 820, as the slow ALU 820 includes integer execution hardware for long latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 812, 814. For one implementation, the integer ALUs 816, 818, 820 are described in the context of performing integer operations on 64 bit data operands. In alternative implementations, the ALUs 816, 818, 820 can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 822, 824 can be implemented to support a range of operands having bits of various widths. For one implementation, the floating point units 822, 824 can operate on 128 bit wide packed data operands in conjunction with SIMD and multimedia instructions.

In one implementation, the uop schedulers 802, 804, 806 dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 800, the processor 800 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed; the independent ones are allowed to complete. The schedulers and replay mechanism of one implementation of a processor are also designed to catch instruction sequences for text string comparison operations.

The term "registers" may refer to the on-board processor storage locations that are used as part of instructions to identify operands. In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an implementation should not be limited in meaning to a particular type of circuit. Rather, a register of an implementation is capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one implementation, integer registers store 32-bit integer data. A register set of one implementation also contains eight multimedia SIMD registers for packed data.

For the discussions herein, the registers are understood to be data registers designed to hold packed data, such as 64 bits wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions.
Similarly, 128 bits wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one implementation, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one implementation, integer and floating point data are either contained in the same register set or in different register sets. Furthermore, in one implementation, floating point and integer data may be stored in different registers or the same registers.

Implementations may be implemented in many different system types. Referring now to Figure 9, shown is a block diagram of a multiprocessor system 900 that may implement hardware support for dynamic memory allocation in heterogeneous memory systems. As shown in Figure 9, multiprocessor system 900 is a point-to-point interconnect system and includes a first processor 970 and a second processor 980 coupled via a point-to-point interconnect 950. As shown in Figure 9, each of processors 970 and 980 may be multicore processors, including first and second processor cores (i.e., processor cores 974a and 974b and processor cores 984a and 984b), although potentially many more cores may be present in the processors. While shown with two processors 970, 980, it is to be understood that the scope of the disclosure is not so limited. In other implementations, one or more additional processors may be present in a given processor.

Processors 970 and 980 are shown including integrated memory controller units 972 and 982, respectively. Processor 970 also includes as part of its bus controller units point-to-point (P-P) interfaces 976 and 978; similarly, second processor 980 includes P-P interfaces 986 and 988. Processors 970, 980 may exchange information via a point-to-point (P-P) interface 950 using P-P interface circuits 978, 988. As shown in Figure 9, IMCs 972 and 982 couple the processors to respective memories, namely a memory 932 and a memory 934, which may be portions of main memory locally attached to the respective processors.

Processors 970, 980 may exchange information with a chipset 990 via individual P-P interfaces 952, 954 using point to point interface circuits 976, 994, 986, 998. Chipset 990 may also exchange information with a high-performance graphics circuit 938 via a high-performance graphics interface 939.

Chipset 990 may be coupled to a first bus 916 via an interface 996. In one implementation, first bus 916 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or interconnect bus, although the scope of the disclosure is not so limited.

Referring now to Figure 10, shown is a block diagram of a third system 1000 that may implement hardware support for dynamic memory allocation in heterogeneous memory systems, in accordance with an implementation of the disclosure. Like elements in Figures 9 and 10 bear like reference numerals, and certain aspects of Figure 9 have been omitted from Figure 10 in order to avoid obscuring other aspects of Figure 10.

Figure 10 illustrates that the processors 1070, 1080 may include integrated memory and I/O control logic ("CL") 1072 and 1092, respectively. For at least one implementation, the CL 1072, 1092 may include integrated memory controller units such as described herein. In addition, CL 1072, 1092 may also include I/O control logic.
Figure 10 illustrates that the memories 1032, 1034 are coupled to the CL 1072, 1092, and that I/O devices 1014 are also coupled to the control logic 1072, 1092. Legacy I/O devices 1015 are coupled to the chipset 1090.

Figure 11 is an exemplary system on a chip (SoC) 1100 that may include one or more of the cores 1102A ... 1102N that may implement hardware support for dynamic memory allocation in heterogeneous memory systems. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Within the exemplary SoC 1100 of Figure 11, dashed lined boxes are features on more advanced SoCs. An interconnect unit(s) 1102 may be coupled to: an application processor 1117 which includes a set of one or more cores 1102A-N and shared cache unit(s) 1106; a system agent unit 1110; a bus controller unit(s) 1116; an integrated memory controller unit(s) 1114; a set of one or more media processors 1120 which may include integrated graphics logic 1108, an image processor 1124 for providing still and/or video camera functionality, an audio processor 1126 for providing hardware audio acceleration, and a video processor 1128 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1130; a direct memory access (DMA) unit 1132; and a display unit 1140 for coupling to one or more external displays.

Turning next to Figure 12, an implementation of a system on-chip (SoC) design that may implement hardware support for dynamic memory allocation in heterogeneous memory systems, in accordance with implementations of the disclosure, is depicted. As an illustrative example, SoC 1200 is included in user equipment (UE). In one implementation, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, smartphone, tablet, ultra-thin notebook, notebook with broadband adapter, or any other similar communication device. A UE may connect to a base station or node, which can correspond in nature to a mobile station (MS) in a GSM network. The implementations of the dynamic memory allocation support described herein can be implemented in SoC 1200.

Here, SoC 1200 includes two cores: 1206 and 1207. Similar to the discussion above, cores 1206 and 1207 may conform to an Instruction Set Architecture, such as a processor having the Intel® Architecture Core™, an Advanced Micro Devices, Inc. (AMD) processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. Cores 1206 and 1207 are coupled to cache control 1208, which is associated with bus interface unit 1209 and L2 cache 1210, to communicate with other parts of system 1200. Interconnect 1211 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which can implement one or more aspects of the described disclosure.

In one implementation, SDRAM controller 1240 may connect to interconnect 1211 via cache 1210.
Interconnect 1211 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1230 to interface with a SIM card, a boot ROM 1235 to hold boot code for execution by cores 1206 and 1207 to initialize and boot SoC 1200, an SDRAM controller 1240 to interface with external memory (e.g., DRAM 1260), a flash controller 1245 to interface with non-volatile memory (e.g., Flash 1265), a peripheral control 1250 (e.g., Serial Peripheral Interface) to interface with peripherals, video codecs 1220 and a video interface 1225 to display and receive input (e.g., touch enabled input), a GPU 1215 to perform graphics related computations, etc. Any of these interfaces may incorporate aspects of the implementations described herein.

In addition, the system illustrates peripherals for communication, such as a Bluetooth® module 1270, 3G modem 1275, GPS 1280, and Wi-Fi® 1285. Note, as stated above, a UE includes a radio for communication. As a result, these peripheral communication modules may not all be included. However, in a UE, some form of a radio for external communication should be included.

Figure 13 illustrates a diagrammatic representation of a machine in the example form of a computing system 1300 within which a set of instructions, for causing the machine to implement hardware support for dynamic memory allocation in heterogeneous memory systems according to any one or more of the methodologies discussed herein, may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The implementations of the dynamic memory allocation support described herein can be implemented in computing system 1300.

The computing system 1300 includes a processing device 1302, main memory 1304 (e.g., flash memory, dynamic random access memory (DRAM) (such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM)), etc.), a static memory 1306 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1316, which communicate with each other via a bus 1308.

Processing device 1302 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets.
Processing device 1302 may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. In one implementation, processing device 1302 may include one or more processor cores. The processing device 1302 is configured to execute the processing logic 1326 for performing the operations discussed herein.

In one implementation, processing device 1302 can be part of a processor or an integrated circuit that implements the hardware support for dynamic memory allocation described herein. Alternatively, the computing system 1300 can include other components as described herein. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).

The computing system 1300 may further include a network interface device 1318 communicably coupled to a network 1319. The computing system 1300 also may include a video display device 1310 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1312 (e.g., a keyboard), a cursor control device 1314 (e.g., a mouse), a signal generation device 1320 (e.g., a speaker), or other peripheral devices. Furthermore, computing system 1300 may include a graphics processing unit 1322, a video processing unit 1328, and an audio processing unit 1332. In another implementation, the computing system 1300 may include a chipset (not illustrated), which refers to a group of integrated circuits, or chips, that are designed to work with the processing device 1302 and control communications between the processing device 1302 and external devices. For example, the chipset may be a set of chips on a motherboard that links the processing device 1302 to very high-speed devices, such as main memory 1304 and graphic controllers, as well as linking the processing device 1302 to lower-speed peripheral buses, such as USB, PCI, or ISA buses.

The data storage device 1316 may include a computer-readable storage medium 1324 on which is stored software 1326 embodying any one or more of the methodologies of functions described herein. The software 1326 may also reside, completely or at least partially, within the main memory 1304 as instructions 1326 and/or within the processing device 1302 as processing logic during execution thereof by the computing system 1300; the main memory 1304 and the processing device 1302 also constitute computer-readable storage media.

The computer-readable storage medium 1324 may also be used to store instructions 1326 utilizing the processing device 1302, and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1324 is shown in an example implementation to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
The term "computer-readable storage medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instruction for execution by the machine and that cause the machine to perform any one or more of the methodologies of the disclosed implementations. The term "computer-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.The following examples pertain to further implementations.Example 1 is a processor comprising: a processing core to execute a memory allocation call to allocate memory in a memory device; a last branch record (LBR) register to store information indicative of a recently retired branch instruction; and a performance monitoring unit (PMU) coupled to the LBR register, the PMU comprising a logic circuit to: retrieve the information from the LBR register prior to a memory allocation call being received by the processing core; and identify, based on the information, a signature of the memory allocation call; and provide the signature to the processing core.In Example 2, the subject matter of Example 1, wherein the processing core is to execute software, wherein the software is to determine a memory tier to allocate memory for the memory allocation call using the signature prior to execution of the memory allocation call, and wherein the signature identifies an allocation path of instructions executed prior to execution of the memory allocation call.In Example 3, the subject matter of any one of Examples 1-2, wherein the logic circuit comprises: a hash circuit to directly hash the recently retired branch instruction to the signature.In Example 4, the subject matter of any one of Examples 1-3, wherein the signature is a scalar value that is associated with an allocation path.In Example 5, the subject matter of any one of Examples 1-4, wherein the signature is associated with one or more memory buffers in memory previously allocated through the allocation path.In Example 6, the subject matter of any one of Examples 1-5, wherein in response to executing the memory allocation call, the processing core is to allocate a memory buffer in the memory tier.In Example 7 the subject matter of any one of Examples 1-6, wherein the PMU is to: collect an address and a size of the memory buffer from the memory allocation call; and collect data associated with accesses of the memory buffer.Various implementations may have different combinations of the structural features described above. 
For instance, all optional features of the processors and methods described above may also be implemented with respect to a system described herein, and specifics in the examples may be used anywhere in one or more implementations.

Example 8 is a system comprising: a memory; and a processing device, operatively coupled to the memory, the processing device to: retrieve last branch records (LBRs) from LBR registers prior to executing a memory allocation call; identify, based on the LBRs, an execution context of the memory allocation call; and determine a memory tier to allocate memory for the memory allocation call based on the execution context.

In Example 9, the subject matter of Example 8, wherein the LBRs retrieved from the LBR registers comprise an LBR vector, representing information regarding retired branch instructions, stored in the LBR registers.

In Example 10, the subject matter of any one of Examples 8-9, wherein the processing device, to identify the execution context of the memory allocation call, is to apply a hash function to the LBR vector to identify a signature associated with the execution context.

In Example 11, the subject matter of any one of Examples 8-10, wherein the execution context comprises an allocation path, the allocation path being a series of retired branch instructions leading up to the memory allocation call.

In Example 12, the subject matter of any one of Examples 8-11, wherein the processing device is further to store a virtual address pointer generated for the memory allocation call with the signature in an associative data structure.

In Example 13, the subject matter of any one of Examples 8-12, wherein the signature is a scalar value that is associated with a memory buffer, wherein the memory buffer is an allocation of the memory that was assigned to the signature when a previous memory allocation call was executed.

In Example 14, the subject matter of any one of Examples 8-13, the processing device further to: allocate a memory buffer in response to executing the memory allocation call; collect an address of the memory buffer and a size of the memory buffer; and collect data associated with accesses of memory at the address of the memory buffer.

Various implementations may have different combinations of the structural features described above.
For instance, all optional features of the processors and methods described above may also be implemented with respect to a system described herein, and specifics in the examples may be used anywhere in one or more implementations.

Example 15 is a method comprising: retrieving a last branch record (LBR) vector from a stack of LBR registers prior to a memory allocation call; determining a unique signature using the LBR vector, the signature representing an allocation path of the memory allocation call; selecting a tier of memory from a plurality of tiers of memory based on the signature; and assigning a memory buffer for the memory allocation call in the tier of memory.

In Example 16, the subject matter of Example 15, further comprising: collecting a size of the memory buffer and a virtual address of the memory buffer; monitoring accesses of memory at the virtual address of the memory buffer; and updating information associated with the signature based on the accesses of memory at the virtual address of the memory buffer.

In Example 17, the subject matter of any one of Examples 15-16, wherein selecting the tier of memory comprises: identifying access information associated with the signature; and determining, based on the access information, a cost of the memory allocation.

In Example 18, the subject matter of any one of Examples 15-17, further comprising aggregating memory access information for a plurality of memory buffers associated with the signature.

In Example 19, the subject matter of any one of Examples 15-18, wherein selecting the tier of memory comprises: determining access rates of memory buffers associated with the signature; and determining a total amount of memory allocated to memory buffers associated with the signature.

In Example 20, the subject matter of any one of Examples 15-19, further comprising: determining an access density associated with the signature based on the access rates and the total amount of memory allocated to memory buffers associated with the signature; and categorizing the signature into one of a plurality of categories based on the access density associated with the signature.

Example 21 is a system comprising means to perform a method of any one of Examples 15-20.

Example 22 is at least one non-transitory machine readable storage medium comprising a plurality of instructions that, when executed, implement a method or realize an apparatus of any one of Examples 15-20.

Example 23 is an apparatus comprising a processor configured to perform the method of any one of Examples 15-20.

While the disclosure has been described with respect to a limited number of implementations, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this disclosure.

In the description herein, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and micro architectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation, etc., in order to provide a thorough understanding of the disclosure. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the disclosure.
In other instances, well known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic, and other specific operational details of a computer system have not been described in detail in order to avoid unnecessarily obscuring the disclosure.

The implementations are described with reference to hardware support for dynamic memory allocation in heterogeneous memory systems in specific integrated circuits, such as in computing platforms or microprocessors. The implementations may also be applicable to other types of integrated circuits and programmable logic devices. For example, the disclosed implementations are not limited to desktop computer systems or portable computers, such as the Intel® Ultrabooks™ computers, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, systems on a chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. It is described that the system can be any kind of computer or embedded system. The disclosed implementations may especially be used for low-end devices, like wearable devices (e.g., watches), electronic implants, sensory and control infrastructure devices, controllers, supervisory control and data acquisition (SCADA) systems, or the like. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the implementations of the methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a 'green technology' future balanced with performance considerations.

Although the implementations herein are described with reference to a processor, other implementations are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of implementations of the disclosure can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of implementations of the disclosure are applicable to any processor or machine that performs data manipulations. However, the disclosure is not limited to processors or machines that perform 512 bit, 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data operations and can be applied to any processor and machine in which manipulation or management of data is performed. In addition, the description herein provides examples, and the accompanying drawings show various examples for the purposes of illustration.
However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of implementations of the disclosure rather than to provide an exhaustive list of all possible implementations of the disclosure.

Although the above examples describe instruction handling and distribution in the context of execution units and logic circuits, other implementations of the disclosure can be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which when performed by a machine cause the machine to perform functions consistent with at least one implementation of the disclosure. In one implementation, functions associated with implementations of the disclosure are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the disclosure. Implementations of the disclosure may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to implementations of the disclosure. Alternatively, operations of implementations of the disclosure might be performed by specific hardware components that contain fixed-function logic for performing the operations, or by any combination of programmed computer components and fixed-function hardware components.

Instructions used to program logic to perform implementations of the disclosure can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROM), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model.
In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory, or a magnetic or optical storage such as a disc, may be the machine-readable medium that stores information transmitted via an optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of implementations of the disclosure.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one implementation, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another implementation, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the micro-controller to perform predetermined operations. And as can be inferred, in yet another implementation, the term module (in this example) may refer to the combination of the micro-controller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one implementation, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

Use of the phrase 'configured to,' in one implementation, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation the 1 or 0 output is to enable the clock.
Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases 'to,' 'capable of/to,' and/or 'operable to,' in one implementation, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of 'to,' 'capable to,' or 'operable to,' in one implementation, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one implementation, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one implementation, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.

The implementations of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium that is executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; and other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals), which are to be distinguished from the non-transitory media that may receive information therefrom.
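Returning to the representation of values discussed above, the equivalence of the decimal, binary, and hexadecimal forms, and the reset/set convention, can be made concrete with a short, purely illustrative C fragment. This sketch is not part of the described implementations, and all names in it are invented here:

    #include <stdio.h>

    int main(void) {
        /* The decimal number ten, the binary value 1010, and the
           hexadecimal letter A all name the same stored value. */
        int from_decimal = 10;
        int from_binary  = (1 << 3) | (1 << 1);  /* bits 3 and 1 set: 1010 in binary */
        int from_hex     = 0xA;
        printf("%d %d %d\n", from_decimal, from_binary, from_hex);  /* prints: 10 10 10 */

        /* Reset modeled as a default high logical value and set as an
           updated low logical value, per the convention described above. */
        unsigned reset_state = 1, set_state = 0;
        printf("reset=%u set=%u\n", reset_state, set_state);
        return 0;
    }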
Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the disclosure. Thus, the appearances of the phrases "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations.

In the foregoing specification, a detailed description has been given with reference to specific exemplary implementations. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of implementation and other exemplary language does not necessarily refer to the same implementation or the same example, but may refer to different and distinct implementations, as well as potentially the same implementation.

Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is, here and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. The blocks described herein can be hardware, software, firmware, or a combination thereof.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "defining," "receiving," "determining," "issuing," "linking," "associating," "obtaining," "authenticating," "prohibiting," "executing," "requesting," "communicating," or the like, refer to the actions and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage, transmission, or display devices.

The words "example" or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X includes A or B" is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term "an implementation" or "one implementation" throughout is not intended to mean the same implementation unless described as such. Also, the terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
A process for fabricating a non-volatile memory device in which extraneous electrical charge is removed from charge-storage layers during fabrication includes exposing a charge-storage layer to infrared radiation prior to forming additional layers of the non-volatile memory cell. For example, in a memory cell incorporating a dielectric floating-gate electrode, such as silicon nitride, the infrared radiation exposure step is carried out after forming the floating-gate electrodes and prior to formation of the control-gate electrode. By exposing the charge-storage layer to infrared radiation prior to forming additional layers, extraneous electrical charge arising from previous processing steps can be efficiently removed from the floating-gate electrodes.
What is claimed is:
1. A process for fabricating a non-volatile memory device comprising the steps of: providing a semiconductor substrate; forming a charge-storage layer overlying the semiconductor substrate, wherein the charge-storage layer has an exposed surface region; forming a buried bit-line in the semiconductor substrate; and after the buried bit-line is formed, bombarding the exposed surface region with sufficient infrared radiation to remove unwanted electrical charge from the charge-storage layer.
2. The process of claim 1, wherein the step of forming a charge-storage layer comprises forming a silicon nitride layer.
3. The process of claim 1, wherein the step of forming a charge-storage layer comprises the steps of: forming a first silicon oxide layer overlying the substrate; forming a silicon nitride layer overlying the first silicon oxide layer; and forming a second silicon oxide layer overlying the silicon nitride layer.
4. The process of claim 3, wherein the step of bombarding the charge-storage layer with infrared radiation comprises removing unwanted electrical charge from the silicon nitride layer.
5. The process of claim 1, wherein the step of forming the charge-storage layer comprises the steps of: chemical vapor depositing an ONO layer; forming a resist pattern on the ONO layer; and plasma etching the ONO layer.
6. The process of claim 1, wherein the step of bombarding the exposed surface region with infrared radiation comprises using infrared radiation having a wavelength of about 600 nm to about 1100 nm.
7. The process of claim 1, wherein the process of forming a charge-storage layer comprises forming a polycrystalline silicon layer.
8. The process of claim 1, further comprising the step of forming a control-gate electrode overlying the charge-storage layer.
9. A process for fabricating a non-volatile memory device comprising the steps of: providing a semiconductor substrate; forming a charge-storage layer overlying the semiconductor substrate; forming a resist pattern on the charge-storage layer; etching the charge-storage layer; removing the resist pattern; forming a buried bit-line in the semiconductor substrate; and after the buried bit-line is formed, exposing the charge-storage layer to sufficient infrared radiation to remove unwanted electrical charge from the charge-storage layer.
10. The process of claim 9, wherein the step of forming a charge-storage layer comprises forming a silicon nitride layer.
11. The process of claim 9, wherein the step of forming a charge-storage layer comprises forming a polycrystalline silicon layer.
12. The process of claim 9, further comprising the step of forming a control-gate electrode overlying the charge-storage layer.
13. The process of claim 12, wherein the step of forming a control-gate electrode comprises forming an insulating layer; and forming a layer of polycrystalline silicon overlying the insulating layer.
14. The process of claim 9, wherein the step of etching the charge-storage layer exposes surface regions of the semiconductor substrate, and wherein the process further comprises the step of forming bit-line oxide regions on the semiconductor substrate prior to the step of exposing the charge-storage layer to infrared radiation.
15. A process for fabricating a non-volatile semiconductor device comprising the steps of: providing a semiconductor substrate; forming a charge-storage layer overlying the semiconductor substrate; forming a buried bit-line in the semiconductor substrate; after the buried bit-line is formed, exposing the charge-storage layer to sufficient infrared radiation to remove unwanted electrical charge from the charge-storage layer; and forming a control-gate layer overlying the charge-storage layer.
16. The process of claim 15, wherein the step of forming a charge-storage layer comprises forming a patterned charge-storage layer on a portion of the semiconductor substrate and leaving an exposed portion of the substrate, and wherein the process further comprises forming an oxide region in the exposed portion of the substrate.
17. The process of claim 15, wherein the step of forming a charge-storage layer comprises depositing a layer selected from the group consisting of silicon nitride and polycrystalline silicon.
18. The process of claim 15, wherein the step of forming a charge-storage layer comprises forming a layer having process-induced electrical charge therein.
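One ordering constraint runs through independent claims 1, 9, and 15: the infrared exposure occurs after the buried bit-line is formed and, where a control gate is recited, before the control-gate electrode is formed. As a purely illustrative aside, and not a description of the claimed process itself, that ordering can be sketched as a simple sequence check in C; every identifier below is hypothetical:

    #include <stdio.h>

    /* Hypothetical step identifiers for the claimed fabrication sequence. */
    enum step {
        FORM_CHARGE_STORAGE_LAYER,
        FORM_BURIED_BIT_LINE,
        EXPOSE_TO_INFRARED,       /* removes extraneous charge */
        FORM_CONTROL_GATE,
    };

    /* Returns 1 if IR exposure follows bit-line formation and, when a control
       gate is present, precedes it, mirroring the ordering in the claims. */
    static int order_is_valid(const enum step *seq, int n) {
        int bit_line = -1, infrared = -1, gate = -1;
        for (int i = 0; i < n; i++) {
            if (seq[i] == FORM_BURIED_BIT_LINE) bit_line = i;
            if (seq[i] == EXPOSE_TO_INFRARED)   infrared = i;
            if (seq[i] == FORM_CONTROL_GATE)    gate = i;
        }
        if (bit_line < 0 || infrared < 0) return 0;
        if (infrared < bit_line) return 0;
        if (gate >= 0 && gate < infrared) return 0;
        return 1;
    }

    int main(void) {
        enum step flow[] = { FORM_CHARGE_STORAGE_LAYER, FORM_BURIED_BIT_LINE,
                             EXPOSE_TO_INFRARED, FORM_CONTROL_GATE };
        printf("valid order: %d\n", order_is_valid(flow, 4));  /* prints 1 */
        return 0;
    }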
FIELD OF THE INVENTION
This invention relates, generally, to the fabrication of semiconductor devices and, more particularly, to the fabrication of non-volatile memory devices such as EEPROM devices, and the like.

BACKGROUND OF THE INVENTION
Non-volatile memory devices are currently in widespread use in electronic components that require the retention of information when electrical power is terminated. Non-volatile memory devices include read-only-memory (ROM), programmable-read-only-memory (PROM), erasable-programmable-read-only-memory (EPROM), and electrically-erasable-programmable-read-only-memory (EEPROM) devices. EEPROM devices differ from other non-volatile memory devices in that they can be electrically programmed and erased. Flash EEPROM devices are similar to EEPROM devices in that memory cells can be programmed and erased electrically. However, Flash EEPROM devices enable the erasing of all memory cells in the device using a single electrical current pulse.

Typically, an EEPROM device includes a floating-gate electrode upon which electrical charge is stored. The floating-gate electrode overlies a channel region residing between source and drain regions in a semiconductor substrate. The floating-gate electrode together with the source and drain regions forms an enhancement transistor. By storing electrical charge on the floating-gate electrode, the threshold voltage of the enhancement transistor is brought to a relatively high value. Correspondingly, when charge is removed from the floating-gate electrode, the threshold voltage of the enhancement transistor is brought to a relatively low value. The threshold level of the enhancement transistor determines the current flow through the transistor when the transistor is turned on by the application of appropriate voltages to the gate and drain. When the threshold voltage is high, no current will flow through the transistor, which is defined as a logic 0 state. Correspondingly, when the threshold voltage is low, current will flow through the transistor, which is defined as a logic 1 state.

Since the operation of an EEPROM device depends upon the presence or absence of charge on the floating-gate electrode, memory manufacturers typically take steps to ensure that all memory cells are erased prior to shipment of memory devices to customers. Typically, the data-erase operation involves applying appropriate erase voltages to the memory array in order to remove electrical charge from the floating-gate electrodes in the array. It is important that no electrical charge remain on any floating-gate electrode in a memory array prior to shipment. Extraneous charge in the memory array can result in programming errors and other operational anomalies.

Advances in EEPROM device technology have led to the use of certain dielectric materials for the fabrication of floating-gate electrodes. For example, advanced EEPROM devices can be fabricated with silicon nitride floating-gate electrodes. Silicon nitride is among a group of dielectric materials that possess the capability to store electrical charge in isolated regions within the dielectric material. The ability of silicon nitride to store electrical charge in isolated regions has led to its use in advanced EEPROM technology, such as two-bit non-volatile memory devices.

Although the ability of materials such as silicon nitride to store electrical charge in isolated regions has enabled the fabrication of advanced EEPROM devices, memory cells incorporating these materials must be carefully fabricated.
In particular, the storage and removal of electrical charge from isolated regions of a single layer of silicon nitride in an EEPROM memory cell requires that adequate steps be taken to ensure that extraneous electrical charge does not inadvertently remain on floating-gate electrodes prior to device shipment. Accordingly, advances in non-volatile fabrication technology are necessary to ensure proper programming and operation of non-volatile memory devices incorporating floating-gate electrodes fabricated with dielectric materials.

SUMMARY OF THE INVENTION
The present invention is for a process for fabricating a non-volatile memory device in which extraneous electrical charge is removed from charge-storage layers during fabrication of the non-volatile memory device. By taking steps to remove electrical charge from charge-storage layers during device fabrication, extraneous electrical charge induced during fabrication can be efficiently removed. For example, extraneous electrical charge can be generated by conventional processing operations, such as chemical-vapor-deposition (CVD), plasma etching, and the like. The process of the present invention efficiently removes process-induced electrical charge by exposing the charge-storage layer to infrared radiation during device fabrication. The inventive process incorporates infrared radiation exposure of the charge-storage layer at a point in the process prior to the formation of overlying layers, such as control-gate electrodes, interlevel dielectric layers, metal interconnect layers, and the like. By exposing the charge-storage layer at an intermediate point in the fabrication process, a high-efficiency charge removal methodology is realized.

In one form, a process is provided in which a charge-storage layer is formed to overlie a semiconductor substrate. The charge-storage layer has exposed surface regions which are bombarded by infrared radiation to remove electrical charge from the charge-storage layer.

BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1-5 illustrate, in cross-section, processing steps in accordance with one embodiment of the invention; and FIG. 6 illustrates, in cross-section, a non-volatile memory device having a stacked-gate electrode fabricated in accordance with the invention.

It will be appreciated that, for simplicity and clarity of illustration, elements shown in the Figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to each other. Further, where considered appropriate, reference numerals have been repeated among the Figures to indicate corresponding elements.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
FIG. 1 illustrates a cross-section view of a portion of a semiconductor substrate 10 having already undergone several processing steps in accordance with the invention. An ONO layer 12 overlies a principal surface 14 of semiconductor substrate 10. ONO layer 12 includes a first silicon oxide layer 16 overlying principal surface 14, a silicon nitride layer 18 overlying first silicon oxide layer 16, and a second silicon oxide layer 20 overlying silicon nitride layer 18. In accordance with the invention, ONO layer 12 can be fabricated by a variety of fabrication techniques. For example, first silicon oxide layer 16 can be thermally grown on principal surface 14, followed by a CVD silicon nitride deposition process and a CVD silicon oxide deposition process.
Alternatively, after thermally growing first silicon oxide layer 16 and forming silicon nitride layer 18 by CVD, a thermal oxidation process can be carried out to grow second silicon oxide layer 20 on silicon nitride layer 18. In yet another ONO formation process, ONO layer 12 can be formed by first growing or depositing a thick silicon oxide layer followed by a nitrogenation process in which nitrogen is incorporated into the silicon oxide layer. The nitrogen can be incorporated by a nitrogen annealing process, nitrogen ion implantation, and the like.

In the present embodiment, ONO layer 12 and, in particular, silicon nitride layer 18, will function as a floating-gate electrode in an EEPROM device. Accordingly, the fabrication of ONO layer 12 is carefully performed to ensure that a high-quality silicon nitride layer is fabricated. Silicon nitride layer 18 will be used to store electrical charge in isolated regions of the silicon nitride layer during operation of the memory device.

Following the formation of ONO layer 12, a patterned resist layer 22 is formed on the surface of second silicon oxide layer 20, as illustrated in FIG. 2. Patterned resist layer 22 can be formed with a variety of resist materials commonly used in semiconductor fabrication. For example, patterned resist layer 22 can be formed by depositing a layer of positive photoresist, followed by exposure of the resist to optical radiation and subsequent development in a developer solution. Alternatively, patterned resist layer 22 can be formed by depositing a deep-UV resist followed by exposure to UV radiation and subsequent chemical development. Additionally, patterned resist layer 22 can be formed by deposition of an X-ray resist material, followed by exposure to X-ray radiation and chemical development.

After forming patterned resist layer 22, pocket regions 24 are formed in semiconductor substrate 10. Pocket regions 24 are formed to partially underlie the sections of patterned resist layer 22. In a preferred embodiment, pocket regions 24 are formed by an angled ion implantation process. The angled ion implantation process is carried out at an offset angle with respect to the normal of principal surface 14. By implanting at an offset angle of incidence, dopant atoms can be placed in semiconductor substrate 10 in regions underlying the edges of the sections of patterned resist layer 22. Preferably, pocket regions 24 are p-type regions formed by the angled ion implantation of boron.

Once pocket regions 24 are formed, an etching process is carried out to form floating-gate electrodes 26, 28 and 30, as illustrated in FIG. 3. Preferably, floating-gate electrodes 26, 28 and 30 are formed by anisotropic etching of ONO layer 12. The anisotropic etching process directionally etches ONO layer 12, such that floating-gate electrodes 26, 28 and 30 are formed to have substantially vertical sidewalls.

After forming the floating-gate electrodes, a doping process is carried out to form buried bit-lines 32 and 34 in semiconductor substrate 10. In a preferred embodiment of the invention, buried bit-lines 32 and 34 are formed by ion implantation into semiconductor substrate 10 using patterned resist layer 22 as a doping mask.
Upon completion of the doping process, buried bit-lines 32 and 34 reside in semiconductor substrate 10, such that pocket regions 24 lie adjacent to a portion of the perimeter of buried bit-lines 32 and 34.

Those skilled in the art will recognize that the process used to form pocket regions 24 and buried bit-lines 32 and 34 can be different from that just described. For example, buried bit-lines 32 and 34 can be formed prior to forming pocket regions 24. In another alternative process, ONO layer 12 can be etched prior to forming pocket regions 24. In yet another alternative process, buried bit-lines 32 and 34 can be formed prior to etching ONO layer 12. Accordingly, all such process variations are within the scope of the present invention.

In a preferred embodiment of the invention, pocket regions 24 are formed by ion implantation of boron at a dose of about 0.2E13 ions/cm² to about 4.5E13 ions/cm² and at an implant energy of about 20 keV to about 200 keV. Also, buried bit-lines 32 and 34 are preferably formed by ion implantation of arsenic at a dose of about 1E15 ions/cm² to about 12E15 ions/cm² and at an implant energy of about 15 keV to about 180 keV. Additionally, in a preferred embodiment, first oxide layer 16 is formed to a thickness of about 40 angstroms to about 60 angstroms, and first oxide layer 16 is not removed by the ONO etching process until after formation of buried bit-lines 32 and 34.

After removing patterned resist layer 22, bit-line oxide regions 36 and 38 are formed in semiconductor substrate 10. Preferably, bit-line oxide regions 36 and 38 are formed by a thermal oxidation process using the floating-gate electrodes as an oxidation mask. Those skilled in the art will recognize that ONO structures, such as ONO layer 12, are resistant to thermal oxidation, such that regions of semiconductor substrate 10 underlying the floating-gate electrodes will be protected from the thermal oxidation process.

In accordance with the invention, after forming buried bit-lines 32 and 34, resist layer 22 is removed and the floating-gate electrodes are exposed to infrared radiation, as illustrated in FIG. 4. In a preferred embodiment of the invention, the floating-gate electrodes are exposed to infrared radiation having a wavelength of about 600 nm to about 1100 nm, more preferably about 800 nm to about 1000 nm. Any extraneous electrical charge present in silicon nitride layer 18 is removed by exposure to infrared radiation.

Extraneous electrical charge can arise from several sources. For example, during the previously described processing steps, electrical charge can become trapped in silicon nitride layer 18. The extraneous electrical charge can arise in silicon nitride layer 18 during CVD processing, plasma etching, ion implantation, and the like. By performing an infrared radiation exposure step at a point in time prior to forming additional layers, such as dielectric layers, control-gate electrodes, and the like, over the floating-gate electrodes, a highly efficient charge removal process is realized. As illustrated in FIG. 4, the infrared radiation only needs to penetrate second oxide layer 20 in order to bathe silicon nitride layer 18 in radiation.

Removing electrical charge from the floating-gate electrodes, such as floating-gate electrodes 26 and 28, is important in view of the close proximity of the floating-gate electrodes to channel regions 40 and 42 in semiconductor substrate 10.
During operation of an EEPROM memory cell, electrical charge is injected into silicon nitride layer 18 from pocket regions 24. For proper operation of an EEPROM memory device, such as a 2-bit memory device, the electrical charge injected into silicon nitride layer 18 must remain in isolated regions in close proximity to the pocket regions. Once this is accomplished, the electrical field experienced by channel regions 40 and 42 will vary depending upon the presence or absence of charge in the isolated regions of silicon nitride layer 18. Given the extremely small feature sizes to which state-of-the-art EEPROM devices are fabricated, even an extremely small amount of unwanted electrical charge can severely disrupt the electrical field established by the floating-gate electrodes in channel regions 40 and 42. The process of the invention maximizes the removal of extraneous electrical charge by exposing silicon nitride layer 18 to infrared radiation at an intermediate point in the non-volatile memory fabrication process.

As illustrated in FIG. 5, once extraneous charge has been removed from the floating-gate electrodes, a control-gate electrode 44 is formed. Preferably, control-gate electrode 44 is formed by depositing a layer of polycrystalline silicon by a CVD process, followed by patterning and etching to form thin control-gate lines overlying the substrate. Control-gate electrode 44 overlies the floating-gate electrodes 26 and 28, and bit-line oxide regions 36 and 38. In accordance with the invention, additional infrared radiation exposure steps can be carried out at selected stages of the non-volatile memory fabrication process. Accordingly, a second infrared radiation exposure step can be carried out after forming control-gate electrode 44. In addition to removing electrical charge from dielectric materials such as silicon nitride layer 18, the infrared radiation exposure step can remove electrical charge from electrically conductive materials, such as the polycrystalline silicon forming control-gate electrode 44. Accordingly, the present invention is not limited to removal of electrical charge from floating-gate electrodes fabricated with insulating materials and can beneficially be employed in the fabrication of non-volatile memory devices having polycrystalline silicon floating-gate electrodes.

Shown in FIG. 6 is a cross-sectional view of a non-volatile memory device 46 having a stacked-gate electrode structure 48. Stacked-gate electrode structure 48 overlies a channel region 50 formed in a semiconductor substrate 52. A source region 54 and a drain region 56 reside in semiconductor substrate 52 and are separated by channel region 50. Stacked-gate electrode structure 48 includes a first gate dielectric layer 58 overlying channel region 50 and a floating-gate electrode 60 overlying gate dielectric layer 58. A control-gate electrode 62 is separated from floating-gate electrode 60 by an inter-gate dielectric layer 64.

In accordance with the invention, following the deposition and etching process used to form floating-gate electrode 60, an infrared exposure step is carried out to remove extraneous electrical charge from floating-gate electrode 60. Floating-gate electrode 60 can be formed from a variety of semiconductive materials, such as polycrystalline silicon, refractory metal silicides, amorphous silicon, and the like.
The infrared exposure process can be advantageously employed to remove extraneous electrical charge from semiconductive materials used to form a floating-gate electrode.

Thus, it is apparent that there has been described, in accordance with the invention, a process for fabricating a non-volatile memory device that fully provides the advantages set forth above. Although the invention has been described and illustrated with reference to specific, illustrative embodiments thereof, it is not intended that the invention be limited to those illustrative embodiments. Those skilled in the art will recognize that variations and modifications can be made without departing from the spirit of the invention. For example, in addition to the non-volatile memory devices illustrated above, the process of the invention can be carried out to fabricate single-poly non-volatile memory cells. Single-poly non-volatile memory cells are often employed in the memory arrays of standard logic devices, microcontroller devices, and the like. It is, therefore, intended to include within the invention all such variations and modifications as fall within the scope of the appended claims and equivalents thereof.
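As a worked aside on the preferred wavelength range above: photon energy in electron-volts is approximately 1239.84 divided by the wavelength in nanometers, so the 600 nm to 1100 nm band corresponds to roughly 2.07 eV down to 1.13 eV. The following C fragment, purely illustrative and not part of the described process, computes these bounds:

    #include <stdio.h>

    /* Photon energy in eV from wavelength in nm: E = hc/lambda,
       with hc approximately 1239.84 eV*nm. */
    static double photon_energy_ev(double wavelength_nm) {
        return 1239.84 / wavelength_nm;
    }

    int main(void) {
        /* Wavelength bounds from the preferred embodiment:
           600-1100 nm, more preferably 800-1000 nm. */
        const double bands[][2] = { { 600.0, 1100.0 }, { 800.0, 1000.0 } };
        for (int i = 0; i < 2; i++)
            printf("%4.0f-%4.0f nm -> %.2f-%.2f eV\n",
                   bands[i][0], bands[i][1],
                   photon_energy_ev(bands[i][0]),
                   photon_energy_ev(bands[i][1]));
        return 0;
    }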
Aspects of the embodiments are directed to a port comprising hardware to support a multi-lane link, the link comprising a lane that comprises a first differential signal pair and a second differential signal pair. Link configuration logic, implemented at least in part in hardware circuitry, can determine that the port comprises hardware to support one or both of receiving data on the first differential signal pair or transmitting data on the second differential signal pair, and can reconfigure the first differential signal pair to receive data with the second differential signal pair or reconfigure the second differential signal pair to transmit data with the first differential signal pair. The port is to transmit data or receive data based on the reconfiguration of one or both of the first differential signal pair and the second differential signal pair.
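Treating each lane as a transmit pair plus a receive pair, the reconfiguration summarized above amounts to flipping the direction of one pair when the port's hardware supports it. The C sketch below is illustrative only; all identifiers are invented here and none come from the claimed implementation:

    #include <stdbool.h>
    #include <stdio.h>

    enum pair_dir { DIR_TX, DIR_RX };

    /* One lane: a differential pair initially configured to transmit
       and a second differential pair initially configured to receive. */
    struct lane {
        enum pair_dir first_pair;   /* initially DIR_TX */
        enum pair_dir second_pair;  /* initially DIR_RX */
    };

    /* Hypothetical capability bits advertised during link training. */
    struct port_caps {
        bool rx_on_first_pair;   /* can receive on the TX-default pair  */
        bool tx_on_second_pair;  /* can transmit on the RX-default pair */
    };

    /* Reconfigure one pair so both pairs of the lane point the same way,
       yielding an asymmetric link, if the hardware advertises support. */
    static bool make_asymmetric(struct lane *l, struct port_caps c, bool want_tx)
    {
        if (want_tx && c.tx_on_second_pair) {
            l->second_pair = DIR_TX;            /* both pairs now transmit */
            return true;
        }
        if (!want_tx && c.rx_on_first_pair) {
            l->first_pair = DIR_RX;             /* both pairs now receive  */
            return true;
        }
        return false;                           /* keep symmetric default  */
    }

    int main(void)
    {
        struct lane l = { DIR_TX, DIR_RX };
        struct port_caps caps = { .rx_on_first_pair = false,
                                  .tx_on_second_pair = true };
        if (make_asymmetric(&l, caps, /*want_tx=*/true))
            printf("lane reconfigured: both pairs transmit\n");
        return 0;
    }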
1. An apparatus for configuring a multi-lane link, the apparatus comprising: a port including hardware to support the multi-lane link, the link including a lane that includes a first differential signal pair and a second differential signal pair, wherein the first differential signal pair is initially configured to transmit data and the second differential signal pair is initially configured to receive data; and link configuration logic, implemented at least in part in hardware circuitry, to: determine that the port includes hardware to support one or both of receiving data on the first differential signal pair or transmitting data on the second differential signal pair, and reconfigure the first differential signal pair to receive data with the second differential signal pair or reconfigure the second differential signal pair to transmit data with the first differential signal pair; wherein the port is to send or receive data based on the reconfiguration of one or both of the first differential signal pair and the second differential signal pair.
2. The apparatus of claim 1, wherein the port comprises a PCIe-based port.
3. The apparatus of claim 1, wherein the link configuration logic is to receive an announcement during a link training phase of operation, the announcement indicating that the port includes hardware to support one or both of receiving data on the first differential signal pair or transmitting data on the second differential signal pair.
4. The apparatus of claim 1, wherein the link configuration logic is to perform link equalization for the first differential signal pair and the second differential signal pair.
5. The apparatus of claim 1, wherein the apparatus comprises a buffer memory coupled to the port to buffer transmit data to be transmitted on the first differential signal pair and the second differential signal pair, or to buffer receive data received on the first differential signal pair and the second differential signal pair.
6. The apparatus of claim 5, wherein the buffer memory includes a common stack for each of the first differential signal pair and the second differential signal pair.
7. The apparatus of claim 5, wherein the buffer memory includes a first stack for the first differential signal pair and a second stack for the second differential signal pair.
8. The apparatus of claim 1, wherein the port includes hardware to support multiple TX lanes and multiple RX lanes, and wherein the port includes hardware to receive data on a subset of the multiple TX lanes and/or to send data on a subset of the multiple RX lanes.
9. The apparatus of claim 8, wherein the link configuration logic is to assign a lane number to the subset of the multiple TX lanes or the subset of the multiple RX lanes.
10. The apparatus of claim 1, wherein the port includes hardware to receive control signaling on the first differential signal pair or to send control signaling on the second differential signal pair.
11. The apparatus of claim 1, wherein the link configuration logic is to determine to reconfigure the first differential signal pair or the second differential signal pair based on bandwidth utilization information.
12. A method for performing lane configuration in a multi-lane link, the method comprising: detecting a connection of a device to a host device across the link, wherein the link includes a first signal lane and a second signal lane, the first signal lane initially configured to send data from the device to the host device and the second signal lane initially configured to receive data from the host device at the device; receiving a capability announcement from the device, the capability announcement indicating that the device can support at least one of conversion of the first signal lane to receive data or conversion of the second signal lane to send data; performing lane configuration to reconfigure the first signal lane to receive data or reconfigure the second signal lane to send data; and transmitting data over the link based on the reconfiguration of one or both of the first signal lane and the second signal lane.
13. The method of claim 12, further comprising: performing link training on one or more lanes connecting the host to the device; detecting a capability announcement during link training, the capability announcement indicating that the device can support at least one of conversion of the first signal lane to receive data or conversion of the second signal lane to send data; configuring the first signal lane to receive data or configuring the second signal lane to send data; and performing equalization on the lanes during link training.
14. The method of claim 13, further comprising entering an L0 state of an Active State Power Management (ASPM) protocol after completing link training.
15. The method of claim 13, further comprising determining a bandwidth utilization capability of the device based on link training; and configuring the multi-lane link to be asymmetric based at least in part on the bandwidth utilization capability.
16. The method of claim 13, further comprising detecting an indication from the device to return one or more lanes to a default state; and reconfiguring the link to return to the default state.
17. A system comprising: a host including a data processor, a port, and a system manager; and a device connected to the host across a multi-lane link, the multi-lane link including a lane that includes a first differential signal pair and a second differential signal pair, the first differential signal pair initially configured to send data in a first lane of the link, and the second differential signal pair initially configured to receive data in the first lane of the link; the system manager to: detect a capability announcement from the device, wherein the capability announcement indicates that the device can use the first differential signal pair to receive data or use the second differential signal pair to send data; reconfigure the first differential signal pair to receive data or reconfigure the second differential signal pair to send data based at least in part on the capability announcement; and after reconfiguring the second differential signal pair, perform data transmission on the first differential signal pair and the second differential signal pair, or, after reconfiguring the first differential signal pair, perform data reception on the first differential signal pair and the second differential signal pair.
18. The system of claim 17, wherein the port comprises a PCIe-based port.
19. The system of claim 17, wherein the system manager is to receive an announcement during a link training phase of operation, the announcement indicating that the port includes hardware to support the first differential signal pair receiving data or the second differential signal pair transmitting data.
20. The system of claim 17, wherein the system manager logic is to perform link equalization for the first differential signal pair and the second differential signal pair.
21. The system of claim 17, further comprising a buffer memory coupled to the port to buffer TX data to be transmitted on the first differential signal pair and the second differential signal pair, or to buffer RX data received on the first differential signal pair and the second differential signal pair.
22. The system of claim 21, wherein the buffer memory includes a common stack for each of the first differential signal pair and the second differential signal pair.
23. The system of claim 21, wherein the buffer memory includes a first stack for the first differential signal pair and a second stack for the second differential signal pair.
24. The system of claim 17, wherein the system manager is to: determine that the device uses more downlink bandwidth than uplink bandwidth; configure the second differential signal pair to send data; and perform data transmission on the first differential signal pair and the second differential signal pair.
25. The system of claim 17, wherein the system manager is to: determine that the device uses more uplink bandwidth than downlink bandwidth; configure the first differential signal pair to receive data; and perform data reception on the first differential signal pair and the second differential signal pair.
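Claims 12-16 describe a negotiation flow: detect the connection, receive the capability announcement during link training, reconfigure the lane or lanes, equalize, and enter the L0 state. A schematic C sketch of that flow follows; every identifier is invented for illustration, and this is not the claimed implementation:

    #include <stdio.h>

    /* Hypothetical phases of the negotiation in claims 12-16: detect the
       connection, receive the capability announcement during link training,
       reconfigure lanes, equalize, then enter L0. */
    enum phase { DETECT, TRAINING, CONFIGURE, EQUALIZE, L0_ACTIVE };

    struct link_state {
        enum phase phase;
        int caps_announced;  /* device advertised direction-switch support   */
        int want_extra_tx;   /* host decides based on bandwidth utilization  */
        int asymmetric;
    };

    static void step(struct link_state *s)
    {
        switch (s->phase) {
        case DETECT:    s->phase = TRAINING; break;
        case TRAINING:  /* the capability announcement is detected here */
                        s->phase = s->caps_announced ? CONFIGURE : EQUALIZE;
                        break;
        case CONFIGURE: s->asymmetric = s->want_extra_tx;
                        s->phase = EQUALIZE; break;
        case EQUALIZE:  s->phase = L0_ACTIVE; break;  /* ASPM L0 after training */
        case L0_ACTIVE: break;
        }
    }

    int main(void)
    {
        struct link_state s = { DETECT, 1, 1, 0 };
        while (s.phase != L0_ACTIVE)
            step(&s);
        printf("entered L0, asymmetric=%d\n", s.asymmetric);
        return 0;
    }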
Dynamically Negotiating Asymmetric Link Widths in a Multi-Lane Link

BACKGROUND
Interconnects can be used to provide communication between different devices in a system, using some type of interconnection mechanism. A typical communication protocol for communication between devices in a computer system is the Peripheral Component Interconnect Express (PCI Express™ (PCIe™)) communication protocol. This communication protocol is one example of a load/store input/output (I/O) interconnect system. Communication between the devices is typically performed serially at very high speeds according to this protocol.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an embodiment of a block diagram of a computing system including a multi-core processor.
FIG. 2 illustrates an embodiment of a transmitter and receiver pair for an interconnect architecture.
FIG. 3 is a schematic diagram of an example Peripheral Component Interconnect Express (PCIe) link architecture according to an embodiment of the present disclosure.
FIG. 4A is a schematic diagram of an example multi-lane interconnect architecture according to an embodiment of the present disclosure.
FIG. 4B is a schematic diagram of an example lane direction switch according to an embodiment of the present disclosure.
FIG. 5A is a schematic diagram of an example symmetric link topology according to an embodiment of the present disclosure.
FIG. 5B is a schematic diagram of an example asymmetric link topology according to an embodiment of the present disclosure.
FIG. 5C is a schematic diagram of another example asymmetric link topology according to an embodiment of the present disclosure.
FIG. 6 is a schematic diagram of a variable link width topology illustrating an arrangement of lane width variability according to an embodiment of the present disclosure.
FIGS. 7A-7B are schematic diagrams of an example logical stack implementation for extending the link width of a multi-lane link according to an embodiment of the present disclosure.
FIG. 8 is a flowchart of a process for dynamically negotiating an asymmetric link width in a multi-lane link according to an embodiment of the present disclosure.
FIG. 9 illustrates an embodiment of a computing system including an interconnect architecture.
FIG. 10 illustrates an embodiment of an interconnect architecture including a layered stack.
FIG. 11 illustrates an embodiment of a request or packet to be generated or received within an interconnect architecture.
FIG. 12 illustrates another embodiment of a block diagram of a computing system including a processor.
FIG. 13 illustrates an embodiment of blocks for a computing system including multiple processor sockets.

DETAILED DESCRIPTION
In the following description, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, and specific processor pipeline stages and operations, in order to provide a thorough understanding of the invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the invention.
In other instances, well-known components or methods have not been described in detail, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power-down and gating techniques/logic, and other specific operational details of computer systems, in order to avoid unnecessarily obscuring the invention.

Although the following embodiments may be described with reference to energy conservation and energy efficiency in specific integrated circuits, such as computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of the embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices, such as handheld devices, tablet computers, other thin notebook computers, system-on-a-chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. In addition, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also involve software optimizations for energy conservation and efficiency. As will become apparent in the description below, the embodiments of the methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are important to a "green technology" future balanced with performance considerations.

As computing systems advance, the components therein have become more complex. As a result, the interconnect architecture for coupling and communicating between the components has also increased in complexity to ensure that bandwidth requirements are met for optimal component operation. Furthermore, different market segments demand different aspects of interconnect architectures to suit the market's needs. For example, servers require higher performance, while the mobile ecosystem is sometimes able to sacrifice overall performance for power savings. Yet, the singular purpose of most fabrics is to provide the highest possible performance with maximum power saving. Below, a number of interconnects are discussed, which would potentially benefit from aspects of the invention described herein.

Referring to FIG. 1, an embodiment of a block diagram for a computing system including a multi-core processor is depicted. Processor 100 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code.
In one embodiment, processor 100 includes at least two cores, cores 101 and 102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 100 may include any number of processing elements that may be symmetric or asymmetric.

In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element capable of holding a state for a processor, such as an execution state or an architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, an operating system, an application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and a core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

Physical processor 100, as illustrated in FIG. 1, includes two cores, cores 101 and 102. Here, cores 101 and 102 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 101 includes an out-of-order processor core, while core 102 includes an in-order processor core. However, cores 101 and 102 may be individually selected from any type of core, such as a native core, a software-managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated instruction set architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e., asymmetric cores), some form of translation, such as binary translation, may be utilized to schedule or execute code on one or both cores. For further discussion, the functional units illustrated in core 101 are described in further detail below, as the units in core 102 operate in a similar manner in the depicted embodiment.

As depicted, core 101 includes two hardware threads 101a and 101b, which may also be referred to as hardware thread slots 101a and 101b. Therefore, a software entity, such as an operating system, in one embodiment potentially views processor 100 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently.
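Purely as an illustrative aside (the structures below are invented for this sketch and are not taken from the embodiment), the four-logical-processor view can be modeled as per-thread architectural state replicated over shared core resources:

    #include <stdint.h>
    #include <stdio.h>

    /* Architectural state replicated per hardware thread slot
       (e.g., 101a, 101b, 102a, 102b): instruction pointer, registers. */
    struct arch_state {
        uint64_t ip;
        uint64_t gpr[16];
    };

    /* Resources shared (or partitioned) between the thread slots of a core:
       caches, TLBs, execution units, reorder buffer, and so on (omitted). */
    struct core {
        struct arch_state thread_state[2];  /* two hardware threads per core */
    };

    struct processor {
        struct core cores[2];  /* cores 101 and 102 */
    };

    int main(void)
    {
        struct processor p = {0};
        int logical = 0;
        for (int c = 0; c < 2; c++)
            for (int t = 0; t < 2; t++)
                p.cores[c].thread_state[t].ip = 0x1000 + 0x10 * logical++;
        printf("%d logical processors\n", logical);  /* prints 4 */
        return 0;
    }

The only point of the sketch is that each thread slot carries its own architectural state while the larger structures of the core are shared or partitioned.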
As alluded to above, a first thread is associated with architecture state registers 101a, a second thread is associated with architecture state registers 101b, a third thread may be associated with architecture state registers 102a, and a fourth thread may be associated with architecture state registers 102b. Here, each of the architecture state registers (101a, 101b, 102a, and 102b) may be referred to as a processing element, a thread slot, or a thread unit, as described above. As illustrated, architecture state registers 101a are replicated in architecture state registers 101b, so individual architecture states/contexts are capable of being stored for logical processor 101a and logical processor 101b. In core 101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 130, may also be replicated for threads 101a and 101b. Some resources, such as the re-order buffers in reorder/retirement unit 135, I-TLB 120, load/store buffers, and queues, may be shared through partitioning. Other resources, such as general-purpose internal registers, page-table base registers, low-level data cache and data TLB 115, execution unit(s) 140, and portions of out-of-order unit 135, are potentially fully shared.

Processor 100 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 1, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 101 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 120 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 120 to store address translation entries for instructions.

Core 101 further includes decode module 125 coupled to fetch unit 120 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 101a and 101b, respectively. Usually, core 101 is associated with a first ISA, which defines/specifies instructions executable on processor 100. Machine code instructions that are part of the first ISA often include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 125 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below, decoder 125, in one embodiment, includes logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by decoder 125, the architecture or core 101 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions, some of which may be new or old instructions. Note that, in one embodiment, decoder 126 recognizes the same ISA (or a subset thereof).
In one example, the allocator and renamer block 130 includes an allocator to reserve resources, such as register files to store instruction processing results. However, the threads 101a and 101b are potentially capable of out-of-order execution, in which case the allocator and renamer block 130 also reserves other resources, such as reorder buffers to track instruction results. The unit 130 may also include a register renamer to rename program/instruction reference registers to other registers internal to the processor 100. The reorder/retirement unit 135 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and, later, in-order retirement of instructions executed out of order.

In one embodiment, the scheduler and execution unit block 140 includes a scheduler unit to schedule instructions/operations on the execution units. For example, a floating-point instruction is scheduled on a port of an execution unit that has an available floating-point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating-point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.

A lower-level data cache and data translation buffer (D-TLB) 150 is coupled to the execution units 140. The data cache is to store recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear-to-physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.

Here, the cores 101 and 102 share access to a higher-level or further-out cache, such as the second-level cache associated with the on-chip interface 110. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution units. In one embodiment, the higher-level cache is a last-level data cache, i.e., the last cache in the memory hierarchy on the processor 100, such as a second- or third-level data cache. However, the higher-level cache is not so limited, as it may be associated with or include an instruction cache. A trace cache, a type of instruction cache, may instead be coupled after the decoder 125 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e., a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations).

In the depicted configuration, the processor 100 also includes an on-chip interface module 110. Historically, a memory controller, described in more detail below, has been included in a computing system external to the processor 100. In that scenario, the on-chip interface 110 is to communicate with devices external to the processor 100, such as the system memory 175, a chipset (often including a memory controller hub to connect to the memory 175 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or another integrated circuit.
In this scenario, the bus 105 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, or a GTL bus.

The memory 175 may be dedicated to the processor 100 or shared with other devices in the system. Common examples of types of memory 175 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. Note that the device 180 may include a graphics accelerator, processor, or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or another known device.

Recently, however, as more logic and devices are being integrated on a single die, such as an SOC, each of these devices may be incorporated on the processor 100. For example, in one embodiment, a memory controller hub is on the same package and/or die as the processor 100. Here, a portion of the core (an on-core portion) 110 includes one or more controllers for interfacing with other devices, such as the memory 175 or a graphics device 180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, the on-chip interface 110 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 105 for off-chip communication. Yet, in the SOC environment, even more devices, such as a network interface, coprocessors, the memory 175, the graphics processor 180, and any other known computer devices/interfaces, may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption.

In one embodiment, the processor 100 is capable of executing compiler, optimization, and/or translator code 177 to compile, translate, and/or optimize application code 176 to support or to interface with the apparatus and methods described herein. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single-pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation technique and perform any known compiler operation, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization.

Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front end, where generally syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back end, where generally analysis, transformations, optimizations, and code generation take place. Some compilers refer to a middle end, which illustrates the blurring of the delineation between a compiler's front end and back end. As a result, reference to insertion, association, generation, or another operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler.
As an illustrative example, a compiler potentially inserts operations, calls, functions, and the like in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and subsequent transformation of the calls/operations into lower-level code. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime. Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof.

Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate the code. Therefore, reference to execution of code, application code, program code, or another software environment may refer to: (1) execution of a compiler program, optimization code optimizer, or translator, either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code, to maintain software structures, to perform other software-related operations, or to optimize code; or (4) a combination thereof.

PCIe supports hot-plug capabilities but can lack a consistent way of reporting certain key pieces of information to system software, making it difficult to manage the PCIe subsystem optimally and resulting in system limitations and a poor user experience. These same limitations affect converged input/output (CIO), or "open" Thunderbolt, because these I/O configurations use PCIe as the I/O architecture in tunneled form.

CIO is a tunnel for PCIe and DisplayPort. A CIO link can be a single lane or an aggregation of two lanes operating at 10 Gbps to 40 Gbps or higher. CIO can operate across USB Type-C connectors (as an alternate mode) and enables PCIe devices outside the system box.

Referring next to FIG. 2, an embodiment of a PCIe serial point-to-point fabric is illustrated. Although an embodiment of a PCIe serial point-to-point link is illustrated, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data. In the embodiment shown, a basic PCIe link includes two low-voltage, differentially driven signal pairs: a transmit pair 206/211 and a receive pair 212/207. Accordingly, the device 205 includes transmission logic 206 to transmit data to the device 210 and receiving logic 207 to receive data from the device 210. In other words, two transmitting paths, paths 216 and 217, and two receiving paths, paths 218 and 219, are included in a PCIe link.

A connection between two devices, such as the device 205 and the device 210, is referred to as a link, such as the link 215. A link may support one lane, each lane representing a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link may aggregate multiple lanes, denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider. A transmitting or receiving path refers to any path for transmitting or receiving data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or another communication path.
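The lane aggregation just described (a link of N lanes, each lane built from a differential transmit pair and a differential receive pair) can be captured in a small descriptive model; the type and field names below are invented for illustration and are not taken from any PCIe specification.

```c
/* Descriptive model of a multi-lane serial link such as link 215. */
struct lane {
    int tx_path[2];   /* differential transmit pair, e.g., paths 216 and 217 */
    int rx_path[2];   /* differential receive pair,  e.g., paths 218 and 219 */
};

struct link {
    unsigned width;        /* N in "xN": 1, 2, 4, 8, 12, 16, 32, 64, ...     */
    struct lane lanes[64]; /* one TX pair and one RX pair per lane           */
};
```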
In FIG. 2, one lane is shown; the lane includes transmitting paths 216 and 217 and receiving paths 218 and 219. In this application, a transmitting path is also referred to as a TX line, and a receiving path is also referred to as an RX line.

A differential pair refers to two transmission paths, such as paths 216 and 217, that transmit differential signals. As an example, when path 216 toggles from a low voltage level to a high voltage level, i.e., a rising edge, path 217 drives from a high logic level to a low logic level, i.e., a falling edge. Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e., reduced cross-coupling, voltage overshoot/undershoot, ringing, and the like. This allows for a better timing window, which enables faster transmission frequencies.
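A toy model of the complementary drive on a differential pair may help fix the idea; the sketch below only mirrors one path's logic level onto the other and makes no attempt to model real electrical behavior.

```c
#include <stdbool.h>

/* Toy model of a differential pair: path B always carries the
 * complement of path A, so a rising edge on A (e.g., path 216)
 * coincides with a falling edge on B (e.g., path 217). The
 * receiver recovers the bit from the difference between the two
 * paths, which is what gives differential signaling its noise
 * immunity.                                                      */
struct diff_pair { bool a, b; };

static void drive(struct diff_pair *p, bool bit)
{
    p->a = bit;
    p->b = !bit;
}

static bool sense(const struct diff_pair *p)
{
    return p->a && !p->b;  /* valid symbol: the two paths disagree */
}
```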
Each lane of a link may include one or more paths or signaling channels. In some implementations, differential signaling may be utilized, and a signaling channel may include one or more differential signaling pairs. In some implementations, such as PCIe-based interconnects, a lane of a link may by default be defined as having at least one differential signaling pair initially configured to send data from the first device 205 to the second device 210, and at least one additional differential signaling pair initially configured to receive data from the second device 210 at the first device 205, such that the lane facilitates bidirectional communication between the first and second devices.

However, reconfiguration of lanes, such as discussed herein, can cause a single lane to be reconfigured into one or more unidirectional lanes, among other configurations. Further, a link may include multiple lanes to increase the total potential bandwidth capable of being carried on the link. In some implementations of a multi-lane link, the link may be initially configured with an equal number of transmit and receive channels. Further, the physical lines used to implement each channel may be balanced, such that an equal or equivalent number of physical lines is included in each channel, among other example implementations.

In some computing systems with interfaces that allow multiple protocols to coexist, many applications use bandwidth asymmetrically. In some cases the outbound (or downstream) bandwidth demand is greater than the inbound (or upstream) demand, such as in display applications, where data is sent from a central processing system to a display device (e.g., a monitor). In other cases the upstream demand is greater than the downstream demand. When the total bandwidth of the I/O interface is limited, some lanes are underutilized while others are oversubscribed. This disclosure describes dynamic switching between symmetric and asymmetric interfaces without disrupting traffic in flight; the systems, methods, and devices described herein can keep the event stream intact and use the available bandwidth paths without affecting the user experience.

Multi-lane interconnects such as PCIe, Ultra Path Interconnect (UPI), Thunderbolt (TBT), and other converged I/O (CIO) links are symmetric links. That is, for an x4 link, four lines are configured as uplinks and four as downlinks. There are many emerging applications in which the bandwidth requirements are asymmetric or can change dynamically over time. For example, a central processing unit (CPU) connected to a memory drive (MD) can use more bandwidth in the inbound direction (from MD to CPU) than in the outbound direction, because in MD applications reads are more frequent than writes, and a write involves a read operation before the write. Asymmetric link widths can be based on a PCIe PHY to support serializer/deserializer (SERDES) based differential DIMMs or memory-drive interconnects. The present disclosure facilitates asymmetric link configurations, some of which may use bidirectional lanes. For example, a bidirectional lane can operate as two independent unidirectional lines by converting a transmit-receive (TX-RX) pair into an RX-TX pair, and vice versa. This disclosure also describes dynamically changing the number of lines in each direction depending on the bandwidth requirements of the application. The systems, methods, and computer program products described herein can be applied to PCIe-based interconnects, as well as to other types of interconnects, and also to links that include retimers.

The present disclosure describes systems, methods, and devices for changing the direction of one or more transmission paths of a multi-lane link on a per-lane basis, depending on the workload and on whether the hardware at each end can support such direction changes.
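The per-lane reversal described above, converting a TX-RX pair into an RX-TX pair or pointing both pairs the same way, can be sketched as a small configuration change; the enum and functions below are illustrative only and do not reflect any register layout defined by PCIe or CIO.

```c
/* Illustrative per-lane configuration for asymmetric operation. */
enum pair_role { ROLE_TX, ROLE_RX };

struct lane_cfg {
    enum pair_role pair0;  /* role of the pair that is TX by default */
    enum pair_role pair1;  /* role of the pair that is RX by default */
};

/* Default bidirectional lane: one TX pair, one RX pair. */
static const struct lane_cfg LANE_DEFAULT = { ROLE_TX, ROLE_RX };

/* Reverse the lane so both pairs carry traffic one way, e.g., two
 * downstream lines (TX, TX) for a display, or two upstream lines
 * (RX, RX) for a storage device or camera.                        */
static struct lane_cfg make_unidirectional(enum pair_role dir)
{
    struct lane_cfg c = { dir, dir };
    return c;
}
```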
FIG. 3 is a schematic diagram of an example Peripheral Component Interconnect Express (PCIe) link architecture 300 according to an embodiment of the present disclosure. The PCIe link architecture 300 includes a first component 302, which may be an upstream component, a root complex, or a switch that complies with the PCIe protocol. The first component 302 may include a downstream port 310 that facilitates communication with downstream components across a link 322, such as a PCIe-compliant link. The first component 302 may be coupled to a second component 308, which may be a downstream component, an endpoint, or a switch that complies with the PCIe protocol. In some embodiments, the first component may be linked to one or more intermediate components, such as a first retimer 304 and a second retimer 306.

In an embodiment, the first component 302 may include a downstream port 310 to facilitate downstream communication with the second component 308 (if directly connected) or with the upstream (pseudo) port 312 of the retimer 304 (e.g., toward the second component 308). The second component 308 may include an upstream port 320 to facilitate upstream communication with the first component 302 (if directly connected) or with the downstream (pseudo) port 318 of the retimer 306 (e.g., toward the first component 302).

In the example shown in FIG. 3, the first component 302 may be linked to the first retimer 304 through a first link segment 324. Similarly, the first retimer 304 may be linked to the second retimer 306 through a link segment 326, and the second retimer 306 may be linked to the second component 308 through a link segment 328. The link segments 324, 326, and 328 may constitute all or a portion of the link 322.

The link 322 may facilitate upstream and downstream communications between the first component 302 and the second component 308. In the embodiment, upstream communication refers to data and control information sent from the second component 308 to the first component 302, and downstream communication refers to data and control information sent from the first component 302 to the second component 308. As described above, one or more retimers (e.g., the retimers 304 and 306) may be used to extend the range of the link 322 between the first component 302 and the second component 308.

A link 322 including one or more retimers (e.g., the retimers 304, 306) may form two or more separate electrical sub-links at data rates comparable to those achieved by links employing similar protocols but no retimers. For example, if the link 322 included a single retimer, the link 322 would form a link with two separate sub-links, each operating at 8.0 GT/s or higher. As shown in FIG. 3, multiple retimers 304, 306 can be used to extend the link 322. Three link segments 324, 326, and 328 are defined by the two retimers 304, 306, where the first sub-link 324 connects the first component 302 to the first retimer 304, the second sub-link 326 connects the first retimer 304 to the second retimer 306, and the third sub-link 328 connects the second retimer 306 to the second component 308.

As shown in the example of FIG. 3, in some implementations a retimer may include two ports (or pseudo ports), and the ports can dynamically determine their respective downstream/upstream orientations. In an embodiment, the retimer 304 may include an upstream port 312 and a downstream port 314. Similarly, the retimer 306 may include an upstream port 316 and a downstream port 318. Each retimer 304, 306 may have an upstream path and a downstream path. Further, the retimers 304, 306 may support operating modes including a forwarding mode and an executing mode. In some instances, a retimer 304, 306 may decode data received on a sub-link and re-encode the data it is to forward downstream on its other sub-link. In this way, a retimer may capture the received bit stream prior to regenerating and retransmitting the bit stream to another device, or even to another retimer (or redriver or repeater). In some cases, the retimer can modify some values in the data it receives, such as when processing and forwarding ordered set data. Additionally, a retimer can potentially support any width option as its maximum width, such as the set of width options defined by a specification such as PCIe.

As the data rates of serial interconnects (e.g., PCIe, UPI, USB, etc.) increase, retimers are increasingly used to extend channel reach, and multiple retimers can be cascaded for even longer reach. It is expected that, as signal speeds increase, channel reach will generally decrease, so as interconnect technologies accelerate, the use of retimers may become more common. As an example, the adoption of PCIe Gen-4, at 16 GT/s, alongside support for PCIe Gen-3 (8 GT/s), may increase the use of retimers in PCIe interconnects, and other interconnects may see the same as their speeds increase.

Before the link is established, or when the link 322 is not operating properly, system software can access the downstream port 310 (for example, in the first component 302, which may be an upstream component such as a root complex or switch). In an embodiment, a register, such as a link capability register, may be set to perform clock mode selection in the downstream port 310. System firmware/software can configure the downstream port 310 to the expected mode; if changes are needed, this is done by the system firmware/software rather than by hardware.

In an embodiment, the link architecture 300 may include a controller hub 350.
The controller hub 350 may be part of a root complex, a central processing core, or other controller logic of a host system. The controller hub 350 may include a system manager 352. The system manager 352 may be implemented in hardware circuitry and/or software, such as by system management software embodied in a non-transitory computer-readable medium. For example, the system manager may be implemented as a software manager, as a hardware circuit (e.g., a protocol stack circuit), as firmware (e.g., firmware of a data processor), or as some combination of these. The system manager 352 may include a CIO connection manager, a PCIe connection manager, a USB connection manager, or other connection management logic that can establish and/or tear down multi-lane links (e.g., links based on the PCIe, USB, or CIO protocols) with connected downstream devices.

The system manager can use a register interface to configure the upstream and downstream lanes to establish an asymmetric link interface between the host device (e.g., the upstream component 302 and/or any intermediate retimers 304, 306) and the downstream device 308. The system manager may use register information advertised by the downstream device 308 to determine whether the downstream device includes an interface port that can handle additional uplink or downlink lanes. Likewise, the system manager can use register information from the retimers to determine whether any intermediate retimers 304, 306 can support more than the standard number of uplink or downlink lanes. The downstream port 310 of the upstream component 302 should also be configured to support multiple uplink and/or downlink lanes in order to support an asymmetric interface. If all components include ports that can support an asymmetric interface, the system manager can configure the ports and corresponding lanes into an asymmetric configuration (e.g., via a register interface on the upstream component 302, the downstream component 308, and any intermediate retimers 304, 306).
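As one way to picture that register-driven negotiation, the sketch below has a hypothetical system manager read each component's advertised capability and then write a shared asymmetric configuration. Every register offset, bit meaning, and function name here is invented for illustration and is not taken from the PCIe, USB, or CIO specifications.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for a component's register file; a real implementation
 * would perform MMIO or configuration-space accesses instead.     */
struct component { uint32_t regs[0x200 / 4]; };

enum { CAP_ASYM = 0x100 / 4, CTL_ASYM = 0x104 / 4 };  /* invented offsets */

static uint32_t reg_read(struct component *c, uint32_t idx) { return c->regs[idx]; }
static void     reg_write(struct component *c, uint32_t idx, uint32_t v) { c->regs[idx] = v; }

/* Configure an asymmetric split only if the upstream component,
 * every intermediate retimer, and the downstream component all
 * advertise support in their (hypothetical) capability registers. */
static bool configure_asymmetric(struct component **path, int n, uint32_t cfg)
{
    for (int i = 0; i < n; i++)
        if (!(reg_read(path[i], CAP_ASYM) & 1))
            return false;                     /* one non-supporting hop vetoes it */

    for (int i = 0; i < n; i++)
        reg_write(path[i], CTL_ASYM, cfg);    /* apply the same split everywhere  */
    return true;
}
```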
FIG. 4A is a schematic diagram of an example multi-lane interconnect architecture 400 according to an embodiment of the present disclosure. It should be appreciated that the systems and methods described herein can be applied to any number of total or switched lanes; FIGS. 4A and 4B show four lanes, with one switching lane, for illustrative purposes. The multi-lane interconnect architecture 400 may include an upstream component 402. The upstream component 402 may be similar to the upstream component 302 of FIG. 3. The upstream component 402 may include a first downlink port 412 and a first uplink port 414. The multi-lane interconnect architecture 400 may further include a downstream component 404. The downstream component 404 may be similar to the downstream component 308. The downstream component 404 may include a second downlink port 416 and a second uplink port 418. The first downlink port 412 may be coupled to the second downlink port 416 through a downlink including lane 0 and lane 1, and the first uplink port 414 may be coupled to the second uplink port 418 through an uplink including lane 2 and lane 3.

The first downlink port 412, the first uplink port 414, the second downlink port 416, and the second uplink port 418 may include logic circuitry and software capable of supporting a switch in traffic direction.

For example, the upstream component 402 may include a controller 450 that includes logic, implemented in one or both of hardware or software, for switching the direction of one or more lanes of the multi-lane interconnect architecture 400. The controller 450 may also control one or more ports to accommodate an increase (or decrease) in data traffic. The downstream component 404 may likewise include a controller 460, which may be similar to the controller 450 and includes logic implemented in one or both of hardware or software. The controller 460 may control one or more ports in the downstream component 404 to accommodate an increase or decrease in the data traffic passing through the corresponding port.

The controller 450 may be or may include a system manager. The system manager may be, for example, a CIO connection manager, a PCIe connection manager, or another type of system management software for managing the link directions of the multi-lane interconnect architecture. The system management software can use one or more parameters to determine whether a port can accommodate an increase in data traffic entering or exiting the port. The system manager may use register settings or capability advertisements to determine that the upstream component 402 and the downstream component 404 (and any intermediate retimers) support line direction changes. For example, the system manager may set a register in the upstream component 402 and/or the downstream component 404 so that each component recognizes a change in line direction. The system manager can also determine whether the corresponding port can accommodate the increase in traffic; for example, a dedicated downstream port may not be able to accommodate any upstream traffic. The system manager can verify that a port can accommodate a line direction switch before performing any dynamic line direction switching.

In addition, the system manager can use one or more parameters to determine that the connected components would benefit from an asymmetric link configuration. In some embodiments, the system manager may use bandwidth topology information to dynamically adjust the number of uplink and/or downlink lines to accommodate the traffic flows of connected devices that use the two types of lines (e.g., uplink and downlink) unevenly. For example, a monitor can use more downlink than uplink bandwidth, while a storage device or video camera can use more uplink than downlink bandwidth. If bandwidth is available on the line, the system manager can switch the direction of one or more lines of a multi-lane link to establish an asymmetric interface.
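The decision the system manager makes, switching a lane only when demand is lopsided and the hardware at both ends can take it, might look like the following; the 2x threshold and the `lane_caps` encoding are assumptions made up for this sketch, not policy taken from any specification.

```c
#include <stdbool.h>

/* Invented capability flags per lane: can this end repurpose its
 * TX line as RX, or its RX line as TX?                            */
struct lane_caps { bool tx_as_rx, rx_as_tx; };

/* Switch one default RX line to TX (an extra downlink) only if the
 * downstream demand dominates and both ends support the change.   */
static bool want_extra_downlink(double down_bw, double up_bw,
                                struct lane_caps host, struct lane_caps dev)
{
    bool lopsided  = down_bw > 2.0 * up_bw;          /* arbitrary policy choice */
    bool supported = host.rx_as_tx && dev.tx_as_rx;  /* host transmits more,
                                                        device receives more    */
    return lopsided && supported;
}
```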
A downlink may refer to a transmission path that couples TX logic at the first device with RX logic at the second device. An uplink may refer to a receiving path that couples RX logic at the first device with TX logic at the second device.

The multi-lane link architecture 400 illustrates an example interface between two systems: the upstream component 402 and the downstream component 404. The interface includes four lanes: lane 0 422, lane 1 424, lane 2 426, and lane 3 428.

FIG. 4B is a schematic diagram of an example line direction switch according to an embodiment of the present disclosure. In the example scenario shown in FIGS. 4A and 4B, the multi-lane link dynamically switches between a symmetric mode and an asymmetric mode (where lane 0 422, lane 1 424, and lane 3 428 remain unchanged, but lane 2 426 uses two downlinks, indicated by the two arrows from the upstream component 402 to the downstream component 404).

FIG. 5A is a schematic diagram of an example symmetric link topology 500 according to an embodiment of the present disclosure. The symmetric link topology 500 may include a first component 502 and a second component 504. For ease of disclosure, the first component 502 may be an upstream component, such as the component 302 of FIG. 3, and the second component 504 may be a downstream component, such as the component 404. It should be understood, however, that the first component may be a downstream component and the second component may be an upstream component without departing from the scope of the present disclosure.

The first component 502 may be linked to the second component 504 through multiple lanes (e.g., lane 0 510, lane 1 511, lane 2 512, and lane 3 513). Each lane of the multi-lane link may include a TX line and an RX line. For example, lane 0 510 includes TX line 510a and RX line 510b. In some embodiments, a spare lane, lane S 515, may be used to extend the bandwidth of the multi-lane link. The configuration shown in FIG. 5A is a default lane configuration with 4 (+1 spare) TX lines and 4 (+1 spare) RX lines. For an embodiment in which the multi-lane link is based on the PCIe protocol, the link in FIG. 5A would be an x4 link. In this example, the x4 PCIe link would have 4 uplink and 4 downlink lines.

For a PCIe multi-lane link, each component (including any retimer) may advertise, per lane, the optional ability to change the directionality of the lane. More specifically, the first component 502 and the second component 504 may advertise whether one or more TX lines within a lane can serve as RX lines and whether one or more RX lines can serve as TX lines. Based on the capabilities the components indicate, different link width arrangements are possible depending on usage requirements. For example, depending on the workload, the multi-lane link can be configured with 2 lines down and 8 lines up, or 8 lines down and 2 lines up. Other permutations of (downlink lines, uplink lines) are also allowed: (1,9), (9,1), (3,7), (7,3), (4,6), (6,4), and (5,5). Without the spare lane, an x4 link can have the following (downlink, uplink) permutations: (4,4), (1,7), (7,1), (3,5), (5,3), (2,6), and (6,2). For simplicity, at least one line in each direction is maintained to facilitate the transfer of credits, responses, ACK/NACK for transactions, and the like; in an embodiment, however, a single line may serve both directions through time-multiplexed use of the line.
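A quick sanity check on those permutations: with an x4 link plus one spare lane there are 10 single-direction lines, and with no spare there are 8, so the sketch below enumerates every (down, up) split that keeps at least one line per direction. It is only arithmetic over the counts given above, not part of any training protocol.

```c
#include <stdio.h>

/* Enumerate (downlink, uplink) splits of an x4 link. With the
 * spare lane there are 10 lines total; without it, 8. Keeping at
 * least one line per direction preserves the return path for
 * credits, completions, and ACK/NACK.                            */
static void print_splits(int total_lines)
{
    for (int down = 1; down < total_lines; down++)
        printf("(%d,%d) ", down, total_lines - down);
    printf("\n");
}

int main(void)
{
    print_splits(10);  /* x4 + spare:   (1,9) ... (5,5) ... (9,1) */
    print_splits(8);   /* x4, no spare: (1,7) ... (4,4) ... (7,1) */
    return 0;
}
```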
FIG. 5B is a schematic diagram of an example asymmetric link topology 550 according to an embodiment of the present disclosure. The asymmetric link topology 550 of FIG. 5B includes two lanes (lane 0 510 and lane 1 511) that are the same as in the default configuration, plus additional RX lines. Instead of lane 2 522 including one TX line and one RX line, lane 2 522 now includes two RX lines (RX line 522a and RX line 522b). Similarly, lane 3 523 includes two RX lines 523a and 523b, and the spare lane, lane S 525, includes two RX lines 525a and 525b.

In the example of FIG. 5B, the second component may be a downstream component that uses more uplink bandwidth than downlink bandwidth. Examples of downstream devices that use more uplink bandwidth include storage devices and video cameras.

FIG. 5C is a schematic diagram of an example asymmetric link topology 560 according to an embodiment of the present disclosure. The asymmetric link topology 560 of FIG. 5C includes two lanes (lane 0 510 and lane 1 511) that are the same as in the default configuration. The asymmetric link topology 560 includes additional TX lines. Lane 2 532 now includes two TX lines 532a and 532b. Likewise, lane 3 533 includes two TX lines 533a and 533b, and the spare lane, lane S 535, includes two TX lines 535a and 535b.

In the example of FIG. 5C, the second component may be a downstream component that uses more downlink bandwidth than uplink bandwidth. An example of a downstream device that uses more downlink than uplink bandwidth is a display device.

FIG. 6 is a schematic diagram of a variable link width topology 600 showing permutations of lane width variability according to an embodiment of the present disclosure. Each lane can independently advertise its ability to change direction (for example, during the link training phase, discussed below). Table 1 summarizes the per-lane capabilities of each component shown in FIG. 6. Both the host and the connected device must support link width variability on a lane for its direction to be changed.

The topology 600 illustrates a first component 602 coupled to a second component 604. The first component may be the host device described in Table 1, and the second component may be the connected device of Table 1. As shown in FIG. 6, the multi-lane link is an x4 link including four lanes: lane 0 610, lane 1 611, lane 2 612, and lane 3 613.

For lane 0 610, the first component includes hardware circuitry that supports full TX-RX line switching (that is, the TX and RX lines can each switch roles). However, the second component does not support lane switching on lane 0 610. Therefore, lane 0 610 does not support link width variability.

For lane 1 611, the first component supports using its RX line as a TX line, and the second component likewise supports only using its RX line as a TX line. In short, both the first component 602 and the second component 604 support using the RX line as a TX line, but neither supports using the TX line as an RX line. Without an additional RX line on one side, neither component can handle the extra reception created by adding a TX line on the other. Therefore, link width variability is not supported on lane 1 611.

For lane 2 612, the first component supports using its RX line as a TX line, and the second component supports using its TX line as an RX line. Therefore, lane 2 612 supports link width variability in the downstream (host to connected device) direction. The additional line is indicated by the dotted arrow in FIG. 6.

For lane 3 613, the first and second components each support full link width variability. Therefore, each of the first and second components can handle an additional TX line or an additional RX line. The additional lines are indicated by dashed arrows in FIG. 6.

Table 1. Arrangement of lane width variability

  Lane     Host 602 capability      Device 604 capability    Resulting variability
  0 610    TX<->RX (full)           none                     none
  1 611    RX usable as TX          RX usable as TX          none
  2 612    RX usable as TX          TX usable as RX          extra downstream line
  3 613    TX<->RX (full)           TX<->RX (full)           extra line, either direction
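The outcomes in Table 1 follow mechanically from the two ends' capability bits, as the sketch below works out; the two-bit encoding matches the 00/01/10/11 codes described later for TS1/TS2 advertisement, while the function names are invented for the example.

```c
/* Two-bit per-lane asymmetric capability, matching the codes
 * discussed below: bit 0 = TX line can become RX; bit 1 = RX line
 * can become TX.                                                  */
enum {
    CAP_NONE     = 0x0,  /* 00: no asymmetric support      */
    CAP_TX_AS_RX = 0x1,  /* 01: only TX can become RX      */
    CAP_RX_AS_TX = 0x2,  /* 10: only RX can become TX      */
    CAP_FULL     = 0x3,  /* 11: both conversions supported */
};

/* An extra downstream line needs the host to repurpose its RX line
 * (RX->TX) and the device to repurpose its TX line (TX->RX); the
 * upstream case is the mirror image. Lane 1 in Table 1 fails both
 * tests: both ends can only do RX->TX, so neither side can absorb
 * the extra transmission.                                          */
static int extra_downstream_ok(int host_cap, int dev_cap)
{
    return (host_cap & CAP_RX_AS_TX) && (dev_cap & CAP_TX_AS_RX);
}

static int extra_upstream_ok(int host_cap, int dev_cap)
{
    return (host_cap & CAP_TX_AS_RX) && (dev_cap & CAP_RX_AS_TX);
}
```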
As a result, lane 2 612 may form an additional downstream (TX) line 612a from the host 602 to the device 604. As shown in FIG. 6, the host 602 may include hardware circuitry, with accompanying software and/or firmware, to support the additional downlink 612a. The host may include a first transmitter circuit element 622, implemented at least partially in hardware circuitry (labeled T2 during the lane number assignment phase of link training). The transmitter circuit element 622 may include a pin that electrically connects a physical line of the multi-lane link with the host circuitry. The host 602 may also include a receiver circuit element 624, implemented at least partially in hardware circuitry (labeled R2 during the lane number assignment phase of link training, if a receive line is used). The host 602 may also include a second transmitter circuit element 626, implemented at least partially in hardware circuitry (labeled T4 during the lane number assignment phase of link training, if the second downlink 612a is to be used). A common pin may be used to connect the device 604 with the receiver circuit element 624 and/or the second transmitter circuit element 626. Other naming conventions and orderings are possible and consistent with the scope of this disclosure.

The connected device 604 may also include additional circuitry to facilitate link width expansion. For example, the connected device 604 may include hardware circuitry, with accompanying software and/or firmware, to support the additional downlink 612a. The device 604 may include a first receiver circuit element 632, implemented at least partially in hardware circuitry (labeled R2 during the lane number assignment phase of link training). The receiver circuit element 632 may include a pin that electrically connects a physical line of the multi-lane link with the device circuitry. The device 604 may also include a first transmitter circuit element 634, implemented at least partially in hardware circuitry (labeled T2 during the lane number assignment phase of link training, if an uplink is used on lane 2 612). The device 604 may also include a second receiver circuit element 636, implemented at least partially in hardware circuitry (labeled R4 during the lane number assignment phase of link training, if the second downlink 612a is to be used). A common pin may be used to connect the host 602 with the transmitter circuit element 634 and/or the second receiver circuit element 636. Other naming conventions and orderings are possible and consistent with the scope of this disclosure.

Lane 3 613 may include an additional line in either direction (uplink 613a or downlink 613b). As shown in FIG. 6, the host 602 may include hardware circuitry, with accompanying software and/or firmware, to support the additional uplink 613a and the additional downlink 613b. The host 602 may include a third transmitter circuit element 642, implemented at least partially in hardware circuitry (labeled T3 during the lane number assignment phase of link training).
The host 602 may also include a third receiver circuit element 644, implemented at least partially in hardware circuitry (labeled R3 during the lane number assignment phase of link training, if an uplink is used). The host 602 may also include a fifth receiver circuit element 646, implemented at least partially in hardware circuitry (labeled R5 during the lane number assignment phase of link training, if the second uplink 613a is to be used). A common pin may be used to connect the device 604 with the third transmitter circuit element 642 and/or the fifth receiver circuit element 646. The host 602 may also include a sixth transmitter circuit element 648, implemented at least partially in hardware circuitry (labeled T6 during the lane number assignment phase of link training, if the second downlink 613b is to be used). A common pin may be used to connect the device 604 with the third receiver circuit element 644 and/or the sixth transmitter circuit element 648. Other naming conventions and orderings are possible and consistent with the scope of this disclosure.

The connected device 604 may also include additional circuitry to facilitate link width expansion. For example, the connected device 604 may include hardware circuitry, with accompanying software and/or firmware, to support the additional links 613a and 613b. The device 604 may include a third receiver circuit element 652, implemented at least partially in hardware circuitry (labeled R3 during the lane number assignment phase of link training, if a downlink is used on lane 3 613). The device 604 may also include a third transmitter circuit element 654, implemented at least partially in hardware circuitry (labeled T3 during the lane number assignment phase of link training, if an uplink is used on lane 3 613). The device 604 may also include a fifth transmitter circuit element 656, implemented at least partially in hardware circuitry (labeled T5 during the lane number assignment phase of link training, if the second uplink 613a is to be used). A common pin may be used to connect the host 602 with the third receiver circuit element 652 or the fifth transmitter circuit element 656. The device 604 may also include a sixth receiver circuit element 658, implemented at least partially in hardware circuitry (labeled R6 during the lane number assignment phase of link training, if the second downlink 613b is to be used). A common pin may be used to connect the host 602 with the third transmitter circuit element 654 or the sixth receiver circuit element 658. Other naming conventions and orderings are possible and consistent with the scope of this disclosure.

As described herein, the hardware circuitry that a host or connected device can use to extend the link width can include one or more buffered memory elements (also referred to as logic stacks). FIGS. 7A-7B are schematic diagrams of example logic stack implementations for extending the link width of a multi-lane link according to embodiments of the present disclosure.

FIG. 7A is a schematic diagram 700 of an example common logic stack 704 that resides on a host device 702 and supports extended link widths according to an embodiment of the present disclosure. In situations where the bandwidth in each direction is variable, using a common stack 704 may be beneficial. The common stack 704 can be sized to handle the widest possible link in each direction.
In that case, the implementation maps all virtual channels to the widest direction so as to mimic a single link. In some embodiments, an internal stack may face the challenge of handling potentially double the bandwidth; for example, a PCIe stack sized for up to 16 lanes in each direction will not be able to deliver 32 lanes' worth of bandwidth. In these cases, multiple logic stacks may be implemented. FIG. 7B is a schematic diagram 750 of an example host 752 implementing multiple logic stacks 754 and 756 according to an embodiment of the present disclosure. For example, if an x16 PCIe link effectively becomes x32 in one direction, the host 752 can use two different x16 stacks 754 and 756. The narrow direction (TX) can be shared/multiplexed between the two stacks for passing credits.

FIG. 8 is a process flow diagram 800 for dynamically negotiating an asymmetric link width on a multi-lane link according to an embodiment of the present disclosure. First, a host device can detect the presence of a downstream connected device (802). As part of link establishment, the host device may initiate a link training process to train the multi-lane link interconnecting the host to the downstream device (804). The host can detect the downstream connected device's capability for lane width variability (806). Asymmetric link capability negotiation occurs during link training. For example, in PCIe, when the link trains to L0 at the 2.5 GT/s rate, each side advertises its asymmetric capabilities on a per-lane basis, including the spare lanes, alongside alternate protocol and EQ-bypass negotiation during the Configuration state. Modified TS1/TS2 ordered sets, for example the 16-bit "Alternate Protocol Details" field, can be used to advertise the per-lane asymmetric capability; other bit fields of the TS1/TS2 ordered sets can also be used. The following codes can indicate asymmetric support on a per-lane basis: 00: no asymmetric support; 01: only TX can become RX; 10: only RX can become TX; 11: TX can become RX and RX can become TX. A retimer is expected to override these bits so that they reflect its own capability on the lane combined with whatever the other side supports. For example, if the device advertises "11" (i.e., TX can be RX and vice versa) but the retimer only supports its TX as RX on that lane (01), the retimer amends the field to 01b.
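To illustrate that override rule for the two-bit capability field, the sketch below merges an endpoint's advertised code with a retimer's own capability; treating the merge as a bitwise AND is an interpretation of the example given above (device 11 combined with retimer 01 yields 01b), not wording taken from the PCIe specification.

```c
#include <stdint.h>

/* Per-lane asymmetric-capability codes carried in TS1/TS2:
 * 00 none, 01 TX-as-RX only, 10 RX-as-TX only, 11 both.        */
#define ASYM_NONE     0x0u
#define ASYM_TX_AS_RX 0x1u
#define ASYM_RX_AS_TX 0x2u
#define ASYM_BOTH     0x3u

/* A retimer overrides the field it forwards so that it reflects
 * the intersection of what it saw and what it can itself do on
 * the lane: 11 (device) combined with 01 (retimer) -> 01, as in
 * the example above.                                            */
static uint8_t retimer_override(uint8_t seen, uint8_t own)
{
    return seen & own & ASYM_BOTH;
}
```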
The host device may use the link training to determine how many additional uplink or downlink lanes to configure (808). For example, the host can use bandwidth information to determine whether the interconnect can support an increased number of lanes in one direction or the other.

The host may perform lane number assignment during link training (810). An example lane numbering scheme is shown in FIG. 6. If spare lanes exist, and if the downstream port can drive on those lanes, the downstream port (DSP) can assign lane numbers to the spare lanes. For example, in an x4 link with 2 spare lanes, the DSP will use lane numbers 4, 5, 6, and 7 for the spare lanes; if the upstream port (USP) drives them, it uses the same numbers. For lanes the DSP cannot drive, the USP must assign lane and line numbers consistent with the rest of the TX line numbering.

The lanes may undergo equalization (812). Equalization is performed on all possible TX/RX pairs, including the spare lanes. It is assumed here that lane 0 does not change direction (even if it is capable of doing so). During phase 2 of link training (when the USP requests the DSP to adjust its TX settings for the USP's RX), the widest possible link width in the DSP-to-USP direction is equalized, and the back-channel requests for the additional lanes are time-multiplexed with the lane 0 transfers. During phase 3, the reverse occurs. Some lanes may be equalized two or three times, in opposite directions, across different (TX, RX) pairs. Thus, for FIG. 6, link equalization (EQ) can occur three times for each data rate: a first equalization for the default lane configuration, a second equalization for all additional uplinks only, and a third equalization for all additional downlinks only.

At the beginning of phase 2 (or phase 3), a lane changing direction is allowed a period of electrical idle to permit the direction change. At the end of EQ, on entry to the Recovery state, the link returns to its required settings, allowing a direction change (if needed) on some lanes during a brief electrical idle. If lane 0 also needs to support the reverse direction, another round of equalization can be performed in which a different lane serves as the back channel while lane 0 is equalized in the reverse direction.

At any time during link operation, the link width may be adjusted, as indicated by the host or device through a register setting (816). When the link needs to change width in either direction, it does so by moving to the Configuration state for reconfiguration, where the required width in each direction is exchanged and determined. This can be dictated by system software writing configuration registers to change the desired width in each direction, which the hardware then follows. It can also be done autonomously by hardware, based on the expected bandwidth requirement in each direction, following a predetermined algorithm (for example, the DSP can apportion the widths proportionally, depending on its own needs and the bandwidth demands of the USP). In an embodiment, the link may then go through the previously described link training process (818); the multi-lane link may then be initialized (e.g., in a default state) (814).

One interconnect fabric architecture includes the Peripheral Component Interconnect (PCI) Express (PCIe) architecture. A primary goal of PCIe is to enable components and devices from different vendors to interoperate in an open architecture, spanning multiple market segments: clients (desktops and mobile), servers (standard and enterprise), and embedded and communication devices. PCI Express is a high-performance, general-purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load-store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of point-to-point interconnects, switch-based technology, and packetized protocols to deliver new levels of performance and features. Power management, quality of service (QoS), hot-plug/hot-swap support, data integrity, and error handling are among the advanced features supported by PCI Express.
Referring to FIG. 9, an embodiment of a fabric composed of point-to-point links that interconnect a set of components is illustrated. The system 900 includes a processor 905 and a system memory 910 coupled to a controller hub 915. The processor 905 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a coprocessor, or another processor. The processor 905 is coupled to the controller hub 915 through a front-side bus (FSB) 906. In one embodiment, the FSB 906 is a serial point-to-point interconnect as described below. In another embodiment, the link 906 includes a serial, differential interconnect architecture that is compliant with a different interconnect standard.

The system memory 910 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in the system 900. The system memory 910 is coupled to the controller hub 915 through a memory interface 916. Examples of memory interfaces include a double data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface.

In one embodiment, the controller hub 915 is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe or PCIE) interconnection hierarchy. Examples of the controller hub 915 include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub. Often, the term chipset refers to two physically separate controller hubs: a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with the processor 905, while the controller 915 communicates with I/O devices in a manner similar to that described below. In some embodiments, peer-to-peer routing is optionally supported through the root complex 915.

Here, the controller hub 915 is coupled to a switch/bridge 920 through a serial link 919. Input/output modules 917 and 921, which may also be referred to as interfaces/ports 917 and 921, include/implement a layered protocol stack to provide communication between the controller hub 915 and the switch 920. In one embodiment, multiple devices are capable of being coupled to the switch 920.

The switch/bridge 920 routes packets/messages from the device 925 upstream (i.e., up the hierarchy toward the root complex) to the controller hub 915, and downstream (i.e., down the hierarchy away from the root controller) from the processor 905 or the system memory 910 to the device 925. In one embodiment, the switch 920 is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. The device 925 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a network interface controller (NIC), an add-in card, an audio processor, a network processor, a hard drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a FireWire device, a universal serial bus (USB) device, a scanner, or another input/output device. Often, in PCIe vernacular, such a device is referred to as an endpoint. Although not specifically shown, the device 925 may include a PCIe-to-PCI/PCI-X bridge to support legacy or other versions of PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root-complex integrated endpoints.

The graphics accelerator 930 is also coupled to the controller hub 915 through a serial link 932. In one embodiment, the graphics accelerator 930 is coupled to an MCH, which is coupled to an ICH.
The switch 920, and accordingly the I/O device 925, is then coupled to the ICH. The I/O modules 931 and 918 also implement a layered protocol stack to communicate between the graphics accelerator 930 and the controller hub 915. Similar to the MCH discussion above, a graphics controller or the graphics accelerator 930 itself may be integrated in the processor 905.

Turning to FIG. 10, an embodiment of a layered protocol stack is illustrated. The layered protocol stack 1000 includes any form of layered communication stack, such as a QuickPath Interconnect (QPI) stack, a PCIe stack, a next-generation high-performance computing interconnect stack, or another layered stack. Although the discussion below relates to a PCIe stack, the same concepts may be applied to other interconnect stacks. In one embodiment, the protocol stack 1000 is a PCIe protocol stack including a transaction layer 1005, a link layer 1010, and a physical layer 1020. An interface, such as the interfaces 917, 918, 921, 922, 926, and 931 in FIG. 9, may be represented as the communication protocol stack 1000. Representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack.

PCI Express uses packets to communicate information between components. Packets are formed in the transaction layer 1005 and the data link layer 1010 to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers. At the receiving side, the reverse process occurs, and packets are transformed from their physical layer 1020 representation to the data link layer 1010 representation and finally (for transaction layer packets) to the form that can be processed by the transaction layer 1005 of the receiving device.

Transaction layer

In one embodiment, the transaction layer 1005 provides an interface between a device's processing core and the interconnect architecture (e.g., the data link layer 1010 and the physical layer 1020). In this regard, a primary responsibility of the transaction layer 1005 is the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs). The transaction layer 1005 typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e., transactions with the request and response separated by time, allowing a link to carry other traffic while the target device gathers data for the response.

In addition, PCIe utilizes credit-based flow control. In this scheme, a device advertises in the transaction layer 1005 an initial amount of credit for each of the receive buffers. An external device at the opposite end of the link, such as the controller hub 915 in FIG. 9, counts the number of credits consumed by each TLP. A transaction may be transmitted if it does not exceed the credit limit; upon receiving a response, the credits are restored. An advantage of a credit scheme is that the latency of credit return does not affect performance, provided the credit limit is not encountered.
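A minimal sketch of that credit scheme follows: the sender debits an advertised credit count per TLP and only transmits when credit remains, restoring credits as the receiver returns them. Charging exactly one credit per TLP is a simplification made for the sketch; real PCIe flow control accounts for header and payload credits separately, per traffic type.

```c
#include <stdbool.h>

/* Simplified credit-based flow control: one credit per TLP. */
struct flow_ctrl {
    unsigned advertised;  /* credits granted by the receiver */
    unsigned consumed;    /* credits debited for sent TLPs   */
};

static bool try_send_tlp(struct flow_ctrl *fc)
{
    if (fc->consumed >= fc->advertised)
        return false;      /* would exceed the credit limit: hold the TLP */
    fc->consumed++;        /* debit and transmit                          */
    return true;
}

/* Called when the receiver frees a buffer and returns credit. As
 * long as the limit is never actually hit, the latency of these
 * returns is invisible to the sender.                              */
static void credit_return(struct flow_ctrl *fc, unsigned n)
{
    fc->advertised += n;
}
```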
In one embodiment, four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. Memory space transactions include one or more of read requests and write requests to transfer data to or from a memory-mapped location. In one embodiment, memory space transactions are capable of using two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions are used to access the configuration space of PCIe devices; transactions to the configuration space include read requests and write requests. Message space transactions (or, simply, messages) are defined to support in-band communication between PCIe agents.

Therefore, in one embodiment, the transaction layer 1005 assembles the packet header/payload 1006. The format for the current packet headers/payloads may be found in the PCIe specification.

Referring quickly to FIG. 11, an embodiment of a PCIe transaction descriptor is illustrated. In one embodiment, the transaction descriptor 1100 is a mechanism for carrying transaction information. In this regard, the transaction descriptor 1100 supports identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and the association of a transaction with channels.

The transaction descriptor 1100 includes a global identifier field 1102, an attributes field 1104, and a channel identifier field 1106. In the illustrated example, the global identifier field 1102 is depicted as comprising a local transaction identifier field 1108 and a source identifier field 1110. In one embodiment, the global transaction identifier 1102 is unique for all outstanding requests.

According to one implementation, the local transaction identifier field 1108 is a field generated by a requesting agent, and it is unique for all outstanding requests that require a completion for that requesting agent. Furthermore, in this example, the source identifier 1110 uniquely identifies the requestor agent within the PCIe hierarchy. Accordingly, together with the source ID 1110, the local transaction identifier field 1108 provides global identification of a transaction within the hierarchy domain.

The attributes field 1104 specifies characteristics and relationships of the transaction. In this regard, the attributes field 1104 is potentially used to provide additional information that allows modification of the default handling of transactions. In one embodiment, the attributes field 1104 includes a priority field 1112, a reserved field 1114, an ordering field 1116, and a no-snoop field 1118. Here, the priority subfield 1112 may be modified by an initiator to assign a priority to the transaction. The reserved attribute field 1114 is left reserved for future or vendor-defined usage; possible usage models employing priority or security attributes may be implemented using the reserved attribute field.

In this example, the ordering attribute field 1116 is used to supply optional information conveying the type of ordering that may modify the default ordering rules. According to one example implementation, an ordering attribute of "0" denotes that default ordering rules are to apply, while an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction and read completions can pass writes in the same direction. The no-snoop attribute field 1118 is utilized to determine whether transactions are snooped. As shown, the channel ID field 1106 identifies the channel with which the transaction is associated.
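The field layout just described can be sketched as a C structure with bit-fields; the widths chosen below are placeholder assumptions to make the example compile, since the text does not give exact bit counts, and only the set of fields mirrors the description of FIG. 11.

```c
#include <stdint.h>

/* Sketch of the transaction descriptor 1100; field widths are
 * placeholder assumptions.                                       */
struct transaction_descriptor {
    /* Global identifier field 1102 */
    uint32_t local_txn_id : 8;   /* 1108: unique per outstanding request
                                          of the requesting agent          */
    uint32_t source_id    : 16;  /* 1110: identifies the requestor agent
                                          within the PCIe hierarchy        */
    /* Attributes field 1104 */
    uint32_t priority     : 2;   /* 1112: set by the initiator             */
    uint32_t reserved     : 2;   /* 1114: future/vendor-defined use        */
    uint32_t ordering     : 1;   /* 1116: 0 = default, 1 = relaxed         */
    uint32_t no_snoop     : 1;   /* 1118: whether the transaction is
                                          snooped                          */
    /* Channel identifier field 1106 */
    uint32_t channel_id   : 2;
};
```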
Link Layer
The link layer 1010 (also referred to as the data link layer 1010) acts as an intermediate stage between the transaction layer 1005 and the physical layer 1020. In one embodiment, a responsibility of the data link layer 1010 is to provide a reliable mechanism for exchanging transaction layer packets (TLPs) between two components over a link. One side of the data link layer 1010 accepts TLPs assembled by the transaction layer 1005, applies a packet sequence identifier 1011 (i.e., an identification number or packet number), calculates and applies an error detection code, i.e., CRC 1012, and submits the modified TLPs to the physical layer 1020 for transmission across the physical medium to an external device.
Physical Layer
In one embodiment, the physical layer 1020 includes a logical sub-block 1021 and an electrical sub-block 1022 to physically transmit a packet to an external device. Here, the logical sub-block 1021 is responsible for the "digital" functions of the physical layer 1020. In this regard, the logical sub-block includes a transmit section to prepare outgoing information for transmission by the electrical sub-block 1022, and a receiver section to identify and prepare received information before passing it to the link layer 1010.
The electrical sub-block 1022 includes a transmitter and a receiver. The transmitter is supplied with symbols by the logical sub-block 1021, which the transmitter serializes and transmits to an external device. The receiver is supplied with serialized symbols from an external device and transforms the received signals into a bit stream. The bit stream is de-serialized and supplied to the logical sub-block 1021. In one embodiment, an 8b/10b transmission code is employed, in which ten-bit symbols are transmitted/received. Here, special symbols are used to frame a packet with frames 1023. In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.
As stated above, although the transaction layer 1005, the link layer 1010, and the physical layer 1020 are discussed with reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface that is represented as a layered protocol includes: (1) a first layer to assemble packets, i.e., a transaction layer; a second layer to sequence packets, i.e., a link layer; and a third layer to transmit the packets, i.e., a physical layer. As a specific example, a common standard interface (CSI) layered protocol is utilized.
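For illustration only, the data link layer behavior described above may be sketched as follows (the two-byte sequence field and the CRC-32 used here are simplified stand-ins for the actual PCIe sequence number and LCRC):

```python
# Sketch of the data link layer role described above: accept a TLP from the
# transaction layer, apply a packet sequence identifier (1011) and an error
# detection code (1012), and hand the result to the physical layer.
import zlib

class DataLinkLayer:
    def __init__(self):
        self.next_seq = 0

    def transmit(self, tlp_bytes: bytes) -> bytes:
        seq = self.next_seq.to_bytes(2, "big")                # sequence id 1011
        crc = zlib.crc32(seq + tlp_bytes).to_bytes(4, "big")  # stand-in for CRC 1012
        self.next_seq = (self.next_seq + 1) & 0xFFF
        return seq + tlp_bytes + crc                          # to the physical layer

    @staticmethod
    def receive(frame: bytes) -> bytes:
        """Reverse process on the receiving side: verify, then strip framing."""
        seq, payload, crc = frame[:2], frame[2:-4], frame[-4:]
        if zlib.crc32(seq + payload).to_bytes(4, "big") != crc:
            raise ValueError("CRC mismatch: the TLP would be replayed")
        return payload

dll = DataLinkLayer()
frame = dll.transmit(b"\x01\x02\x03")
assert DataLinkLayer.receive(frame) == b"\x01\x02\x03"
```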
Turning to FIG. 12, shown is a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction, where one or more of the interconnects implement one or more features in accordance with one embodiment of the present invention. System 1200 includes a component, such as a processor 1202, to employ execution units including logic to perform algorithms for processing data, in accordance with the present invention, such as in the embodiments described herein. System 1200 is representative of processing systems based on the PENTIUM III™, PENTIUM 4™, Xeon™, Itanium, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes, and the like) may also be used. In one embodiment, the sample system 1200 executes a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux, for example), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software.
Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.
In the embodiment illustrated herein, the processor 1202 includes one or more execution units 1208 to implement an algorithm that is to perform at least one instruction. One embodiment may be described in the context of a single-processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 1200 is an example of a "hub" system architecture. The computer system 1200 includes a processor 1202 to process data signals. The processor 1202, as one illustrative example, may be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor. The processor 1202 is coupled to a processor bus 1210 that transmits data signals between the processor 1202 and other components in the system 1200. The elements of system 1200 (e.g., graphics accelerator 1212, memory controller hub 1216, memory 1220, I/O controller hub 1224, wireless transceiver 1226, flash BIOS 1228, network controller 1234, audio controller 1236, serial expansion port 1238, I/O controller 1240, etc.) perform conventional functions that are well known to those skilled in the art.
In one embodiment, the processor 1202 includes a Level 1 (L1) internal cache memory 1204. Depending on the architecture, the processor 1202 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches, depending on the particular implementation and needs. The register file 1206 is to store different types of data in various registers, including integer registers, floating-point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, and instruction pointer registers.
An execution unit 1208, including logic to perform integer and floating-point operations, also resides in the processor 1202. In one embodiment, the processor 1202 includes a microcode (ucode) ROM to store microcode which, when executed, performs algorithms for certain macroinstructions or handles complex scenarios. Here, microcode is potentially updateable to handle logic bugs/fixes for processor 1202. For one embodiment, the execution unit 1208 includes logic to handle a packed instruction set 1209.
By including the packed instruction set 1209 in the instruction set of the general-purpose processor 1202, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in the general-purpose processor 1202. Thus, many multimedia applications are accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This potentially eliminates the need to transfer smaller units of data across the processor's data bus to perform one or more operations, one data element at a time.
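For illustration only, the packed-data principle described above may be sketched as follows (a lane-wise add of four 8-bit elements carried in one 32-bit word; purely illustrative, not an instruction of processor 1202):

```python
# Sketch of packed-data operation: four 8-bit elements in one 32-bit word
# are added lane by lane in a single call, instead of one element at a time.

def packed_add_8x4(a: int, b: int) -> int:
    """Lane-wise add of four 8-bit lanes packed into 32-bit integers,
    with per-lane wraparound (no carry across lane boundaries)."""
    result = 0
    for lane in range(4):
        shift = lane * 8
        lane_sum = (((a >> shift) & 0xFF) + ((b >> shift) & 0xFF)) & 0xFF
        result |= lane_sum << shift
    return result

a = 0x01_02_03_04
b = 0x10_20_30_FF
print(hex(packed_add_8x4(a, b)))  # 0x11223303: lowest lane wraps 0x04+0xFF=0x03
```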
Alternate embodiments of the execution unit 1208 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 1200 includes a memory 1220. Memory 1220 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, or another storage device. Memory 1220 stores instructions and/or data represented by data signals that are to be executed by the processor 1202.
Note that any of the aforementioned features or aspects of the invention may be utilized on one or more of the interconnects illustrated in FIG. 12. For example, an on-die interconnect (ODI), which is not shown, for coupling internal units of the processor 1202 implements one or more aspects of the invention described above. Alternatively, the invention may be associated with a processor bus 1210 (e.g., Intel Quick Path Interconnect (QPI) or another known high-performance computing interconnect), a high-bandwidth memory path 1218 to memory 1220, a point-to-point link to graphics accelerator 1212 (e.g., a Peripheral Component Interconnect Express (PCIe)-compliant fabric), a controller hub interconnect 1222, an I/O or other interconnect (e.g., USB, PCI, PCIe) for coupling the other illustrated components. Some examples of such components include the audio controller 1236, firmware hub (flash BIOS) 1228, wireless transceiver 1226, data storage device 1224, legacy I/O controller 1210 containing user input and keyboard interfaces 1242, a serial expansion port 1238 such as Universal Serial Bus (USB), and a network controller 1234. The data storage device 1224 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.
Referring now to FIG. 13, shown is a block diagram of a second system 1300 in accordance with an embodiment of the present invention. As shown in FIG. 13, the multiprocessor system 1300 is a point-to-point interconnect system, and includes a first processor 1370 and a second processor 1380 coupled via a point-to-point interconnect 1350. Each of processors 1370 and 1380 may be some version of a processor. In one embodiment, 1352 and 1354 are part of a serial, point-to-point coherent interconnect fabric, such as Intel's Quick Path Interconnect (QPI) architecture. As a result, the invention may be implemented within the QPI architecture.
While shown with only two processors 1370, 1380, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.
Processors 1370 and 1380 are shown including integrated memory controller units 1372 and 1382, respectively. Processor 1370 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 1376 and 1378; similarly, the second processor 1380 includes P-P interfaces 1386 and 1388. Processors 1370, 1380 may exchange information via a point-to-point (P-P) interface 1350 using P-P interface circuits 1378, 1388. As shown in FIG. 13, IMCs 1372 and 1382 couple the processors to respective memories, namely a memory 1332 and a memory 1334, which may be portions of main memory locally attached to the respective processors.
Processors 1370, 1380 each exchange information with a chipset 1390 via individual P-P interfaces 1352, 1354 using point-to-point interface circuits 1376, 1394, 1386, 1398. Chipset 1390 also exchanges information with a high-performance graphics circuit 1338 via an interface circuit 1392 along a high-performance graphics interconnect 1339.
A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low-power mode.
Chipset 1390 may be coupled to a first bus 1316 via an interface 1396. In one embodiment, the first bus 1316 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present invention is not so limited.
As shown in FIG. 13, various I/O devices 1314 are coupled to the first bus 1316, along with a bus bridge 1318 which couples the first bus 1316 to a second bus 1320. In one embodiment, the second bus 1320 includes a low pin count (LPC) bus. Various devices are coupled to the second bus 1320 including, for example, a keyboard and/or mouse 1322, communication devices 1327, and a storage unit 1328 such as a disk drive or other mass storage device which often includes instructions/code and data 1330, in one embodiment. Further, an audio I/O 1324 is shown coupled to the second bus 1320. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture of FIG. 13, a system may implement a multi-drop bus or another such architecture.
The foregoing disclosure has presented a number of example test link states that can supplement standard link states defined in an interconnect protocol. It should be appreciated that other test link states may be provided, in addition to those identified above, without departing from the more generalized principles contained within this disclosure. For instance, while some of the example state machines and ordered sequences discussed herein were described with reference to PCIe or PCIe-based protocols, it should be appreciated that similar, corresponding enhancements may be made to other interconnect protocols, such as OpenCAPI™, Gen-Z™, UPI, Universal Serial Bus (USB), Cache Coherent Interconnect for Accelerators (CCIX™), Advanced Micro Devices™ (AMD™) Infinity™, Common Communication Interface (CCI), or Qualcomm™ Centriq™ interconnect, among others.
Note that the apparatuses, methods, and systems described above may be implemented in any electronic device or system as aforementioned. As a specific illustration, the following figure provides an exemplary system for utilizing the invention as described herein.
As the systems below are described in more detail, a number of different interconnects are disclosed, described, and revisited from the discussion above. And, as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures. For instance, a host and a device may be implemented, which are equipped with functionality to implement the authentication and measurement architecture as discussed in the examples above, in any one of a variety of computing architectures (e.g., utilizing any one of a variety of different interconnects or fabrics). For instance, a host may connect to a device supporting the authentication architecture within a personal computing system (e.g., implemented in a laptop, desktop, mobile device, smartphone, Internet of Things (IoT) device, smart appliance, game console, media console, etc.). In another example, a host may connect to a device supporting the authentication architecture within a server computing system (e.g., a rack server, blade server, tower server, rack scale server architecture, or other disaggregated server architecture), among other examples.
Systems, methods, and apparatuses may include one or a combination of the following examples:
Example 1 is an apparatus for configuring a multi-channel link, the apparatus including a port including hardware to support the multi-channel link, the link including a channel including a first differential signal pair and a second differential signal pair, wherein the first differential signal pair is initially configured to transmit data and the second differential signal pair is initially configured to receive data; and link configuration logic, implemented at least in part in hardware circuitry, to determine that the port includes hardware to support one or both of receiving data on the first differential signal pair or transmitting data on the second differential signal pair, and to reconfigure the first differential signal pair to receive data or reconfigure the second differential signal pair to transmit data; and wherein the port is to transmit and receive data based on the reconfiguration of one or both of the first differential signal pair and the second differential signal pair.
Example 2 may include the subject matter of Example 1, wherein the port includes a Peripheral Component Interconnect Express (PCIe)-based port.
Example 3 may include the subject matter of any of Examples 1-2, wherein the link configuration logic is to receive an announcement during a link training phase of operation, the announcement indicating that the port includes hardware to support one or both of receiving data on the first differential signal pair or transmitting data on the second differential signal pair.
Example 4 may include the subject matter of any of Examples 1-3, wherein the link configuration logic is to perform link equalization for the first and second differential signal pairs.
Example 5 may include the subject matter of any of Examples 1-4, wherein the apparatus includes a buffer memory coupled to the port to buffer transmit data to be transmitted on the first and second differential signal pairs, or to buffer receive data received on the first and second differential signal pairs.
Example 6 may include the subject matter of Example 5, wherein the buffer memory includes a common stack for each of the first and second differential signal pairs.
Example 7 may include the subject matter of Example 5, wherein the buffer memory includes a first stack for the first differential signal pair and a second stack for the second differential signal pair.
Example 8 may include the subject matter of any one of Examples 1-7, wherein the port includes hardware to support a plurality of TX lines and a plurality of RX lines, and wherein the port includes hardware to receive data on a subset of the plurality of TX lines and/or transmit data on a subset of the plurality of RX lines.
Example 9 may include the subject matter of Example 8, wherein the link configuration logic is to assign a channel number to the subset of the plurality of TX lines or the subset of the plurality of RX lines.
Example 10 may include the subject matter of any of Examples 1-9, wherein the port includes hardware to receive control signaling on the first differential signal pair or transmit control signaling on the second differential signal pair.
Example 11 may include the subject matter of any of Examples 1-10, wherein the link configuration logic is to determine a reconfiguration of the first or second differential signal pair based on bandwidth utilization information.
Example 12 is at least one non-transitory machine-accessible storage medium having instructions stored thereon, the instructions, when executed on a machine, causing the machine to detect a device connected to a host device across a link, where the link includes: a first signal channel initially configured to transmit data from the device to the host device; and a second signal channel initially configured to receive data at the device from the host device; receive a capability announcement from the device, the capability announcement indicating that the device can support at least one of: conversion of the first signal channel to receive data or conversion of the second signal channel to transmit data; perform a channel configuration to reconfigure the first signal channel to receive data or reconfigure the second signal channel to transmit data; and transmit data over the link based on the reconfiguration of one or both of the first and second signal channels.
Example 13 may include the subject matter of Example 12, wherein the instructions, when executed, cause the machine to perform link training on one or more channels connecting the host to the device; detect a capability announcement during the link training, the capability announcement indicating that the device can support at least one of: conversion of the first signal channel to receive data or conversion of the second signal channel to transmit data; configure the first signal channel to receive data or configure the second signal channel to transmit data; and perform equalization on the channels during the link training.
Example 14 may include the subject matter of Example 13, wherein the instructions, when executed, cause the machine to enter an L0 state of an active state power management (ASPM) protocol after completing the link training.
Example 15 may include the subject matter of Example 13, wherein the instructions, when executed, cause the machine to determine a bandwidth utilization capability of the device based on the link training; and configure the multi-channel link to be asymmetric based at least in part on the bandwidth utilization capability.
Example 16 may include the subject matter of Example 13, wherein the instructions, when executed, cause the machine to detect an indication from the device to return the one or more channels to a default state; and reconfigure the link to return to the default state.
Example 17 is a system including a host including a data processor, a port, and a system manager; and a device connected to the host across a multi-channel link, the multi-channel link including a channel, the channel including a first differential signal pair initially configured to transmit data on a first channel of the link and a second differential signal pair initially configured to receive data on the first channel of the link; wherein the system manager is to receive a capability announcement of the device, the capability announcement indicating that the device is capable of receiving data using the first differential signal pair or transmitting data using the second differential signal pair; reconfigure the first differential signal pair to receive data or reconfigure the second differential signal pair to transmit data, based at least in part on the capability announcement; and perform data transmission on the first and second differential signal pairs after reconfiguring the second differential signal pair, or perform data reception on the first and second differential signal pairs after reconfiguring the first differential signal pair.
Example 18 may include the subject matter of Example 17, wherein the port includes a Peripheral Component Interconnect Express (PCIe)-based port.
Example 19 may include the subject matter of any of Examples 17-18, wherein the system manager is to receive an announcement during a link training phase of operation, the announcement indicating that the port includes hardware to support receiving data on the first differential signal pair or transmitting data on the second differential signal pair.
Example 20 may include the subject matter of any of Examples 17-19, wherein the system manager logic is to perform link equalization for the first and second differential signal pairs.
Example 21 may include the subject matter of any of Examples 17-20, and may also include a buffer memory coupled to the port to buffer TX data to be transmitted on the first and second differential signal pairs, or to buffer RX data received on the first and second differential signal pairs.
Example 22 may include the subject matter of Example 21, wherein the buffer memory includes a common stack for each of the first and second differential signal pairs.
Example 23 may include the subject matter of Example 21, wherein the buffer memory includes a first stack for the first differential signal pair and a second stack for the second differential signal pair.
Example 24 may include the subject matter of any one of Examples 17-23, wherein the system manager is to determine that the device uses more downstream bandwidth than upstream bandwidth; configure the second differential signal pair to transmit data; and perform data transmission on the first and second differential signal pairs.
Example 25 may include the subject matter of any one of Examples 17-24, wherein the system manager is to determine that the device uses more upstream bandwidth than downstream bandwidth; configure the first differential signal pair to receive data; and perform data reception on the first and second differential signal pairs.
Example 26 is a method including detecting a device connected to a host device across a link, where the link includes a first signal channel initially configured to transmit data from the device to the host device and a second signal channel initially configured to receive data at the device from the host device; receiving a capability announcement from the device, the capability announcement indicating that the device can support at least one of: conversion of the first signal channel to receive data or conversion of the second signal channel to transmit data; performing a channel configuration to reconfigure the first signal channel to receive data or reconfigure the second signal channel to transmit data; and transmitting data over the link based on the reconfiguration of one or both of the first and second signal channels.
Example 27 may include the subject matter of Example 26, and further includes performing link training on one or more channels connecting the host to the device; detecting a capability announcement during the link training, the capability announcement indicating that the device can support at least one of: conversion of the first signal channel to receive data or conversion of the second signal channel to transmit data; configuring the first signal channel to receive data or configuring the second signal channel to transmit data; and performing equalization on the channels during the link training.
Example 28 may include the subject matter of Example 27, and also includes placing the machine into an L0 state of an active state power management (ASPM) protocol after the link training is completed.
Example 29 may include the subject matter of Example 27, further including determining a bandwidth utilization capability of the device based on the link training; and configuring the multi-channel link to be asymmetric based at least in part on the bandwidth utilization capability.
Example 30 may include the subject matter of Example 27, and further includes detecting an indication from the device to return the one or more channels to a default state; and reconfiguring the link to return to the default state.
Example 31 is an apparatus for configuring a multi-channel link, the apparatus including a port including hardware to support the multi-channel link, the link including a channel, the channel including a first differential signal pair and a second differential signal pair, wherein the first differential signal pair is initially configured to transmit data and the second differential signal pair is initially configured to receive data; means for determining that the port includes hardware to support one or both of receiving data on the first differential signal pair or transmitting data on the second differential signal pair; means for reconfiguring the first differential signal pair to receive data or reconfiguring the second differential signal pair to transmit data; and wherein the port is to transmit and receive data based on the reconfiguration of one or both of the first differential signal pair and the second differential signal pair.
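For illustration only, the bandwidth-driven reconfiguration logic referenced in Examples 11, 15, 24, and 25 may be sketched as follows (the function name and the utilization threshold are illustrative assumptions, not part of the examples above):

```python
# Hedged sketch of bandwidth-based channel reconfiguration: if observed
# traffic is asymmetric and the port advertised the capability, repurpose
# the TX pair to receive or the RX pair to transmit.

def reconfigure_channels(tx_util: float, rx_util: float,
                         can_rx_on_tx_pair: bool, can_tx_on_rx_pair: bool,
                         threshold: float = 0.75) -> str:
    """Return which differential pair, if any, to reconfigure."""
    if rx_util > threshold and tx_util < (1 - threshold) and can_rx_on_tx_pair:
        return "reconfigure first (TX) differential pair to receive data"
    if tx_util > threshold and rx_util < (1 - threshold) and can_tx_on_rx_pair:
        return "reconfigure second (RX) differential pair to transmit data"
    return "keep symmetric default configuration"

# A device that mostly downloads: give it another receive pair.
print(reconfigure_channels(tx_util=0.10, rx_util=0.90,
                           can_rx_on_tx_pair=True, can_tx_on_rx_pair=True))
```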
An integrated circuit (IC) is disclosed. The IC includes a first global voltage node and a second global voltage node. The IC further includes two or more power domains (21, 22) each coupled to the first global voltage node. Each of the two or more power domains (21, 22) includes a functional unit (24) and a local voltage node coupled to the functional unit (24). Each of the two or more power domains (21, 22) further includes a power-gating transistor (25) coupled between the local voltage node and the second global voltage node, and an ESD (electrostatic discharge) circuit (26) configured to detect an occurrence of an ESD event and further configured to cause activation of the transistor (25) responsive to detecting the ESD event.
WHAT IS CLAIMED IS:
1. An integrated circuit comprising: a first global voltage node and a second global voltage node; two or more power domains each coupled to the first global voltage node, wherein each of the two or more power domains includes: a local voltage node; a first transistor coupled between the local voltage node and the second global voltage node; and an ESD (electrostatic discharge) circuit configured to detect an occurrence of an ESD event and further configured to cause activation of the first transistor responsive to detecting the ESD event.
2. The integrated circuit as recited in claim 1, wherein each of the power domains includes a functional unit coupled between the first global voltage node and its respective local voltage node, and wherein the ESD circuit of each of the power domains is further configured to provide power to the functional unit of its respective one of the plurality of power domains by activating the first transistor responsive to receiving a first indication from a power control unit of the integrated circuit.
3. The integrated circuit as recited in claim 2, wherein, in the absence of an ESD event, the first transistor is configured to be inactive responsive to the ESD circuit receiving a second indication from the power control unit.
4. The integrated circuit as recited in claim 2, wherein the power control unit is further configured to control powering on and off of the plurality of power domains independently of one another, wherein powering on a particular one of the plurality of power domains comprises providing the first indication to the ESD circuit of that one of the plurality of power domains, and wherein removing power from the particular one of the plurality of power domains comprises providing a second indication to the ESD circuit of that one of the plurality of power domains.
5. The integrated circuit as recited in claim 2, wherein the ESD circuit includes: an RC (resistive-capacitive) circuit having a resistor and a capacitor coupled in series between the first global voltage node and the second global voltage node; and a logic gate having a first input coupled to a junction of the resistor and the capacitor.
6. The integrated circuit as recited in claim 5, wherein the logic gate further includes a second input coupled to receive the first indication from the power control unit.
7. The integrated circuit as recited in claim 1, wherein the first global voltage node is a power supply node, and wherein the second global voltage node is a return node.
8. The integrated circuit as recited in claim 1, wherein the first global voltage node is a return node and wherein the second global voltage node is a voltage supply node.
9. The integrated circuit as recited in claim 1, wherein each of the plurality of power domains includes one or more decoupling capacitors coupled between the first global voltage node and its respective local voltage node.
10. The integrated circuit as recited in claim 9, wherein each of the plurality of power domains includes a second transistor coupled between the first global voltage node and its respective local voltage node, wherein the ESD circuit is configured to activate the second transistor responsive to detecting the ESD event.
11. The integrated circuit as recited in claim 1, wherein each of the two or more power domains includes two or more transistors coupled between its respective local voltage node and the second global voltage node, wherein each of the two or more transistors is coupled to its respective ESD circuit, and wherein the respective ESD circuit is configured to activate the two or more transistors responsive to detecting the ESD event or responsive to receiving a corresponding indication from a power control unit.
12. A method comprising: an ESD (electrostatic discharge) circuit detecting an ESD event, wherein the ESD circuit is associated with one of a plurality of power domains of an integrated circuit (IC), wherein each of the plurality of power domains is associated with a corresponding one of a plurality of ESD circuits and is coupled between a first global voltage node and a second global voltage node; and providing a discharge path between the second global voltage node and a local voltage node of the one of the plurality of power domains responsive to detecting the ESD event.
13. The method as recited in claim 12, wherein said providing the discharge path comprises the ESD circuit activating one or more transistors coupled between the local voltage node and the second global voltage node.
14. The method as recited in claim 13, further comprising the ESD circuit activating the one or more transistors coupled between the local voltage node and the second global voltage node responsive to receiving a first indication from a power control unit.
15. The method as recited in claim 14, further comprising the power control unit powering on particular ones of the plurality of power domains independently of one another, and further comprising the power control unit powering down particular ones of the plurality of power domains independently of one another by providing a second indication to the particular ones of the plurality of power domains.
16. The method as recited in claim 13, wherein activating the one or more transistors comprises coupling a global supply voltage node to a local supply voltage node, wherein the second global voltage node is the global supply voltage node, and wherein the first global voltage node is a return voltage node.
17. The method as recited in claim 13, wherein activating the one or more transistors comprises coupling a global return voltage node to a local return voltage node, wherein the second global voltage node is the global return voltage node, and wherein the first global voltage node is a supply voltage node.
18. A non-transitory computer readable medium storing a data structure which is operated upon by a program executable on a computer system, the program operating on the data structure to perform a portion of a process to fabricate an integrated circuit including circuitry described by the data structure, the circuitry described in the data structure including: an integrated circuit (IC) having a first global voltage node and a second global voltage node; two or more power domains each coupled to the first global voltage node, wherein each of the two or more power domains includes: a local voltage node; a transistor coupled between the local voltage node and the second global voltage node; and an ESD (electrostatic discharge) circuit configured to detect an occurrence of an ESD event and further configured to cause activation of the transistor responsive to detecting the ESD event.
19. The computer readable medium as recited in claim 18, wherein the ESD circuit described in the data structure is further configured to provide power to a functional unit of its respective one of the plurality of power domains by activating the transistor responsive to receiving a first indication from a power control unit of the integrated circuit, wherein the functional unit of each of the power domains is coupled between the first global voltage node and its respective local voltage node.
20. The computer readable medium as recited in claim 19, wherein the power control unit described in the data structure is further configured to power on each of the plurality of power domains independently of one another, wherein powering on a particular one of the plurality of power domains comprises providing the first indication to the ESD circuit of that one of the plurality of power domains, and wherein removing power from the particular one of the plurality of power domains comprises providing a second indication to the ESD circuit of that one of the plurality of power domains.
21. The computer readable medium as recited in claim 18, wherein each of the two or more power domains of the IC described in the data structure includes two or more transistors coupled between its respective local voltage node and the second global voltage node, wherein each of the two or more transistors is coupled to its respective ESD circuit, and wherein the respective ESD circuit is configured to activate the two or more transistors responsive to detecting the ESD event or responsive to receiving a corresponding indication from a power control unit.
22. The computer readable medium as recited in claim 18, wherein the data structure comprises one or more of the following types of data: HDL (high-level design language) data; RTL (register transfer level) data; Graphic Data System (GDS) II data.
TITLE: ELECTROSTATIC DISCHARGE CIRCUIT
BACKGROUND
1. Field of the Invention
This invention relates to electronic circuits, and more particularly, to circuits for protecting against damage from electrostatic discharge (ESD).
2. Description of the Related Art
One of the hazards of handling electronic devices is that resulting from electrostatic discharge (ESD). ESD is a sudden increase in electrical current between two points at different electrical potentials resulting from a field of static electricity. Contact between the two points may provide a discharge path for the electric field. Since the potential difference between the two points may be very large, the current resulting from ESD may also be very large. Semiconductor devices (e.g., integrated circuits) are particularly vulnerable to damage from ESD. During the manufacturing process, and later in the field, the handling of semiconductor devices and/or assemblies may result in ESD events. Such ESD events can damage or destroy semiconductor devices. Personnel that handle electronic devices and assemblies may take precautions, such as the use of grounding straps or the wearing of grounded shoes, in order to prevent ESD from damaging handled components. However, these precautions may not always be sufficient. Accordingly, many modern electronic devices are designed with ESD protection built in. One type of ESD circuit is referred to as an ESD clamp. An ESD clamp may include an RC (resistive-capacitive) circuit coupled between a power node and a ground node, and a relatively large transistor having a gate terminal coupled to the junction of the resistor and the capacitor of the RC circuit. When an ESD event occurs, the voltage on the junction of the RC circuit may activate the transistor, thereby providing a discharge path for the current from the discharge.
SUMMARY OF EMBODIMENTS OF THE DISCLOSURE
An integrated circuit (IC) is disclosed. In one embodiment, the IC includes a first global voltage node and a second global voltage node. The IC further includes two or more power domains each coupled to the first global voltage node. Each of the two or more power domains includes a functional unit and a local voltage node coupled to the functional unit. Each of the power domains further includes a transistor coupled between the local voltage node and the second global voltage node, and an ESD (electrostatic discharge) circuit configured to detect an occurrence of an ESD event and further configured to cause activation of the transistor responsive to detecting the ESD event. In one embodiment, a method includes an ESD (electrostatic discharge) circuit detecting an ESD event. The ESD circuit is associated with one of a plurality of power domains of an IC, wherein each of the plurality of power domains is associated with a corresponding one of a plurality of ESD circuits and is coupled between a first global voltage node and a second global voltage node. The method further includes providing a discharge path between the second global voltage node and a local voltage node of the one of the plurality of power domains responsive to detecting the ESD event.
BRIEF DESCRIPTION OF THE DRAWINGS
Other aspects of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which: Fig. 1 is a diagram illustrating one embodiment of an integrated circuit (IC) having a plurality of power domains which each utilize a power-gating transistor to provide an ESD discharge path; Fig.
2 is a diagram illustrating one embodiment of an ESD circuit in an IC; Fig. 3 is a diagram illustrating another embodiment of an IC having a plurality of power domains which each utilize a power-gating transistor to provide an ESD discharge path; Fig. 4 is a diagram illustrating another embodiment of an ESD circuit in an IC; Fig. 5 is a diagram illustrating another embodiment of an IC having a plurality of power domains which each utilize a power-gating transistor to provide an ESD discharge path; Fig. 6 is a flow diagram illustrating one embodiment of a method for providing an ESD discharge path in an IC; and Fig. 7 is a block diagram of one embodiment of a carrier medium storing a data structure representative of an embodiment of an IC. While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and description thereto are not intended to limit the invention to the particular form disclosed, but, on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. DETAILED DESCRIPTION The present disclosure is directed to ESD (electrostatic discharge) protection for an integrated circuit (IC) having multiple power domains that may be selectively and independently powered on or off for the purposes of conserving power. In each power domain, an ESD detection circuit may be implemented in order to detect ESD events. Upon detecting an ESD event, the ESD detection circuit may generate a signal to activate a power-gating transistor, which may thereby complete a discharge path for the current generated by the ESD event. The ESD detection circuit of each power domain may also be coupled to a power control unit of the IC. A selected power domain may be powered up responsive to the power control unit providing a first indication to its respective ESD detection circuit, thereby activating the power-gating transistor. Similarly, the selected power domain may be powered down responsive to the power control unit providing a second indication to the ESD detection circuit, which may in turn deactivate the power-gating transistor. Accordingly, the power-gating transistor, in addition to its function of applying or removing power from a corresponding power domain, may also be used for ESD protection purposes. This may in turn obviate the need to provide extra transistors specifically for the purpose of ESD protection, thereby resulting in area savings on an IC die. Various embodiments of such an IC will now be discussed in further detail. For the purposes of this disclosure, an ESD event may be defined as any sudden increase in electrical current between two points at different electrical potentials resulting from a field of static electricity. When such ESD events occur in an electronic circuit (e.g., in an IC), they may cause damage to circuitry therein in the absence of a discharge path. A global voltage node may be defined, for the purposes of this disclosure, as any voltage node (e.g., voltage supply node, ground node) that is coupled to two or more power domains of an IC or other type of electronic system in which circuitry therein may be powered on or off independently of circuitry in other power domains. 
A local voltage node for the purposes of this disclosure may be defined as a voltage node that is local to the circuitry of a particular power domain, and is thus not coupled to circuitry in another power domain. Thus, for the purposes of this disclosure, applying power to a particular power domain may include coupling a local voltage node of that power domain to a corresponding global voltage node (e.g., coupling a local voltage supply node to a global voltage supply node).
IC and ESD Circuit Embodiments:
Fig. 1 is a diagram illustrating one embodiment of an IC having a plurality of power domains which each utilize a power-gating transistor to provide an ESD discharge path. In the embodiment shown, IC 10 includes a first power domain 21 and a second power domain 22. The exact number of power domains in a given embodiment may vary, and thus the example shown here is not intended to be limiting. Each of power domains 21 and 22 in the embodiment shown is coupled to receive a voltage from a global voltage supply node, Vdd. In addition, IC 10 includes a second voltage node, Vss, which serves as a global return voltage node. A decoupling capacitance 27 may be provided between the global voltage supply node and the global voltage return node. Decoupling capacitance 27 may be implemented using one or more capacitors, and may be distributed across IC 10. Power supply noise may be shunted to the return node through decoupling capacitance 27, thereby maintaining the voltage difference between the global voltage supply node and the global voltage return node at a substantially constant value. Each of the power domains 21 and 22 in the embodiment shown includes a local return node, Vss-Local1 and Vss-Local2, respectively. A local decoupling capacitance 23 comprised of one or more capacitors may be provided in each of power domains 21 and 22. These capacitors may provide a similar function to that of the global decoupling capacitance 27 described above, and may also provide a portion of a discharge path for current generated from an ESD event, as will be described in further detail below. Respective power-gating transistors 25 are coupled between the local return nodes of power domains 21 and 22 and the global return node, Vss. A particular one of power domains 21 and 22 may be powered on by activating its corresponding power-gating transistor 25, which may effectively couple its local return node to the global return node Vss. It is noted that power domains 21 and 22 may be powered on and off independently of one another. IC 10 may be one of many different types of ICs that include multiple power domains that may be powered on or off independently of one another. For example, IC 10 may in one embodiment be a multi-core processor, with each functional unit 24 comprising the circuitry that makes up a core. In another embodiment, IC 10 may be an IC intended for use in a portable device in which preserving battery power is critical, with each power domain including a corresponding functional unit 24 that may be powered off when not in use. It should be noted that functional units 24 may be identical in some embodiments of IC 10, while in other embodiments, functional units 24 may be different from one another. In general, IC 10 may be any type of IC which includes portions (e.g., power domains) that may be powered on or off independently of other portions. Similarly, functional unit 24 may be any type of functional circuitry that performs one or more intended functions of IC 10.
Each of power domains 21 and 22 in the embodiment shown may be powered on by activation of its corresponding power-gating transistor 25. In the embodiment shown, each power-gating transistor 25 has a gate terminal coupled to a respective ESD detection circuit 26. Each ESD detection circuit 26 is coupled to receive a respective signal from power control unit 28. When an ESD detection circuit 26 receives a respective power-on signal (e.g., Power On 1 for power domain 21, Power On 2 for power domain 22), it may respond by asserting a signal ('Detect/On') that is received on the gate terminal of the power-gating transistor 25 of that power domain. These signals may be de-asserted by their respective ESD detection circuit 26 responsive to de-assertion of the respective power-on signal by power control unit 28. Accordingly, power control unit 28 may effectively control whether or not power is provided to power domains 21 and 22 during normal operation of IC 10. The assertion of a 'Detect/On' signal on the gate terminal of a power-gating transistor 25 may in turn activate that transistor, thus effectively coupling its local return voltage node to the global return voltage node. For example, if power-gating transistor 25 of power domain 21 is activated, Vss-Local1 may effectively be coupled to Vss-Global, thereby enabling power to be provided to functional unit 24. Conversely, de-assertion of the signal on the gate terminal of a power-gating transistor 25 may remove power therefrom. For example, if the signal provided to the gate terminal of power-gating transistor 25 in power domain 21 is de-asserted, Vss-Local1 is effectively decoupled from Vss-Global, and thus power may be removed from that power domain. In addition to the power-gating functions described above, power-gating transistors 25 may also be used to complete a discharge path for current generated during an ESD event. Each ESD detection circuit 26 may be configured to detect ESD events that might otherwise be potentially damaging to the circuitry in each of power domains 21 and 22. Responsive to detection of an ESD event, an ESD detection circuit 26 may assert its corresponding 'Detect/On' signal, thereby activating the power-gating transistor 25 of its respective power domain. When the power-gating transistors 25 of power domains 21 and 22 are active, a discharge path for current may be provided through the capacitance 23 to the local voltage return node (e.g., Vss-Local1) and through the active power-gating transistor 25. Accordingly, the power-gating transistors 25 in the embodiment shown may provide an ESD discharge path in addition to performing the power-gating function previously described. Using power-gating transistors to provide an ESD discharge path in the manner described may obviate the need for providing separate transistors to perform this function. This may thus enable the provision of ESD protection for an IC such as IC 10, while also saving circuit area that might otherwise be consumed by separate ESD transistors, which can be relatively large. Fig. 2 is a diagram illustrating one embodiment of ESD circuit 26 of IC 10. For the sake of illustration, additional elements are shown in Fig. 2 in order to fully illustrate the relationship of ESD detection circuit 26 to these other elements, which are numbered here as in Fig. 1 for the sake of convenience. In the embodiment shown, ESD detection circuit 26 includes a capacitor 32 and a resistor 33 coupled in series.
A junction of capacitor 32 and resistor 33, i.e., the node labeled 'Event', is used as an input to OR gate 31. Capacitor 32 in the embodiment shown is coupled between the Event node and the global voltage supply node. Resistor 33 is coupled between the Event node and the global return voltage node. In the absence of an ESD event, the Event node may be decoupled from the voltage present on the global voltage supply node Vdd by capacitor 32. Thus the Event node may be pulled toward the voltage present on the global voltage return node, Vss-Global, through resistor 33. Furthermore, if Power On 1 is not asserted by power control unit 28, then the output of OR gate 31 may be low in the absence of an ESD event. The gate terminal of power-gating transistor 25, which is coupled to the output of OR gate 31, is thus low in the absence of an ESD event when Power On 1 is not asserted. In this embodiment, power-gating transistor 25 is an NMOS (n-channel metal oxide semiconductor) transistor that may activate responsive to a logic high voltage on its gate terminal. Accordingly, power-gating transistor 25 is inactive when the output of OR gate 31 is low. When an ESD event occurs, the voltage difference between Vdd and Vss-Global may increase rapidly. Since the voltage across a capacitor cannot change instantaneously, the amount of current flowing through resistor 33 may increase rapidly in response to the ESD event. This sudden rush of current through resistor 33 may thus increase the corresponding voltage drop between the Event node and Vss-Global. If the voltage drop is sufficient, OR gate 31 may interpret the voltage present on the Event node as a logic 1. Responsive thereto, OR gate 31 may assert a logic 1 (i.e., a logic high voltage in this case), thereby causing the activation of power-gating transistor 25. As previously noted, the activation of power-gating transistor 25 may effectively couple the local voltage return node (Vss-Local1 in this example) to Vss-Global. Thus, the activation of power-gating transistor 25 responsive to detection of the ESD event may complete a discharge path between Vdd and Vss-Global through the power domain (power domain 21 in this example). Providing a discharge path through power domain 21 when it is otherwise inactive may thus prevent ESD damage to the circuitry contained therein (e.g., functional unit 24).
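For illustration only, the gating behavior described above may be modeled at a purely Boolean level as follows (a simplified stand-in for the analog circuit of Fig. 2; signal names follow the figures):

```python
# Boolean sketch of ESD detection circuit 26: the 'Detect/On' output is the
# OR of the power control unit's Power On signal and the RC-derived Event
# signal, and it drives the gate of NMOS power-gating transistor 25.

def detect_on(power_on: bool, event: bool) -> bool:
    """OR gate 31: asserted by either a normal power-up or an ESD event."""
    return power_on or event

def power_gate_active(detect_on_signal: bool) -> bool:
    """NMOS transistor 25 conducts (couples Vss-Local1 to Vss-Global)
    when its gate sees a logic-high Detect/On signal."""
    return detect_on_signal

# Domain powered off, no ESD event: transistor inactive, domain isolated.
assert not power_gate_active(detect_on(power_on=False, event=False))
# ESD event while the domain is powered off: transistor activates and
# completes the discharge path through capacitance 23.
assert power_gate_active(detect_on(power_on=False, event=True))
```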
Fig. 3 is a diagram illustrating another embodiment of an IC having a plurality of power domains which each utilize a power-gating transistor to provide an ESD discharge path. In this particular embodiment, IC 40 includes power domains 41 and 42. Each of power domains 41 and 42 includes a corresponding functional unit 24, corresponding local decoupling capacitors 23, corresponding power-gating transistors 45, and corresponding ESD detection circuits 46. Power control unit 28 and decoupling capacitance 27 in the embodiment shown are analogous to like-numbered elements shown in Figs. 1 and 2. In the embodiment shown, IC 40 includes a global voltage supply node, Vdd-Global, and a global voltage return node, Vss-Global. Power domains 41 and 42 each include local voltage supply nodes, Vdd-Local1 and Vdd-Local2, respectively. Corresponding power-gating transistors 45 are coupled between their respective local voltage supply nodes and the global supply voltage node Vdd. In contrast to the embodiments illustrated in Figs. 1 and 2, power-gating transistors 45 are PMOS (p-channel metal oxide semiconductor) transistors. Moreover, referring momentarily to Fig. 4, ESD detection circuit 46 utilizes NOR gate 57 instead of the OR gate 31 utilized in ESD detection circuit 26 of Figs. 1 and 2. Thus, when an ESD event occurs, a logic 1 detected on the Event node may cause NOR gate 57 to drive its output low and thus activate the corresponding power-gating transistor 45 coupled thereto. When a power-gating transistor 45 is activated in either of power domains 41 and 42, the corresponding local voltage supply node may effectively be coupled to the global voltage supply node. Thus, a discharge path for current from an ESD event may be provided through the active power-gating transistor 45 and the corresponding local decoupling capacitor 23, which is coupled between Vss-Global and the local voltage supply node for that particular power domain. Power-gating transistor 45 for each of power domains 41 and 42 may also be activated responsive to ESD detection circuit 46 receiving a corresponding signal from power control unit 28 (e.g., Power On 1 to ESD detection circuit 46 of power domain 41). Thus, the assertion of the Power On 1 signal provided to ESD detection circuit 46 of power domain 41 may cause NOR gate 57 to drive its output low and thus activate the corresponding power-gating transistor 45. Operation of ESD circuit 46 based on receiving an asserted Power On 2 signal from power control unit 28 may be the same. Fig. 5 is a diagram illustrating another embodiment of an IC having a plurality of power domains which each utilize multiple power-gating transistors to provide an ESD discharge path. IC 50 in the embodiment shown is similar to IC 10 shown in Fig. 1, with like-numbered elements performing the same functions. However, power domains 21 and 22 in IC 50 each include multiple instances of power-gating transistor 25, instead of the single power-gating transistor 25 per power domain of IC 10. Implementing multiple instances of a power-gating transistor may in some embodiments allow these transistors to be smaller than in embodiments wherein only a single power-gating transistor is utilized. In the embodiment shown, power domains 21 and 22 each include an additional transistor 55 that is coupled in parallel with decoupling capacitor 23. Each instance of transistor 55 includes a gate terminal coupled to its respective ESD circuit 26. More particularly, the gate terminals of each of transistors 55 may be coupled to the Event node, shown in the embodiment of Fig. 2, of the corresponding ESD circuit 26. Accordingly, transistors 55 in the embodiment shown are configured to be activated responsive only to an ESD event, in contrast to transistors 25, which may be activated to provide power to their respective power domains in addition to being activated responsive to an ESD event. When active, a given instance of transistor 55 may provide an additional discharge path between Vdd and the respective Vss-Local node, in parallel with the corresponding capacitor 23.
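For illustration only, the differing activation conditions of transistors 25 and 55 in the Fig. 5 embodiment may be modeled as follows (Boolean stand-ins for the analog behavior; names follow the figures):

```python
# Sketch contrasting the two activation conditions described above:
# power-gating transistors 25 activate on Power On OR an ESD event,
# while transistor 55 activates only on the ESD Event signal.

def transistor_25_active(power_on: bool, event: bool) -> bool:
    # Gate driven by the Detect/On output of ESD circuit 26.
    return power_on or event

def transistor_55_active(event: bool) -> bool:
    # Gate tied to the Event node: conducts only during an ESD event,
    # providing an extra discharge path in parallel with capacitor 23.
    return event

for power_on, event in [(False, False), (True, False), (False, True)]:
    print(f"power_on={power_on}, event={event}: "
          f"T25={transistor_25_active(power_on, event)}, "
          f"T55={transistor_55_active(event)}")
```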
ESD circuits may be coupled to such power-gating transistors, and may enable their activation responsive to an ESD event in order to provide a discharge path and thus prevent damage to their respective power domains. The power-gating transistors may also be used to independently apply or remove power from their corresponding power domains. Method Flow: Fig. 6 is a flow diagram illustrating one embodiment of a method for providing an ESD discharge path in an IC. In the embodiment shown, method 60 begins with the detection of an ESD event (block 62). Responsive to the detection of an ESD event, an indication is generated (block 64) by one or more ESD detection circuits, each of which may be associated with a particular power domain. Each ESD detection circuit may be coupled to a corresponding power-gating transistor for the particular power domain. For a given power domain, if its respective power-gating transistor is not active when the ESD event is detected (block 66, no), then that power-gating transistor may be activated (block 68) in order to provide a discharge path for the current generated by the ESD event. If some or all of the power-gating transistors associated with corresponding power domains are already active when the ESD event occurs (block 66, yes), then no further action need be taken for those power domains, as a discharge path is already provided through the active power-gating transistors as well as through the corresponding decoupling capacitance. It is noted that since the power-gating transistors for each domain may be activated or de-activated independently of those associated with other power domains, some may be active at a given time while others are inactive. Thus, at times it may be necessary to momentarily activate otherwise inactive power-gating transistors when an ESD event occurs while others are already active.
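The per-domain decision flow of method 60 can be summarized in a few lines. The following is a minimal sketch, assuming a hypothetical PowerDomain record and an on_esd_event handler (neither name comes from the description above); it is offered only to make the block 62-68 branching concrete, not as the method's implementation:

from dataclasses import dataclass

@dataclass
class PowerDomain:
    name: str
    gate_active: bool  # True if the power-gating transistor is already on

def on_esd_event(domains: list[PowerDomain]) -> None:
    # Blocks 62/64: an ESD event has been detected and indicated to each domain.
    for d in domains:
        if not d.gate_active:      # block 66, "no" branch
            d.gate_active = True   # block 68: activate to complete a discharge path
        # block 66, "yes" branch: already active; a discharge path already exists

domains = [PowerDomain("domain 21", False), PowerDomain("domain 22", True)]
on_esd_event(domains)
assert all(d.gate_active for d in domains)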
Computer Accessible Storage Medium: Turning next to FIG. 7, a block diagram is shown of a computer accessible storage medium 300 including a database representative of any one (or all) of ICs 10, 40, or 50 as discussed above. Generally speaking, a computer accessible storage medium may include any non-transitory storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium may include storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray. Storage media may further include volatile or non-volatile memory media such as RAM (e.g., synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, Flash memory, and non-volatile memory (e.g., Flash memory) accessible via a peripheral interface such as the Universal Serial Bus (USB) interface, etc. Storage media may include microelectromechanical systems (MEMS), as well as storage media accessible via a communication medium such as a network and/or a wireless link. Generally, the database or other type of data structure representative of IC 10, 40, and/or 50 carried on the computer accessible storage medium 300 may be a database which can be read by a program and used, directly or indirectly, to fabricate the hardware comprising the described IC(s). For example, the database may be a behavioral-level description or register-transfer level (RTL) description of the hardware functionality in a high level design language (HDL) such as Verilog or VHDL. The description may be read by a synthesis tool, which may synthesize the description to produce a netlist comprising a list of gates and other circuits from a synthesis library. The netlist comprises a set of gates and other circuitry which also represent the functionality of the hardware comprising the described IC(s). The netlist may then be placed and routed to produce a data set describing geometric shapes to be applied to masks. The masks may then be used in various semiconductor fabrication steps to produce a semiconductor circuit or circuits corresponding to the described IC(s). Alternatively, the database on the computer accessible storage medium 300 may be the netlist (with or without the synthesis library) or the data set, as desired. While the computer accessible storage medium 300 carries a representation of one or more of ICs 10, 40, and/or 50, other embodiments may carry a representation of any portion of these ICs, as desired, including any set of agents (e.g., ESD circuit 26, power control unit 28, functional unit 24, etc.), portions of an agent (e.g., OR gate 31), and so forth. While the present invention has been described with reference to particular embodiments, it will be understood that the embodiments are illustrative and that the scope of the invention is not so limited. Any variations, modifications, additions, and improvements to the embodiments described are possible. These variations, modifications, additions, and improvements may fall within the scope of the invention as detailed within the following claims.
An intellectual property (IP) block design methodology for three-dimensional (3D) integrated circuits may comprise folding at least one two-dimensional (2D) block that has one or more circuit components into a 3D block that has multiple tiers, wherein the one or more circuit components in the folded 2D block may be distributed among the multiple tiers in the 3D block. Furthermore, one or more pins may be duplicated across the multiple tiers in the 3D block and the one or more duplicated pins may be connected to one another using one or more intra-block through-silicon-vias (TSVs) placed inside the 3D block.
1. A method for designing an integrated circuit, comprising: folding a two-dimensional (2D) block with one or more circuit components into a three-dimensional (3D) block with multiple layers, wherein the one or more circuit components in the folded 2D block are distributed among the multiple layers in the 3D block; determining shared space available across the plurality of layers in the 3D block; placing one or more input/output (I/O) pin locations in the shared space; replicating the one or more I/O pin locations across the plurality of layers in the 3D block; and using one or more intra-block TSVs placed inside the 3D block to connect the replicated I/O pin locations, wherein a first I/O pin at one of the replicated I/O pin locations is coupled to two or more circuit components in the same layer. 2. The method of claim 1, further comprising: connecting one or more other blocks in the integrated circuit to one of the replicated I/O pin locations according to the layer locations associated with the one or more other blocks. 3. The method of claim 2, further comprising: performing 3D floorplanning to package the 3D block and the one or more other blocks into a full-chip design associated with the integrated circuit. 4. The method of claim 3, wherein placing the one or more I/O pin locations comprises: selecting the one or more I/O pin locations to replicate across the multiple layers in the 3D block to minimize bus length and footprint in the full-chip design. 5. The method of claim 1, wherein the one or more intra-block TSVs provide vertical connections between the replicated I/O pin locations. 6. The method of claim 1, wherein the 2D block and the 3D block comprise intellectual property (IP) blocks. 7. The method of claim 1, wherein folding the 2D block into the 3D block further comprises: partitioning the 2D block into the plurality of layers; and re-implementing the placement and routing associated with the one or more circuit components in the 2D block to distribute the one or more circuit components in the folded 2D block across the plurality of layers in the 3D block and to interconnect the one or more circuit components distributed among the plurality of layers in the 3D block. 8. The method of claim 7, further comprising: determining the shared space available across the plurality of layers in the 3D block according to the re-implemented placement and the re-implemented routing; and placing the replicated I/O pin locations according to the shared space available across the plurality of layers in the 3D block. 9. A three-dimensional (3D) intellectual property block, comprising: multiple layers; one or more circuit components distributed across the plurality of layers; and one or more input/output (I/O) pin locations replicated across the plurality of layers, wherein the one or more I/O pin locations are placed in a shared space available across the plurality of layers in the 3D intellectual property block, and wherein a first I/O pin at one of the replicated I/O pin locations is coupled to two or more circuit components in the same layer. 10. The 3D intellectual property block of claim 9, further comprising: one or more TSVs placed inside the 3D intellectual property block, wherein the one or more TSVs connect the one or more I/O pin locations replicated across the plurality of layers. 11. The 3D intellectual property block of claim 10, wherein the one or more TSVs provide vertical connections between the replicated I/O pin locations. 12.
The 3D intellectual property block of claim 9, wherein the one or more I/O pin locations replicated across the plurality of layers are selected to minimize bus length and occupied area. 13. The 3D intellectual property block of claim 9, wherein the 3D intellectual property block comprises a two-dimensional (2D) intellectual property block that has been folded into the plurality of layers and re-implemented to distribute and interconnect the one or more circuit components among the plurality of layers in the 3D intellectual property block. 14. A three-dimensional (3D) integrated circuit, comprising: at least one 3D block having one or more circuit components distributed across multiple layers and one or more input/output (I/O) pin locations replicated across the multiple layers; and at least one additional block located on one of the plurality of layers, wherein the at least one additional block is connected to one of the I/O pin locations replicated in the at least one 3D block according to the one of the plurality of layers on which the at least one additional block is positioned, wherein the one or more I/O pin locations are placed in a shared space available across the plurality of layers in the at least one 3D block, and wherein a first I/O pin at one of the replicated I/O pin locations is coupled to two or more circuit components in the same layer. 15. The 3D integrated circuit of claim 14, wherein the at least one 3D block further comprises one or more TSVs connecting the one or more I/O pin locations replicated across the plurality of layers in the at least one 3D block. 16. The 3D integrated circuit of claim 14, wherein the one or more I/O pin locations replicated across the plurality of layers in the at least one 3D block are selected to minimize the bus length and footprint associated with the 3D integrated circuit. 17. The 3D integrated circuit of claim 14, wherein the at least one 3D block comprises a two-dimensional (2D) block that has been folded into the plurality of layers and re-implemented to distribute and interconnect the one or more circuit components among the plurality of layers. 18. The 3D integrated circuit of claim 14, wherein at least one of the at least one 3D block or the at least one additional block comprises an intellectual property (IP) block. 19. The 3D integrated circuit of claim 14, wherein the at least one additional block comprises one or more of a two-dimensional (2D) block or a second 3D block.
IP block design with folded blocks and replicated pins for 3D integrated circuits. Technical Field: The present invention relates generally to integrated circuits and, in particular, to generating intellectual property (IP) blocks in 3D integrated circuit designs for low power and high performance applications. Background: In electronic design automation, an integrated circuit (IC) floorplan schematically represents a tentative arrangement of the major functional blocks associated with an IC. In the modern electronic design process, floorplans are typically generated during a floorplanning stage, an early stage in a layered approach to chip design. The floorplan takes into account certain geometric constraints in the design, including, for example, the location of bond pads for off-chip connections. Furthermore, in electronic design, an intellectual property (IP) block (or IP core) refers to a reusable unit of logic, cell, or chip layout design that is considered the intellectual property of a particular party. Accordingly, authorized parties and/or parties who own the intellectual property rights (e.g., patents, source code copyrights, trade secrets, know-how, etc.) that exist in the design may use IP blocks as building blocks within an IC design. In general, there may be various advantages to using three-dimensional (3D) IP blocks in conjunction with 2D IP blocks to improve the overall quality of a full-chip 3D IC design. For example, a 3D semiconductor device (or stacked IC device) may include two or more semiconductor devices that are vertically stacked and thus occupy less space than two or more conventionally arranged semiconductor devices. A stacked IC device is a single integrated circuit built by stacking vertically interconnected silicon wafers and/or ICs so that they appear as a single device. Conventionally, stacked semiconductor devices are wired together using input/output (I/O) ports at the perimeter of the device and/or in areas spanning the device, and these I/O ports slightly increase the length and width of the assembly. In some newer 3D stacking approaches, a technology called through-silicon stacking (TSS) uses through-silicon vias (TSVs) to enable stacked IC devices to pack a large amount of functionality into a small footprint by forming vertical connections through the body of the semiconductor device, completely or partially replacing edge routing. However, the mismatch between device scaling and interconnect performance has increased exponentially and is expected to continue to increase further. This exponential increase in the mismatch between device and interconnect performance drives designers to use techniques such as re-buffering of global interconnects, which increase chip area and power consumption. Therefore, current 3D methods that focus on assembling 2D blocks into 3D stacks only help to reduce inter-block nets (where applicable) without exploiting 3D integration within blocks. On the other hand, starting from an existing 2D IP block, a technique called "block folding" can perform layer partitioning and re-implement placement and routing for all layers under the same footprint in order to create a 3D IP block that can then be used to build the final 3D IC layout.
However, the prior art utilizing block folding does not address how to place I/O pins in a folded 3D IP block, which may have a major impact on the final 3D IC design quality in terms of wire length, area, and the number of inter-block connections. SUMMARY OF THE INVENTION: The following presents a simplified overview of one or more aspects and/or embodiments disclosed herein. Accordingly, the following summary should not be construed as an extensive overview of all contemplated aspects and/or embodiments, nor should the following summary be considered to identify key or critical elements of all contemplated aspects and/or embodiments or to delineate the scope associated with any particular aspect and/or embodiment. Thus, the sole purpose of the following summary is to present some concepts related to one or more aspects and/or embodiments disclosed herein in a simplified form before the detailed description presented below. According to various exemplary aspects, an intellectual property (IP) block design method for a three-dimensional (3D) integrated circuit may include folding at least one two-dimensional (2D) block having one or more circuit components into a 3D block having multiple layers, wherein the one or more circuit components in the folded 2D block may be distributed among the plurality of layers in the 3D block. Additionally, one or more pins may be replicated across the multiple layers in the 3D block, and the one or more replicated pins may be connected to one another using one or more through-silicon vias (TSVs) placed inside the 3D block. Furthermore, in various embodiments, one or more other blocks in the 3D integrated circuit may each be connected to one of the replicated pins according to their associated layer positions, and the 3D block and the one or more other blocks may then be packaged into a final full-chip design associated with the integrated circuit, where the one or more pins replicated across the multiple layers in the 3D block may be selected to minimize bus length and footprint in the full-chip design and/or according to the shared space available across the multiple layers in the 3D block. According to various exemplary aspects, a 3D intellectual property block may include multiple layers, one or more circuit components distributed across the multiple layers, and one or more pins replicated across the multiple layers. For example, in various embodiments, the 3D intellectual property block may comprise a 2D intellectual property block that has been folded into multiple layers and re-implemented to distribute and interconnect the one or more circuit components among the multiple layers in the 3D intellectual property block. Furthermore, in various embodiments, the 3D intellectual property block may include one or more TSVs placed inside the 3D block to connect the one or more pins replicated across the multiple layers and to provide vertical connections between the one or more replicated pins. According to various exemplary aspects, a 3D integrated circuit can include at least one 3D block having one or more circuit components distributed across multiple layers and one or more pins replicated across the multiple layers, and at least one additional block located on one of the multiple layers, wherein the at least one additional block is connected to one of the replicated pins in the at least one 3D block according to the layer on which the at least one additional block is positioned.
For example, in various embodiments, the at least one 3D block may comprise a 2D block that has been folded into multiple layers and re-implemented to distribute and interconnect one or more circuit components among the multiple layers. Furthermore, in various embodiments, the at least one 3D block may additionally include one or more intra-block TSVs connecting the one or more replicated pins, wherein the replicated pins may be selected to minimize the bus length and footprint associated with the 3D integrated circuit. Other objects and advantages associated with the various aspects and/or embodiments disclosed herein will be apparent to those skilled in the art based on the drawings and detailed description. Description of Drawings: A more complete understanding of aspects of the invention and many of their attendant advantages can be readily obtained by reference to the following detailed description, taken in conjunction with the accompanying drawings, which are presented for purposes of illustration only and without limiting the invention, and in which: FIG. 1 illustrates an exemplary multi-layer three-dimensional (3D) integrated circuit (IC) floorplan implementing one or more two-dimensional (2D) blocks in conjunction with one or more 3D blocks, in accordance with various aspects. FIG. 2 illustrates an exemplary method for folding one or more existing 2D and/or 3D blocks, which can subsequently be packaged into a final multi-layer 3D IC layout according to power, performance, and other design quality goals associated with the overall multi-layer 3D IC layout, in accordance with various aspects. FIG. 3 illustrates an exemplary method for automatically floorplanning a multi-layer 3D IC layout combining one or more 2D blocks and one or more 3D blocks to improve the quality associated with a full-chip multi-layer 3D IC design, in accordance with various aspects. FIG. 4 illustrates an exemplary method for pin assignment in a multi-layer 3D IC combining one or more 2D blocks with one or more 3D blocks, in accordance with various aspects. FIG. 5 illustrates an exemplary method for replicating pin assignments in a multi-layer 3D block produced from folding an existing 2D block, in accordance with various aspects. FIG. 6 illustrates an exemplary multi-layer 3D block with replicated pins that can be produced from folding an existing 2D block, in accordance with various aspects. FIGS. 7A-7C illustrate an exemplary 3D load store unit (LSU) with replicated pins, in accordance with various aspects. Detailed Description: Various aspects are disclosed in the following description and the associated drawings to illustrate examples related to specific exemplary embodiments. Alternative embodiments will be apparent to those skilled in the art after reading this disclosure, and can be constructed and practiced without departing from the scope or spirit of this disclosure. Additionally, well-known elements will not be described in detail or may be omitted so as not to obscure the relevant details of the aspects and embodiments disclosed herein. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Likewise, the term "embodiments" does not require that all embodiments include the discussed feature, advantage, or mode of operation. The terminology used herein describes specific embodiments only and should not be construed as limiting any embodiments disclosed herein. As used herein, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, many aspects are described in terms of sequences of actions to be performed by, e.g., elements of a computing device. It will be appreciated that the various actions described herein may be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of the two. Additionally, the sequences of actions described herein may be considered to be fully embodied within any form of computer-readable storage medium having stored thereon a corresponding set of computer instructions that, when executed, would cause an associated processor to perform the functionality described herein. Accordingly, various aspects of the present invention may be embodied in several different forms, all of which are intended to be within the scope of the claimed subject matter. Additionally, for each aspect and/or embodiment described herein, the corresponding form of any such aspect and/or embodiment may be described herein as, for example, "logic configured to" perform the described action. FIG. 1 illustrates an exemplary multi-layer three-dimensional (3D) integrated circuit (IC) floorplan 110 implementing one or more two-dimensional (2D) blocks in conjunction with one or more 3D blocks, according to various embodiments. More specifically, the overall multi-layer 3D IC floorplan 110 is 3D because the floorplan 110 includes a first layer 112, a second layer 114, and a 3D block 130 spanning the first layer 112 and the second layer 114. Furthermore, as shown in FIG. 1, the remaining blocks 120a, 120b, 120c, 120d, 120e in the 3D IC floorplan 110 are implemented in 2D and distributed between the first layer 112 and the second layer 114 (i.e., the 2D blocks 120a, 120b are implemented in the first layer 112, while the 2D blocks 120c, 120d, 120e are implemented in the second layer 114). Furthermore, arrows 132 in FIG. 1 may represent a via network having one or more vias (e.g., through-silicon vias (TSVs)), which may provide communication paths in the multi-layer 3D IC 110 (e.g., in and between at least the first layer 112 and the second layer 114). In various embodiments, the multi-layer 3D IC floorplan 110 shown in FIG. 1 may be built using a suitable method for folding one or more existing 2D blocks and/or one or more existing 3D blocks to construct the final multi-layer 3D IC layout 110. For example, according to various embodiments, FIG. 2 illustrates an exemplary method 200 for folding one or more existing 2D blocks and/or 3D blocks, which may then be packaged together into a final multi-layer 3D IC layout according to power, performance, and other design quality goals associated with the overall multi-layer 3D IC layout (e.g., 3D IC layout 110 shown in FIG. 1).
In general, the method 200 shown in FIG. 2 may be applied with respect to individual blocks, and a 3D floorplan may then be implemented to package multiple blocks (including any individual blocks folded according to the method 200 shown in FIG. 2) into the final multi-layer 3D IC design. In various embodiments, as described in further detail herein, the method 200 shown in FIG. 2 may generally re-characterize or otherwise re-implement an existing individual 2D block and/or an existing individual 3D block into multiple corresponding portions, which can then be evaluated against an overall design quality goal associated with the overall 3D IC design (e.g., whether or not the individual block improves the power and performance envelope associated with the overall multi-layer 3D IC after being split and folded into a multi-layer 3D block). For example, because certain individual blocks may perform better when folded across multiple layers, the method 200 shown in FIG. 2 may generally perform layer partitioning on an individual block and re-implement placement and routing for all layers under the same footprint, thereby expanding and re-implementing the individual block for the final multi-layer 3D IC design, provided that the folded block is superior to the constituent block that existed prior to the folding. In various embodiments, method 200 may begin with an initial netlist corresponding to a particular individual block at 210, where the individual block initial netlist may include one or more existing 2D blocks and/or one or more existing 3D blocks (e.g., corresponding to Boolean algebraic representations of logic functions implemented as generic gates or process-specific standard cells). Furthermore, in the context of an overall 3D IC layout, an individual block may have an initial layer count greater than zero and less than or equal to N, where N represents the total number of layers in the generally fixed overall 3D IC layout (e.g., up to a total of four layers). Thus, the initial layer count associated with an individual block may fall in the range between one and N, where an individual block occupying one layer may be considered a 2D block and an individual block occupying more than one layer may be considered a 3D block. Thus, in an attempt to fold an individual block, the initial layer count associated with the individual block may be incremented at 220. For example, the layer count associated with an individual block can vary between one and N because the overall 3D IC layout has N total layers and adding one or more additional layers is often very expensive and not recommended. Thus, at 220, the increased layer count associated with the individual block may be greater than one and less than N+1 (i.e., greater than or equal to two and less than or equal to N, such that the individual block occupies multiple layers, but not more layers than the overall 3D IC). In various embodiments, the individual block associated with the initial netlist may then be re-implemented at 230, where re-implementing the individual block may include dividing the initial netlist across the multiple layers, re-implementing placement and routing in each layer under the same footprint, and inserting one or more vias (e.g., high density inter-layer vias).
In various embodiments, the quality associated with the divided and re-implemented (i.e., folded) block may then be evaluated at 240 relative to the overall design quality goal associated with the entire 3D IC to determine whether folding the individual block across the additional layer added at 220 improves overall 3D IC design quality. For example, the design quality goal may include a weighted sum of total silicon area, timing, and power associated with the entire 3D IC, although those skilled in the art will understand that other suitable design quality goals may be evaluated at 240. Additionally, since the divided (folded) blocks tend to be much smaller than the entire 3D IC design, post-layout timing, power, and area values can be used to evaluate the overall 3D IC design quality goal at 240 to increase accuracy. In various embodiments, at 250, it may be determined whether the quality associated with the folded individual block is satisfactory relative to the overall 3D IC design quality target (i.e., whether folding and re-implementing the individual block across the additional layer improves the overall 3D IC design quality). If so, the folded individual block can be added at 260 to the block collection of the overall 3D IC layout, where the block collection will typically include multiple 2D and/or 3D blocks packaged into the final 3D IC. However, in response to determining at 250 that the quality associated with the folded individual block is not satisfactory relative to the overall 3D IC design quality goal, the folded block produced at steps 220 and 230 may not be added to the set of blocks for packaging into the final 3D IC, since the folding does not improve the overall 3D IC design quality. Thus, in the event that the quality associated with the folded individual block is deemed unsatisfactory at 250, other methods for folding the block may be considered at 270. For example, one option may be to add more layers at 220 and then retry the folding at 230 to assess whether adding more layers results in a folded block that improves the overall 3D IC design quality (unless the unsatisfactory folded block already uses N layers, in which case no additional layers can be added without exceeding the total layer count N in the overall 3D IC). Alternatively, another option may be to try a different partition with the same layer count at 230. In yet another alternative, method 200 may stop if folding the individual block does not improve the overall 3D IC design quality, in which case the initial block design provided at 210 may be used in the final 3D IC, as the initial block can be considered more satisfactory with respect to the overall 3D IC design quality than the attempted folded block. Thus, method 200 may generally add an individual block to the block set of the overall 3D IC layout depending on whether the individual block improves the overall 3D IC design quality when folded across additional layers, whereby the set of blocks eventually packaged into the final 3D IC may be optimized according to the overall 3D IC design quality goals, either by folding individual blocks across additional layers or by using the original blocks.
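For concreteness, the accept/reject loop of method 200 can be sketched as follows. This is a minimal sketch under stated assumptions: BlockImpl, cost, and the reimplement callback are hypothetical stand-ins for a real block database, for the weighted area/timing/power objective evaluated at 240, and for the partition-plus-place-and-route step at 230, respectively; none of these names come from the description above.

from dataclasses import dataclass

@dataclass(frozen=True)
class BlockImpl:
    layer_count: int
    area: float    # silicon area after place-and-route
    timing: float  # critical-path delay
    power: float

def cost(b: BlockImpl, w=(1.0, 1.0, 1.0)) -> float:
    # Block 240: weighted sum of area, timing, and power (lower is better).
    return w[0] * b.area + w[1] * b.timing + w[2] * b.power

def maybe_fold(initial: BlockImpl, reimplement, n_layers: int) -> BlockImpl:
    # Sketch of blocks 210-270: start from the initial design, try successively
    # higher layer counts (never exceeding the N layers of the overall 3D IC),
    # and keep a folded version only when it improves the quality objective.
    best = initial                                # block 210
    layers = initial.layer_count
    while layers < n_layers:                      # block 220
        layers += 1
        candidate = reimplement(initial, layers)  # block 230
        if cost(candidate) < cost(best):          # blocks 240/250
            best = candidate                      # block 260
    return best                                   # original kept if no fold wins

# Toy usage: folding to two layers roughly halves area at a small power cost.
demo = lambda b, k: BlockImpl(k, b.area / k, b.timing * 0.9, b.power * 1.05)
print(maybe_fold(BlockImpl(1, 100.0, 10.0, 5.0), demo, n_layers=2))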
FIG. 3 illustrates an exemplary method 300 for floorplanning a multi-layer 3D IC layout combining one or more 2D blocks with one or more 3D blocks to improve the quality associated with a full-chip multi-layer 3D IC design, according to various embodiments. More specifically, to build the final multi-layer 3D IC layout, the various blocks (e.g., including the set of blocks produced at block 260 in FIG. 2) may be floorplanned into a multi-layer 3D stack, where each block being floorplanned may begin as a 2D and/or 3D implementation with a different number of layers, timing, power, and area footprint. The objective may be determined by a weighted sum of area footprint, wire length, and delay, but other derived objective functions may be considered depending on the particular design. The output may include (i) a selection for implementing each block in 2D or 3D, and (ii) (x, y, z) coordinates. In various embodiments, the method 300 shown in FIG. 3 may correspond to a simulated annealing architecture that may implement an automated 3D floorplanning engine, where simulated annealing refers to an artificial intelligence technique based on the behavior of cooling metals. In practice, however, 3D floorplanning is often implemented manually rather than using automated floorplanning through simulated annealing. In this context, the method 300 shown in FIG. 3 may provide one exemplary technique for performing 3D floorplanning in an automated manner in order to find solutions to difficult or otherwise intractable combinatorial optimization problems, but those skilled in the art will appreciate that 3D floorplanning techniques for packaging multiple 2D and/or 3D blocks into a final 3D IC layout can also be performed manually, such that the method 300 shown in FIG. 3 is only one possible 3D floorplanning option that may be used with the various aspects and embodiments disclosed herein. For example, in various embodiments, the automatic floorplanning method 300 shown in FIG. 3 may include identifying an initial solution at 310, which may include setting the global parameter T to an initial value T0. Although the global parameter T may generally refer to temperature, T is not necessarily related to physical temperature; rather, T may be a global parameter for controlling the advancement of the simulated annealing based 3D floorplanning engine. In various embodiments, the initial solution may be perturbed at 320 and then evaluated at 330 to determine whether quality of service (QoS) parameters are below optimal levels. For example, in various embodiments, QoS parameters may provide different priorities for different applications, users, or data flows, or may guarantee a certain level of performance of a data flow (e.g., a desired bit rate, delay, jitter, packet loss probability, error rate, etc.). In various embodiments, in response to determining at 330 that the QoS parameters are not below the optimal levels associated therewith, the solution may be accepted at 340 with a probability proportional to T, and method 300 may then proceed to 360. Otherwise, in response to determining at 330 that the QoS parameters are below the optimal levels, the solution may be accepted at 350 before proceeding to 360. In either case, it may be determined at 360 whether the number of moves exceeds the maximum number of moves for a given T, which may be set to Mmax. In response to determining at 360 that the number of moves does not exceed Mmax, method 300 may return to 320, where the solution may be further perturbed. Otherwise, in response to determining at 360 that the number of moves exceeds Mmax, the global parameter T may be decreased at 370, and an evaluation may be performed at 380 to determine whether the decreased T is less than Tmin (e.g., the stopping "temperature"). If so, the method 300 may stop. Otherwise, in response to determining at 380 that the decreased T is not less than Tmin, method 300 may return to 320, where the solution may be further perturbed.
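Method 300 follows the shape of a textbook simulated annealing loop, which can be sketched as follows. The perturbation, cost function, and Metropolis-style acceptance rule below are generic stand-ins rather than the specific blocks 310-380 (the acceptance test at 330-350 above is phrased in terms of QoS parameters), so this is only an illustrative skeleton:

import math, random

def anneal(initial, perturb, cost, t0=1.0, t_min=1e-3, alpha=0.9, m_max=100):
    # Skeleton in the spirit of method 300: initialize T (block 310), perturb
    # (320), accept probabilistically (330-350), cap moves per temperature at
    # Mmax (360), cool T (370), and stop once T falls below Tmin (380).
    solution, t = initial, t0
    while t >= t_min:
        for _ in range(m_max):
            candidate = perturb(solution)
            delta = cost(candidate) - cost(solution)
            # Keep improvements; accept worse moves with a probability that
            # shrinks as T decreases (Metropolis criterion).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                solution = candidate
        t *= alpha
    return solution

# Toy usage: a 1-D cost standing in for the weighted wirelength/area/delay objective.
best = anneal(10.0, lambda x: x + random.uniform(-1, 1), lambda x: abs(x - 3.0))
print(round(best, 2))  # should land near 3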
According to various aspects, FIG. 4 illustrates an exemplary method 400 for pin assignment in a multi-layer 3D IC combining one or more 2D blocks with one or more 3D blocks. More specifically, the netlist associated with an individual block can be evaluated at 410 to determine whether the individual block is a hard macro. In a hard macro, the logical components and the physical paths and wiring between the components are fully specified. Thus, in response to determining at 410 that an individual block is a hard macro, the pin assignment and block design have already been completed, in which case method 400 may stop with respect to that individual block. On the other hand, in response to determining at 410 that the individual block is not a hard macro, the individual block may be considered a soft macro, indicating that the interconnection of the required logical components may have been specified, but the physical wiring has not been. Thus, in response to determining at 410 that the individual block is a soft macro (i.e., not a hard macro), pins may be assigned at 420 on each layer of the individual block. Given the floorplanning solution and the inter-block connectivity, the pin locations in the multi-layer 3D IC layout can then be fixed. Thus, using the pin assignments determined at 420 and the solution for partitioning the block across multiple layers, the 3D block may be implemented at 430, where the partitioning solution may be accomplished using 2D methods, 3D methods, and/or combinations thereof. In general, the block folding method described in further detail above may have application in a 3D implementation technique generally referred to as "monolithic." In a monolithic 3D integrated circuit, electronic components and their connections (e.g., wiring) are built sequentially in layers on a single semiconductor wafer, which is then singulated into 3D ICs. Because each successive layer is fabricated directly on the same wafer, alignment requirements may be eliminated or substantially reduced, thereby resulting in greater integration densities. Additionally, a network of high-density vias can provide communication paths in and between the layers in a monolithic 3D IC. Still further, the block folding method described above can be used to construct new 3D intellectual property (IP) blocks (or 3D IP cores) that can be used in designs built using monolithic 3D integration techniques. Thus, a new 3D IP block can be used as a reusable logic, cell, or chip layout unit within larger designs containing pre-designed 3D IP blocks. In the following description, the block folding method described above is extended to provide exemplary techniques for placing input/output (I/O) pins in a folded 3D IP block generated from an existing 2D IP block. FIG. 5 illustrates an exemplary method 500 for replicating pin assignments in a folded 3D IP block having multiple layers, according to various embodiments. More specifically, an existing 2D IP block can be folded into a 3D IP block at 510, wherein folding the existing 2D IP block can include partitioning the 2D IP block into multiple layers and re-implementing the placement and routing of each layer within the same footprint (e.g., according to the methods shown in FIGS. 1 and 2).
In various embodiments, at 520, one or more pin locations may be allocated in a particular layer in the folded 3D IP block, wherein the pin locations are determined according to the space shared among the multiple layers in the folded 3D IP block so as to minimize bus length and full-chip footprint. Additionally, at 520, one or more pins (e.g., one, some, or all pins) may be selected and replicated in each layer, thereby making the replicated pins available for use in more than one layer, and the replicated pins may be connected vertically using one or more through-silicon vias (TSVs) placed inside the folded 3D IP block or using any other suitable stack of vertical vias inside the folded 3D IP block. Thus, at 530, one or more other 2D and/or 3D blocks in the final full-chip design may be connected to any of the replicated pins depending on their associated layer positions, which may save inter-block TSVs and allow tighter full-chip block-level floorplanning. In various embodiments, at 540, a 3D floorplan may be performed to create the final multi-layer 3D layout associated with the full-chip design, where the 3D floorplanning may generally include packaging the folded 3D IP blocks together with any 2D IP blocks. FIG. 6 illustrates an exemplary multi-layer 3D IP block with replicated pins that can result from folding an existing 2D IP block, which may be formed using the method 500 shown in FIG. 5 and described above, according to various embodiments. More specifically, in various embodiments, an existing 2D IP block 600 can be partitioned about line 605 into a 3D IP block with a top layer 600_top and a bottom layer 600_bot, wherein the top layer 600_top and the bottom layer 600_bot can each be re-implemented within the same footprint with the placement and routing associated with the existing 2D IP block 600. Additionally, the existing 2D IP block may have an I/O pin at location 640 that may be duplicated in the top layer 600_top and the bottom layer 600_bot of the 3D IP block at respective locations 640_top and 640_bot. Thus, the duplicated I/O pins are available at location 640_top and location 640_bot, thereby making the duplicated I/O pins available in both the top layer 600_top and the bottom layer 600_bot. In the folded 3D IP block, the duplicated I/O pins at locations 640_top and 640_bot can be connected vertically using the intra-block TSV 650 inside the folded 3D IP block. Therefore, other 2D and/or 3D blocks in the final full-chip design may be connected to the duplicated pins at location 640_top or 640_bot, depending on the layer positions associated with them. For example, 2D and/or 3D blocks in the top layer 600_top or higher layers in a full-chip layout may be connected to the I/O pin at location 640_top, while 2D and/or 3D blocks in the bottom layer 600_bot or lower layers in a full-chip layout may be connected to the I/O pin at location 640_bot. Therefore, duplicating the I/O pins at locations 640_top and 640_bot allows the I/O pins to be used in more than one layer in the folded 3D IP block so that other blocks (2D and 3D) can be easily connected to the I/O pins, and connecting the duplicated pins vertically using intra-block TSV 650 saves inter-block TSVs and allows tighter full-chip block-level floorplanning.
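The bookkeeping implied by blocks 510-540 of method 500 can be sketched with a small record per folded block. The FoldedBlock structure and its methods below are illustrative assumptions rather than the patent's data model; the sketch only makes concrete the idea that one shared pin site is duplicated on every tier, joined by a single intra-block TSV stack, and that a neighboring block attaches to the copy on its own tier:

from dataclasses import dataclass, field

@dataclass
class FoldedBlock:
    # Sketch of method 500's pin bookkeeping; names are illustrative assumptions.
    tiers: int
    pins: dict = field(default_factory=dict)   # pin name -> shared (x, y) site
    tsvs: list = field(default_factory=list)   # one intra-block TSV stack per replicated pin

    def replicate_pin(self, name, xy):
        # Block 520: place the pin at a site available on every tier, duplicate it
        # on each tier, and record one intra-block TSV stack joining the copies.
        self.pins[name] = xy
        self.tsvs.append(name)

    def connect(self, name, other_block_tier):
        # Block 530: a neighboring block attaches to the copy on its own tier,
        # so no inter-block TSV is needed to reach a copy on another tier.
        assert 0 <= other_block_tier < self.tiers and name in self.pins
        return (name, other_block_tier)

lsu = FoldedBlock(tiers=2)
lsu.replicate_pin("addr0", (12.0, 7.5))
print(lsu.connect("addr0", other_block_tier=1))  # a bottom-tier neighbor uses its own copy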
FIGS. 7A-7C illustrate an exemplary 3D load store unit (LSU) with replicated pins, a multi-layer 3D IP block that may be designed using the method 500 shown in FIG. 5. More specifically, FIG. 7A illustrates an exemplary 2D LSU 700 in a processor core (e.g., in a core of an OpenSPARC T2 microprocessor, which has eight cores and integrates key server functionality on a single chip to provide a "server on a chip" architecture). However, those skilled in the art will understand that the LSU and the OpenSPARC T2 architecture are used herein for illustrative purposes only and that the design principles described herein may be applied in any suitable integrated circuit with foldable 2D IP blocks. As shown in FIG. 7A, the 2D LSU 700 includes different active elements 710, 720, 730 placed at different locations in the 2D LSU 700 and interconnected by suitable wiring. Thus, in various embodiments, the 2D LSU 700 can be partitioned into a 3D LSU having multiple layers including at least a top layer 700_top and a bottom layer 700_bot, wherein the top layer 700_top and the bottom layer 700_bot can each be re-implemented in a smaller footprint, with the placement associated with the active elements 710, 720, 730 and the routing associated with the 2D LSU 700 re-implemented and distributed across the top layer 700_top and the bottom layer 700_bot. Therefore, folding the 2D LSU 700 into a multi-layer 3D LSU can achieve substantial savings with respect to footprint (about 50% smaller), wire length (about 12% smaller), cache (about 10% less), and power consumption (about 7.5% less). Furthermore, as shown in FIG. 7A, the 3D LSU may have some shared space available in the top layer 700_top and the bottom layer 700_bot, where the shared space may correspond to suitable locations for placing TSV landing pads. In various embodiments, as shown in FIG. 7B, one or more I/O pin placements may then be determined to provide communication paths to the different active elements 710, 720, 730 in the top layer 700_top. In particular, the top layer 700_top may generally include different rows, each having different sites that circuit components may occupy. Thus, I/O pin placements may be selected in rows with free or otherwise unoccupied sites, in which I/O pins may be placed to provide communication paths to the different active elements 710, 720, 730. For example, a first I/O pin and a second I/O pin may be placed at corresponding free sites in rows 742_top and 744_top to provide communication paths to active element 710, a third I/O pin may be placed at a free site in row 746_top to provide a communication path to active element 730, and a fourth I/O pin may be placed at a free site in row 748_top to provide a communication path to active element 720. In the bottom layer 700_bot, the I/O pins in the top layer 700_top may be replicated at rows 742_bot, 744_bot, 746_bot, and 748_bot, respectively, but those skilled in the art will understand that one, some, or all of the I/O pins may be selected for replication (e.g., according to design goals that attempt to minimize bus length and full-chip footprint). In any case, the replicated I/O pins are connected vertically using intra-block TSVs (e.g., TSVs placed inside the folded 3D LSU block) so that other 2D and/or 3D blocks can be connected to the 3D LSU through any of the replicated I/O pins. Thus, FIG. 7C illustrates the processor core implementing the 2D LSU 700 of FIG. 7A in comparison to the same processor core implementing the folded 3D LSU from FIGS. 7A and 7B, after 3D floorplanning has been performed to package the folded 3D LSU and one or more other blocks into a final full-chip layout.
As shown therein, the folded 3D LSU enables the footprint, wire length, and power consumption optimizations described above while providing the same functionality as the 2D LSU 700, and readily allows blocks in the top layer 700_top or the bottom layer 700_bot to be connected to the 3D LSU through the I/O pin replication and intra-block TSV design principles described in further detail above. Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the foregoing description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. In addition, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether this functionality is implemented as hardware or software depends on the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as departing from the scope of the present invention. The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration). The methods, sequences, and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in an IoT device.
In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies (e.g., infrared, radio, and microwave), then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies (e.g., infrared, radio, and microwave) are included in the definition of medium. As used herein, disk and disc include CD, laser disc, optical disc, DVD, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps, and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
The present invention provides metal-containing compounds that include at least one β-diketiminate ligand, and methods of making and using the same. In certain embodiments, the metal-containing compounds include at least one β-diketiminate ligand with at least one fluorine-containing organic group as a substituent. In other certain embodiments, the metal-containing compounds include at least one β-diketiminate ligand with at least one aliphatic group as a substituent selected to have greater degrees of freedom than the corresponding substituent in the β-diketiminate ligands of certain metal-containing compounds known in the art. The compounds can be used to deposit metal-containing layers using vapor deposition methods. Vapor deposition systems including the compounds are also provided. Sources for β-diketiminate ligands are also provided.
What is claimed is: 1. A method of forming a metal-containing layer on a substrate, the method comprising: providing a substrate; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group; and contacting the vapor comprising the at least one compound of Formula I with the substrate to form a metal-containing layer on at least one surface of the substrate using a vapor deposition process. 2. The method of claim 1 wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group having 1 to 10 carbon atoms. 3. The method of claim 1 wherein at least one L is selected from the group consisting of a halide, an alkoxide group, an amide group, a mercaptide group, cyanide, an alkyl group, an amidinate group, a guanidinate group, an isoureate group, a [beta]-diketonate group, a [beta]-iminoketonate group, a [beta]-diketiminate group, and combinations thereof. 4. The method of claim 3 wherein the at least one L is a [beta]-diketiminate group having a structure that is the same as that of the [beta]-diketiminate ligand shown in Formula I. 5. The method of claim 3 wherein the at least one L is a [beta]-diketiminate group having a structure that is different than that of the [beta]-diketiminate ligand shown in Formula I. 6. The method of claim 1 wherein at least one Y is selected from the group consisting of a carbonyl, a nitrosyl, ammonia, an amine, nitrogen, a phosphine, an alcohol, water, tetrahydrofuran, and combinations thereof. 7. A method of manufacturing a semiconductor structure, the method comprising: providing a semiconductor substrate or substrate assembly; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group; and directing the vapor comprising the at least one compound of Formula I to the semiconductor substrate or substrate assembly to form a metal-containing layer on at least one surface of the semiconductor substrate or substrate assembly using a vapor deposition process. 8. The method of claim 7 further comprising providing a vapor comprising at least one metal-containing compound different than Formula I, and directing the vapor comprising the at least one metal-containing compound different than Formula I to the semiconductor substrate or substrate assembly. 9. The method of claim 8 wherein the metal of the at least one metal-containing compound different than Formula I is selected from the group consisting of Ti, Ta, Bi, Hf, Zr, Pb, Nb, Mg, Al, and combinations thereof. 10. The method of claim 7 further comprising providing at least one reaction gas. 11. The method of claim 7 wherein the vapor deposition process is a chemical vapor deposition process.
12. The method of claim 7 wherein the vapor deposition process is an atomic layer deposition process comprising a plurality of deposition cycles. 13. A method of forming a metal-containing layer on a substrate, the method comprising: providing a substrate; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl; and contacting the vapor comprising the at least one compound of Formula I with the substrate to form a metal-containing layer on at least one surface of the substrate using a vapor deposition process. 14. The method of claim 13 wherein at least one of R<2>, R<3>, and R<4> is sec-butyl. 15. A method of manufacturing a semiconductor structure, the method comprising: providing a semiconductor substrate or substrate assembly; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl; and directing the vapor comprising the at least one compound of Formula I to the semiconductor substrate or substrate assembly to form a metal-containing layer on at least one surface of the semiconductor substrate or substrate assembly using a vapor deposition process. 16. The method of claim 15 further comprising providing a vapor comprising at least one metal-containing compound different than Formula I, and directing the vapor comprising the at least one metal-containing compound different than Formula I to the semiconductor substrate or substrate assembly. 17. The method of claim 16 wherein the metal of the at least one metal-containing compound different than Formula I is selected from the group consisting of Ti, Ta, Bi, Hf, Zr, Pb, Nb, Mg, Al, and combinations thereof. 18. The method of claim 15 further comprising providing at least one reaction gas. 19. The method of claim 15 wherein the vapor deposition process is a chemical vapor deposition process. 20. The method of claim 15 wherein the vapor deposition process is an atomic layer deposition process comprising a plurality of deposition cycles. 21.
21. A method of forming a metal-containing layer on a substrate, the method comprising: providing a substrate; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl; and contacting the vapor comprising the at least one compound of Formula I with the substrate to form a metal-containing layer on at least one surface of the substrate using a vapor deposition process.

22. The method of claim 21 wherein at least one of R<1> and R<5> is sec-butyl.

23. A method of manufacturing a semiconductor structure, the method comprising: providing a semiconductor substrate or substrate assembly; providing a vapor comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl; and directing the vapor comprising the at least one compound of Formula I to the semiconductor substrate or substrate assembly to form a metal-containing layer on at least one surface of the semiconductor substrate or substrate assembly using a vapor deposition process.

24. The method of claim 23 further comprising providing a vapor comprising at least one metal-containing compound different than Formula I, and directing the vapor comprising the at least one metal-containing compound different than Formula I to the semiconductor substrate or substrate assembly.

25. The method of claim 24 wherein the metal of the at least one metal-containing compound different than Formula I is selected from the group consisting of Ti, Ta, Bi, Hf, Zr, Pb, Nb, Mg, Al, and combinations thereof.

26. The method of claim 23 further comprising providing at least one reaction gas.

27. The method of claim 23 wherein the vapor deposition process is a chemical vapor deposition process.

28. The method of claim 23 wherein the vapor deposition process is an atomic layer deposition process comprising a plurality of deposition cycles.
29. A compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group.

30. The compound of claim 29 wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group having 1 to 10 carbon atoms.

31. The compound of claim 29 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof.

32. The compound of claim 29 wherein at least one L is selected from the group consisting of a halide, an alkoxide group, an amide group, a mercaptide group, cyanide, an alkyl group, an amidinate group, a guanidinate group, an isoureate group, a β-diketonate group, a β-iminoketonate group, a β-diketiminate group, and combinations thereof.

33. The compound of claim 32 wherein the at least one L is a β-diketiminate group having a structure that is the same as that of the β-diketiminate ligand shown in Formula I.

34. The compound of claim 32 wherein the at least one L is a β-diketiminate group having a structure that is different than that of the β-diketiminate ligand shown in Formula I.

35. The compound of claim 29 wherein at least one Y is selected from the group consisting of a carbonyl, a nitrosyl, ammonia, an amine, nitrogen, a phosphine, an alcohol, water, tetrahydrofuran, and combinations thereof.

36. A method of making a metal-containing compound, the method comprising combining components comprising: a ligand source of the formula (Formula III): , a tautomer thereof, or a deprotonated conjugate base or metal complex thereof; optionally a source for an anionic ligand L; optionally a source for a neutral ligand Y; and a metal (M) source; wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group; and wherein the metal (M) source is selected from the group consisting of a Group 2 metal source, a Group 3 metal source, a Lanthanide metal source, and combinations thereof, under conditions sufficient to provide a metal-containing compound of the formula (Formula I): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, and R<5> are as defined above, n represents the valence state of the metal, z is from 0 to 10, and x is from 1 to n.

37. The method of claim 36 wherein the metal (M) source comprises a M(0), a M(II) halide, a M(II) pseudohalide, a M(II) amide, or combinations thereof.

38. The method of claim 36 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof.

39. A precursor composition for a vapor deposition process, the composition comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group.
40. A compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.

41. The compound of claim 40 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof.

42. A method of making a metal-containing compound, the method comprising combining components comprising: a ligand source of the formula (Formula III): , a tautomer thereof, or a deprotonated conjugate base or metal complex thereof; optionally a source for an anionic ligand L; optionally a source for a neutral ligand Y; and a metal (M) source; wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl; and wherein the metal (M) source is selected from the group consisting of a Group 2 metal source, a Group 3 metal source, a Lanthanide metal source, and combinations thereof, under conditions sufficient to provide a metal-containing compound of the formula (Formula I): MYzLn-x wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, and R<5> are as defined above, n represents the valence state of the metal, z is from 0 to 10, and x is from 1 to n.

43. The method of claim 42 wherein the metal (M) source comprises a M(0), a M(II) halide, a M(II) pseudohalide, a M(II) amide, or combinations thereof.

44. The method of claim 42 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof.

45. A precursor composition for a vapor deposition process, the composition comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.
46. A compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.

47. The compound of claim 46 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof.

48. A method of making a metal-containing compound, the method comprising combining components comprising: a ligand source of the formula (Formula III): , a tautomer thereof, or a deprotonated conjugate base or metal complex thereof; optionally a source for an anionic ligand L; optionally a source for a neutral ligand Y; and a metal (M) source; wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl; and wherein the metal (M) source is selected from the group consisting of a Group 2 metal source, a Group 3 metal source, a Lanthanide metal source, and combinations thereof, under conditions sufficient to provide a metal-containing compound of the formula (Formula I): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, and R<5> are as defined above, n represents the valence state of the metal, z is from 0 to 10, and x is from 1 to n.

49. The method of claim 48 wherein the metal (M) source comprises a M(0), a M(II) halide, a M(II) pseudohalide, a M(II) amide, or combinations thereof.

50. The method of claim 48 wherein M is selected from the group consisting of Ca, Sr, Ba, and combinations thereof.

51. A precursor composition for a vapor deposition process, the composition comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.

52. A ligand source of the Formula (III): wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group with the proviso that at least one of the R groups is a fluorine-containing aliphatic group.
53. A method of making a β-diketiminate ligand source, the method comprising combining components comprising: an amine of the formula R<1>NH2; a compound of the formula (Formula IV): , or a tautomer thereof; and an activating agent, under conditions sufficient to provide a ligand source of the formula (Formula III): , or a tautomer thereof, wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group with the proviso that at least one of the R groups is a fluorine-containing aliphatic group.

54. A ligand source of the Formula (III): or a tautomer thereof, wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.

55. A method of making a β-diketiminate ligand source, the method comprising combining components comprising: an amine of the formula R<1>NH2; a compound of the formula (Formula IV): , or a tautomer thereof; and an activating agent, under conditions sufficient to provide a ligand source of the formula (Formula III): , or a tautomer thereof, wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.

56. A ligand source of the Formula (III): or a tautomer thereof, wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.

57. A method of making a β-diketiminate ligand source, the method comprising combining components comprising: an amine of the formula R<1>NH2; a compound of the formula (Formula IV): , or a tautomer thereof; and an activating agent, under conditions sufficient to provide a ligand source of the formula (Formula III): or a tautomer thereof, wherein each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an alkyl group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.

58. A vapor deposition system comprising: a deposition chamber having a substrate positioned therein; and at least one vessel comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group.
59. A vapor deposition system comprising: a deposition chamber having a substrate positioned therein; and at least one vessel comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.

60. A vapor deposition system comprising: a deposition chamber having a substrate positioned therein; and at least one vessel comprising at least one compound of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.
BETA-DIKETIMINATE LIGAND SOURCES AND METAL-CONTAINING COMPOUNDS THEREOF, AND SYSTEMS AND METHODS INCLUDING SAME

This application claims priority to U.S. Patent Application Serial No. 11/169,065, filed June 28, 2005, which is incorporated herein by reference in its entirety.

BACKGROUND

The scaling down of integrated circuit devices has created a need to incorporate high dielectric constant materials into capacitors and gates. The search for new high dielectric constant materials and processes is becoming more important as the minimum size for current technology is practically constrained by the use of standard dielectric materials. Dielectric materials containing alkaline earth metals can provide a significant advantage in capacitance compared to conventional dielectric materials. For example, the perovskite material SrTiO3 has a disclosed bulk dielectric constant of up to 500.

Unfortunately, the successful integration of alkaline earth metals into vapor deposition processes has proven to be difficult. For example, although atomic layer deposition (ALD) of alkaline earth metal diketonates has been disclosed, these metal diketonates have low volatility, which typically requires that they be dissolved in organic solvent for use in a liquid injection system. In addition to low volatility, these metal diketonates generally have poor reactivity, often requiring high substrate temperatures and strong oxidizers to grow a film, which is often contaminated with carbon. Other alkaline earth metal sources, such as those including substituted or unsubstituted cyclopentadienyl ligands, typically have poor volatility as well as low thermal stability, leading to undesirable pyrolysis on the substrate surface. New sources and methods of incorporating high dielectric materials are being sought for new generations of integrated circuit devices.

SUMMARY OF THE INVENTION

The present invention provides metal-containing compounds (i.e., metal-containing complexes) that include at least one β-diketiminate ligand, methods of making and using such compounds, and vapor deposition systems including the same. Certain metal-containing compounds having at least one β-diketiminate ligand are known in the art. In such certain known metal-containing compounds, the β-diketiminate ligand has isopropyl substituents on both nitrogen atoms, or tert-butyl substituents on both nitrogen atoms. See, for example, El-Kaderi et al., Organometallics, 23:4995-5002 (2004). The present invention provides metal-containing compounds (i.e., metal-containing complexes) including at least one β-diketiminate ligand, which can have desirable properties (e.g., one or more of higher vapor pressure, lower melting point, and lower sublimation point) for use in vapor deposition methods.

In certain embodiments, the present invention provides metal-containing compounds having at least one β-diketiminate ligand with at least one fluorine-containing organic group as a substituent. In other certain embodiments, the present invention provides metal-containing compounds having at least one β-diketiminate ligand with at least one aliphatic group as a substituent selected to have greater degrees of freedom than the corresponding substituent in the β-diketiminate ligands of certain metal-containing compounds known in the art.

In one aspect, the present invention provides a method of forming a metal-containing layer on a substrate (e.g., a semiconductor substrate or substrate assembly) using a vapor deposition process.
The method can be useful in the manufacture of semiconductor structures. The method includes: providing a substrate; providing a vapor including at least one compound of the formula (Formula I): and contacting the vapor including the at least one compound of Formula I with the substrate (and typically directing the vapor to the substrate) to form a metal-containing layer on at least one surface of the substrate. The compound of the formula (Formula I) includes at least one β-diketiminate ligand, wherein M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group.

In another aspect, the present invention provides a method of forming a metal-containing layer on a substrate (e.g., a semiconductor substrate or substrate assembly) using a vapor deposition process. The method can be useful in the manufacture of semiconductor structures. The method includes: providing a substrate; providing a vapor including at least one compound of the formula (Formula I): and contacting the vapor including the at least one compound of Formula I with the substrate (and typically directing the vapor to the substrate) to form a metal-containing layer on at least one surface of the substrate. The compound of the formula (Formula I) includes at least one β-diketiminate ligand, wherein M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (preferably an aliphatic moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.

In another aspect, the present invention provides a method of forming a metal-containing layer on a substrate (e.g., a semiconductor substrate or substrate assembly) using a vapor deposition process. The method can be useful in the manufacture of semiconductor structures. The method includes: providing a substrate; providing a vapor including at least one compound of the formula (Formula I): and contacting the vapor including the at least one compound of Formula I with the substrate (and typically directing the vapor to the substrate) to form a metal-containing layer on at least one surface of the substrate.
The compound of the formula (Formula I) includes at least one β-diketiminate ligand, wherein M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (preferably an aliphatic moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.

In another aspect, the present invention provides metal-containing compounds having at least one β-diketiminate ligand, precursor compositions including such compounds, vapor deposition systems including such compounds, and methods of making such compounds. Such metal-containing compounds include those of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n; and each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group with the proviso that at least one of the R groups is a fluorine-containing organic group. The present invention also provides sources for β-diketiminate ligands having a fluorine-containing aliphatic group, and methods of making same, which are useful for making metal-containing compounds having at least one β-diketiminate ligand having a fluorine-containing organic group.

In another aspect, the present invention provides metal-containing compounds having certain β-diketiminate ligands, precursor compositions including such compounds, vapor deposition systems including such compounds, and methods of making such compounds. Such metal-containing compounds include those of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (preferably an aliphatic moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.

In another aspect, the present invention provides metal-containing compounds having certain β-diketiminate ligands, precursor compositions including such compounds, vapor deposition systems including such compounds, and methods of making such compounds.
Such metal-containing compounds include those of the formula (Formula I): wherein: M is selected from the group consisting of a Group 2 metal, a Group 3 metal, a Lanthanide, and combinations thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (preferably an aliphatic moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl. Advantageously, the metal-containing compounds of the present invention can have desirable properties (e.g., one or more of higher vapor pressure, lower melting point, and lower sublimation point) for use in vapor deposition methods.

Definitions

As used herein, formulas of the type: are used to represent pentadienyl-group type ligands (e.g., β-diketiminate ligands) having delocalized electron density that are coordinated to a metal. The ligands may be coordinated to the metal through one, two, three, four, and/or five atoms (i.e., η<1>-, η<2>-, η<3>-, η<4>-, and/or η<5>-coordination modes).

As used herein, the term "organic group" is used for the purpose of this invention to mean a hydrocarbon group that is classified as an aliphatic group, cyclic group, or combination of aliphatic and cyclic groups (e.g., alkaryl and aralkyl groups). In the context of the present invention, suitable organic groups for metal-containing compounds of this invention are those that do not interfere with the formation of a metal oxide layer using vapor deposition techniques. In the context of the present invention, the term "aliphatic group" means a saturated or unsaturated linear or branched hydrocarbon group. This term is used to encompass alkyl, alkenyl, and alkynyl groups, for example. The term "alkyl group" means a saturated linear or branched monovalent hydrocarbon group including, for example, methyl, ethyl, n-propyl, isopropyl, tert-butyl, amyl, heptyl, and the like. The term "alkenyl group" means an unsaturated, linear or branched monovalent hydrocarbon group with one or more olefinically unsaturated groups (i.e., carbon-carbon double bonds), such as a vinyl group. The term "alkynyl group" means an unsaturated, linear or branched monovalent hydrocarbon group with one or more carbon-carbon triple bonds. The term "cyclic group" means a closed ring hydrocarbon group that is classified as an alicyclic group, aromatic group, or heterocyclic group. The term "alicyclic group" means a cyclic hydrocarbon group having properties resembling those of aliphatic groups. The term "aromatic group" or "aryl group" means a mono- or polynuclear aromatic hydrocarbon group. The term "heterocyclic group" means a closed ring hydrocarbon in which one or more of the atoms in the ring is an element other than carbon (e.g., nitrogen, oxygen, sulfur, etc.).

As a means of simplifying the discussion and the recitation of certain terminology used throughout this application, the terms "group" and "moiety" are used to differentiate between chemical species that allow for substitution or that may be substituted and those that do not so allow for substitution or may not be so substituted.
Thus, when the term "group" is used to describe a chemical substituent, the described chemical material includes the unsubstituted group and that group with nonperoxidic O, N, S, Si, or F atoms, for example, in the chain as well as carbonyl groups or other conventional substituents. Where the term "moiety" is used to describe a chemical compound or substituent, only an unsubstituted chemical material is intended to be included. For example, the phrase "alkyl group" is intended to include not only pure open chain saturated hydrocarbon alkyl substituents, such as methyl, ethyl, propyl, tert-butyl, and the like, but also alkyl substituents bearing further substituents known in the art, such as hydroxy, alkoxy, alkylsulfonyl, halogen atoms, cyano, nitro, amino, carboxyl, etc. Thus, "alkyl group" includes ether groups, haloalkyls, nitroalkyls, carboxyalkyls, hydroxyalkyls, sulfoalkyls, etc. On the other hand, the phrase "alkyl moiety" is limited to the inclusion of only pure open chain saturated hydrocarbon alkyl substituents, such as methyl, ethyl, propyl, tert-butyl, and the like.

As used herein, "metal-containing" is used to refer to a material, typically a compound or a layer, that may consist entirely of a metal, or may include other elements in addition to a metal. Typical metal-containing compounds include, but are not limited to, metals, metal-ligand complexes, metal salts, organometallic compounds, and combinations thereof. Typical metal-containing layers include, but are not limited to, metals, metal oxides, metal silicates, and combinations thereof.

As used herein, "a," "an," "the," and "at least one" are used interchangeably and mean one or more than one.

As used herein, the term "comprising," which is synonymous with "including" or "containing," is inclusive, open-ended, and does not exclude additional unrecited elements or method steps.

The terms "deposition process" and "vapor deposition process" as used herein refer to a process in which a metal-containing layer is formed on one or more surfaces of a substrate (e.g., a doped polysilicon wafer) from vaporized precursor composition(s) including one or more metal-containing compounds. Specifically, one or more metal-containing compounds are vaporized and directed to and/or contacted with one or more surfaces of a substrate (e.g., semiconductor substrate or substrate assembly) placed in a deposition chamber. Typically, the substrate is heated. These metal-containing compounds form (e.g., by reacting or decomposing) a non-volatile, thin, uniform, metal-containing layer on the surface(s) of the substrate. For the purposes of this invention, the term "vapor deposition process" is meant to include both chemical vapor deposition processes (including pulsed chemical vapor deposition processes) and atomic layer deposition processes.

"Chemical vapor deposition" (CVD) as used herein refers to a vapor deposition process wherein the desired layer is deposited on the substrate from vaporized metal-containing compounds (and any reaction gases used) within a deposition chamber with no effort made to separate the reaction components.
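The CVD definition just given, and the pulsed CVD and ALD definitions in the paragraphs that follow, differ chiefly in how precursor and reactant exposures are sequenced in the deposition chamber. Purely as an illustrative sketch of that sequencing (the Chamber interface, gas names, and step durations below are hypothetical assumptions, not part of the disclosed methods or apparatus):

```python
# Hypothetical sketch contrasting deposition-mode sequencing; the Chamber
# class and all timings are illustrative assumptions, not disclosed apparatus.

class Chamber:
    def flow(self, *gases, seconds):
        print(f"flow {', '.join(gases)} for {seconds} s")

    def purge(self, seconds):
        print(f"purge with inert carrier gas for {seconds} s")

def simple_cvd(chamber, total_seconds=60):
    # "Simple" CVD: precursor vapor and reaction gas are delivered together,
    # with no effort made to separate the reaction components.
    chamber.flow("precursor vapor", "reaction gas", seconds=total_seconds)

def ald(chamber, cycles=50):
    # ALD: a plurality of consecutive, self-limiting deposition cycles in
    # which purge steps keep precursor and reactant exposures separated.
    for _ in range(cycles):
        chamber.flow("precursor vapor", seconds=1)   # chemisorb a (sub)monolayer
        chamber.purge(seconds=2)                     # remove excess precursor
        chamber.flow("reaction gas", seconds=1)      # convert chemisorbed precursor
        chamber.purge(seconds=2)                     # remove excess reactant/byproducts

if __name__ == "__main__":
    c = Chamber()
    simple_cvd(c)
    ald(c, cycles=2)
```

Pulsed CVD, as defined next, would alternate the two flow steps but without the rigorous purge separation shown in the ALD loop.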
In contrast to a "simple" CVD process that involves the substantial simultaneous use of the precursor compositions and any reaction gases, "pulsed" CVD alternately pulses these materials into the deposition chamber, but does not rigorously avoid intermixing of the precursor and reaction gas streams, as is typically done in atomic layer deposition or ALD (discussed in greater detail below).

The term "atomic layer deposition" (ALD) as used herein refers to a vapor deposition process in which deposition cycles, preferably a plurality of consecutive deposition cycles, are conducted in a process chamber (i.e., a deposition chamber). Typically, during each cycle the precursor is chemisorbed to a deposition surface (e.g., a substrate assembly surface or a previously deposited underlying surface such as material from a previous ALD cycle), forming a monolayer or sub-monolayer that does not readily react with additional precursor (i.e., a self-limiting reaction). Thereafter, if necessary, a reactant (e.g., another precursor or reaction gas) may subsequently be introduced into the process chamber for use in converting the chemisorbed precursor to the desired material on the deposition surface. Typically, this reactant is capable of further reaction with the precursor. Further, purging steps may also be utilized during each cycle to remove excess precursor from the process chamber and/or remove excess reactant and/or reaction byproducts from the process chamber after conversion of the chemisorbed precursor. Further, the term "atomic layer deposition," as used herein, is also meant to include processes designated by related terms such as "chemical vapor atomic layer deposition," "atomic layer epitaxy" (ALE) (see U.S. Patent No. 5,256,244 to Ackerman), molecular beam epitaxy (MBE), gas source MBE, or organometallic MBE, and chemical beam epitaxy when performed with alternating pulses of precursor composition(s), reactive gas, and purge (e.g., inert carrier) gas. As compared to the one cycle chemical vapor deposition (CVD) process, the longer duration multi-cycle ALD process allows for improved control of layer thickness and composition by self-limiting layer growth, and minimizing detrimental gas phase reactions by separation of the reaction components. The self-limiting nature of ALD provides a method of depositing a film on a wide variety of reactive surfaces, including surfaces with irregular topographies, with better step coverage than is available with CVD or other "line of sight" deposition methods such as evaporation or physical vapor deposition (PVD or sputtering).

BRIEF DESCRIPTION OF THE FIGURES

Figure 1 is a perspective view of a vapor deposition system suitable for use in methods of the present invention.

Figure 2 is a graphical representation of degrees of freedom (x-axis) vs. melting point (°C; y-axis) for various metal-containing compounds having at least one β-diketiminate ligand, which illustrates decreasing melting point for increasing degrees of freedom. Degrees of freedom were quantified by a method described by Li et al. in Inorganic Chemistry, 44:1728-1735 (2005), and as further described herein. SDtBK represents a metal-containing compound of Formula I having zero degrees of freedom, wherein M = Sr (n = 2), R<1> = R<5> = tert-butyl, R<2> = R<4> = methyl, R<3> = H, x = 2, and z = 0.
SDiPtBK represents a metal-containing compound of Formula I having 2 degrees of freedom (2 isopropyls), wherein M = Sr (n = 2), R<1> = isopropyl (1 degree of freedom), R<5> = tert-butyl, R<2> = R<4> = methyl, R<3> = H, x = 2, and z = 0. SDiPK represents a metal-containing compound of Formula I having 4 degrees of freedom (4 isopropyls), wherein M = Sr (n = 2), R<1> = R<5> = isopropyl (each isopropyl having 1 degree of freedom), R<2> = R<4> = methyl, R<3> = H, x = 2, and z = 0. SDsBK represents a metal-containing compound of Formula I having 12 degrees of freedom (4 sec-butyls), wherein M = Sr (n = 2), R<1> = R<5> = sec-butyl (each sec-butyl having 3 degrees of freedom), R<2> = R<4> = methyl, R<3> = H, x = 2, and z = 0. The melting point for SDsBK (44-48°C) is disclosed herein in Example 2. The melting points for SDtBK (127-129°C) and SDiPK (87-89°C) have been disclosed in El-Kaderi et al., Organometallics, 23:4995-5002 (2004). The melting point for SDiPtBK (see U.S. Application Serial No. 11/169,082, entitled "UNSYMMETRICAL LIGAND SOURCES, REDUCED SYMMETRY METAL-CONTAINING COMPOUNDS, AND SYSTEMS AND METHODS INCLUDING SAME," filed June 28, 2005) was measured as 109.5°C.

DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS

Certain metal-containing compounds having at least one β-diketiminate ligand are known in the art. In such certain known metal-containing compounds, the β-diketiminate ligand has isopropyl substituents on both nitrogen atoms, or tert-butyl substituents on both nitrogen atoms. See, for example, El-Kaderi et al., Organometallics, 23:4995-5002 (2004). The present invention provides metal-containing compounds (i.e., metal-containing complexes) including at least one β-diketiminate ligand, which can have desirable properties (e.g., one or more of higher vapor pressure, lower melting point, and lower sublimation point) for use in vapor deposition methods. The present invention also provides methods of making and using such metal-containing compounds, and vapor deposition systems including the same.

In one aspect, the present invention provides metal-containing compounds having at least one β-diketiminate ligand with at least one fluorine-containing organic group as a substituent. Such metal-containing compounds including at least one fluorine-containing organic group can provide higher volatility than corresponding metal-containing compounds without a fluorine-containing organic group. Metal-containing compounds having higher volatility can be advantageous in deposition methods (e.g., CVD and ALD).

In another aspect, the present invention provides metal-containing compounds having at least one β-diketiminate ligand with at least one aliphatic group (preferably an aliphatic moiety) having 1 to 5 carbon atoms as a substituent, wherein the aliphatic group is selected to have greater degrees of freedom than the corresponding substituent in the β-diketiminate ligands of certain metal-containing compounds known in the art (i.e., compounds of Formula I wherein R<2> = R<4> = methyl; R<3> = H; and R<1> = R<5> = isopropyl or R<1> = R<5> = tert-butyl). Such metal-containing compounds having at least one β-diketiminate ligand with at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds can have lower melting points and/or sublimation points than the certain known metal-containing compounds.
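The melting-point data recited in the Figure 2 description above make this trend concrete. A minimal sketch collecting only the values disclosed above (the code itself is illustrative and forms no part of the disclosure):

```python
# Degrees of freedom and melting points recited in the Figure 2 description
# (Sr compounds of Formula I with x = 2 and z = 0). Melting points are in
# degrees C; a single measured value was disclosed for SDiPtBK.
figure_2_data = {
    "SDtBK":   {"dof": 0,  "mp_C": (127.0, 129.0)},
    "SDiPtBK": {"dof": 2,  "mp_C": (109.5, 109.5)},
    "SDiPK":   {"dof": 4,  "mp_C": (87.0, 89.0)},
    "SDsBK":   {"dof": 12, "mp_C": (44.0, 48.0)},
}

# Verify the trend Figure 2 illustrates: melting point decreases
# monotonically as the quantified degrees of freedom increase.
ordered = sorted(figure_2_data.values(), key=lambda d: d["dof"])
midpoints = [sum(d["mp_C"]) / 2 for d in ordered]
assert all(a > b for a, b in zip(midpoints, midpoints[1:]))
```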
Metal-containing compounds having lower melting points, lower sublimation points, or both, can be advantageous in deposition methods (e.g., CVD and ALD). For example, metal-containing compounds having lower melting points are particularly useful for molten precursor compositions, because the vapor pressure of molten materials is typically higher than that of analogous solid materials at the same temperature. In addition, the surface area of vaporizing molten precursor compositions (and thus the rates of vaporization from and heat transfer to such compositions) can change at regular and predictable rates. Finally, molten precursor compositions are typically not a source for undesirable particles in the deposition process. Thus, for a given class of precursor compositions, molten forms within that class can provide adequate vapor pressure for deposition at lower temperatures than non-molten forms, under reproducible conditions, and preferably without producing problematic particles in the process.

In some embodiments, the metal-containing compounds are homoleptic complexes (i.e., complexes in which the metal is bound to only one type of ligand) that include β-diketiminate ligands, which can be symmetric or unsymmetric. In other embodiments, the metal-containing compounds are heteroleptic complexes (i.e., complexes in which the metal is bound to more than one type of ligand) including at least one β-diketiminate ligand, which can be symmetric or unsymmetric. See, for example, U.S. Application Serial No. 11/169,082 (entitled "UNSYMMETRICAL LIGAND SOURCES, REDUCED SYMMETRY METAL-CONTAINING COMPOUNDS, AND SYSTEMS AND METHODS INCLUDING SAME"), filed June 28, 2005. In some embodiments, the β-diketiminate ligand can be in the η<5>-coordination mode.

COMPOUNDS WITH AT LEAST ONE FLUORINE-CONTAINING ORGANIC GROUP

In one aspect, metal-containing compounds including at least one β-diketiminate ligand having at least one fluorine-containing organic group, and precursor compositions including such compounds, are disclosed. Such metal-containing compounds including at least one fluorine-containing organic group can provide higher volatility than corresponding metal-containing compounds without a fluorine-containing organic group. Metal-containing compounds having higher volatility can be advantageous in deposition methods (e.g., CVD and ALD).

Such compounds include a compound of the formula (Formula I): wherein M is a Group 2 metal (e.g., Ca, Sr, Ba), a Group 3 metal (e.g., Sc, Y, La), a Lanthanide (e.g., Pr, Nd), or a combination thereof. Preferably M is Ca, Sr, or Ba. More preferably M is Sr. Each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n.

Each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety), with the proviso that at least one R group is a fluorine-containing organic group. In certain embodiments, R<1>, R<2>, R<3>, R<4>, and R<5> are each independently hydrogen or an organic group having 1 to 10 carbon atoms (e.g., methyl, ethyl, propyl, isopropyl, butyl, sec-butyl, tert-butyl), and preferably hydrogen or an aliphatic group having 1 to 5 carbon atoms. In certain embodiments, R<3> = H and at least one of R<1>, R<2>, R<4>, and R<5> is a fluorine-containing organic group.
The fluorine-containing organic group may be a partially fluorinated group (i.e., some, but not all, of the hydrogens have been replaced by fluorine) or a fully fluorinated group (i.e., a perfluoro group in which all of the hydrogens have been replaced by fluorine). In certain embodiments, the fluorine-containing organic group is a fluorine-containing aliphatic group, and preferably a fluorine-containing alkyl group. Exemplary fluorine-containing alkyl groups include -CH2F, -CHF2, -CF3, -CH2CF3, -CF2CF3, -CH2CH2CF3, -CF2CF2CF3, -CH(CH3)(CF3), -CH(CF3)2, -CF(CF3)2, -CH2CH2CH2CF3, -CF2CF2CF2CF3, -CH(CF3)(CF2CF3), -CF(CF3)(CF2CF3), -C(CF3)3, and the like.

L can represent a wide variety of anionic ligands. Exemplary anionic ligands (L) include halides, alkoxide groups, amide groups, mercaptide groups, cyanide, alkyl groups, amidinate groups, guanidinate groups, isoureate groups, β-diketonate groups, β-iminoketonate groups, β-diketiminate groups, and combinations thereof. In certain embodiments, L is a β-diketiminate group having a structure that is the same as that of the β-diketiminate ligand shown in Formula I. In other certain embodiments, L is a β-diketiminate group (e.g., symmetric or unsymmetric) having a structure that is different than that of the β-diketiminate ligand shown in Formula I.

Y represents an optional neutral ligand. Exemplary neutral ligands (Y) include carbonyl (CO), nitrosyl (NO), ammonia (NH3), amines (NR3), nitrogen (N2), phosphines (PR3), ethers (ROR), alcohols (ROH), water (H2O), tetrahydrofuran, and combinations thereof, wherein each R independently represents hydrogen or an organic group. The number of optional neutral ligands (Y) is represented by z, which is from 0 to 10, and preferably from 0 to 3. More preferably, Y is not present (i.e., z = 0).

In one embodiment, a metal-containing compound including at least one β-diketiminate ligand having at least one fluorine-containing organic group as a substituent can be made, for example, by a method that includes combining components including a β-diketiminate ligand source having at least one fluorine-containing organic group as a substituent, a metal source, optionally a source for a neutral ligand Y, and a source for an anionic ligand L, which can be the same or different than the β-diketiminate ligand source having at least one fluorine-containing organic group as a substituent. Typically, a ligand source can be deprotonated to become a ligand.

An exemplary method includes combining components including: a ligand source of the formula (Formula III): , a tautomer thereof, or a deprotonated conjugate base or metal complex thereof (e.g., a tin complex); a source for an anionic ligand L (e.g., as described herein); optionally a source for a neutral ligand Y (e.g., as described herein); and a metal (M) source under conditions sufficient to form the metal-containing compound. Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product. Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.
The metal (M) source can be selected from the group consisting of a Group 2 metal source, a Group 3 metal source, a Lanthanide metal source, and combinations thereof. A wide variety of suitable metal sources would be apparent to one of skill. Such metal sources can optionally include at least one neutral ligand Y as defined herein above. Exemplary metal sources include, for example, a M(II) halide (i.e., a M(II) compound having at least one halide ligand), a M(II) pseudohalide (i.e., a M(II) compound having at least one pseudohalide ligand), a M(II) amide (i.e., a M(II) compound having at least one amide ligand, e.g., a M(II) bis(hexamethyldisilazane) and/or a M(II) bis(hexamethyldisilazane)-bis(tetrahydrofuran)), a M(0) for use in a metal exchange reaction with a β-diketiminate metal complex (e.g., a tin complex), or combinations thereof.

Each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an organic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety), with the proviso that at least one R group is a fluorine-containing organic group. In certain embodiments, R<3> = H and at least one of R<1>, R<2>, R<4>, and R<5> is a fluorine-containing organic group.

The method provides a metal-containing compound of the formula (Formula I): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, and R<5> are as defined above, n represents the valence state of the metal, z is from 0 to 10, and x is from 1 to n.

Sources for β-diketiminate ligands having at least one fluorine-containing aliphatic group as a substituent can be made, for example, using condensation reactions. For example, exemplary β-diketiminate ligand sources having at least one fluorine-containing aliphatic group can be made by a method including combining an amine of the formula R<1>NH2 with a compound of the formula (Formula IV): , or a tautomer thereof, in the presence of an agent capable of activating the carbonyl group for reaction with the amine, under conditions sufficient to provide a ligand source of the formula (Formula III): , or a tautomer thereof.

Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product. Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.

Each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (e.g., methyl, ethyl, propyl, isopropyl, butyl, sec-butyl, tert-butyl), with the proviso that at least one of the R groups is a fluorine-containing aliphatic group. In certain embodiments, R<3> = H and at least one of R<1>, R<2>, R<4>, and R<5> is a fluorine-containing aliphatic group. Accordingly, the present invention also provides ligand sources of Formula III.

Tautomers of compounds of Formula III and Formula IV include isomers in which a hydrogen atom is bonded to another atom.
Typically, tautomers can be in equilibrium with one another. Specifically, the present invention contemplates tautomers of Formula III including, for example, Similarly, the present invention contemplates tautomers of Formula IV including, for example,

Suitable activating agents capable of activating a carbonyl group for reaction with an amine are well known to those of skill in the art and include, for example, alkylating agents and Lewis acids (e.g., TiCl4). Exemplary alkylating agents include triethyloxonium tetrafluoroborate, dimethyl sulfate, nitrosoureas, mustard gases (e.g., 1,1-thiobis(2-chloroethane)), and combinations thereof.

Additional metal-containing compounds including at least one β-diketiminate ligand having at least one fluorine-containing organic group can be made, for example, by ligand exchange reactions between a metal-containing compound including at least one β-diketiminate ligand having at least one fluorine-containing organic group, and a metal-containing compound including at least one different β-diketiminate ligand. Such an exemplary method includes combining components including a compound of the formula (Formula I): and a compound of the formula (Formula V): MYzLn-x under conditions sufficient to form the metal-containing compound.

Each M is a Group 2 metal, a Group 3 metal, a Lanthanide, or a combination thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n. Each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an organic group, with the proviso that at least one R group is a fluorine-containing organic group; and the β-diketiminate ligands shown in Formula I and Formula V have different structures.

The method can provide a metal-containing compound of the formula (Formula II): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, R<10>, n, and z are as defined above.

COMPOUNDS WITH AT LEAST ONE SUBSTITUENT HAVING GREATER DEGREES OF FREEDOM

In another aspect, the present invention provides metal-containing compounds having at least one β-diketiminate ligand with at least one aliphatic group (preferably an aliphatic moiety) having 1 to 5 carbon atoms as a substituent, wherein the aliphatic group is selected to have greater degrees of freedom than the corresponding substituent in the β-diketiminate ligands of certain metal-containing compounds known in the art (i.e., compounds of Formula I wherein R<2> = R<4> = methyl; R<3> = H; and R<1> = R<5> = isopropyl or R<1> = R<5> = tert-butyl). See, for example, El-Kaderi et al., Organometallics, 23:4995-5002 (2004).

One scheme for quantifying degrees of freedom of a substituent of a ligand of a metal-containing compound has been disclosed by Li et al. in Inorganic Chemistry, 44:1728-1735 (2005). In this scheme for counting the degrees of freedom, rotations about non-hydrogen single bonds (including the single bond attaching a substituent to a ligand) are counted. However, a single bond that only rotates a methyl group around its 3-fold axis, or a single bond that only rotates a tert-butyl group around its 3-fold axis, is ignored, because the resulting changes in energy might not have much influence on crystal packing.
A chiral carbon atom (i.e., a carbon atom having four different substituents) counts as an additional degree of freedom, because enantiomers cannot interconvert at typical temperatures encountered in deposition methods. The above scheme was used to quantify degrees of freedom for some exemplary substituents, and the results are given in Table 1.

TABLE 1: Total Degrees of Freedom Quantified for Exemplary Substituents

The above described method for quantifying degrees of freedom of a substituent of a ligand of a metal-containing compound is one exemplary approach. One of skill in the art will appreciate that other methods for quantifying degrees of freedom of a substituent of a ligand of a metal-containing compound could also be used as desired.

Such metal-containing compounds having at least one β-diketiminate ligand with at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds can have lower melting points and/or sublimation points than certain known metal-containing compounds with at least one β-diketiminate ligand. Metal-containing compounds having lower melting points, lower sublimation points, or both, can be advantageous in deposition methods (e.g., CVD and ALD). For example, metal-containing compounds having lower melting points are particularly useful for molten precursor compositions, because the vapor pressure of molten materials is typically higher than that of analogous solid materials at the same temperature. In addition, the surface area of vaporizing molten precursor compositions (and thus the rates of vaporization from and heat transfer to such compositions) can change at regular and predictable rates. Finally, molten precursor compositions are typically not a source for undesirable particles in the deposition process. Thus, for a given class of precursor compositions, molten forms within that class can provide adequate vapor pressure for deposition at lower temperatures than non-molten forms, under reproducible conditions, and preferably without producing problematic particles in the process.

In one aspect, the present invention provides metal-containing compounds having at least one β-diketiminate ligand with at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds. Such compounds include a compound of the formula (Formula I): wherein M is a Group 2 metal (e.g., Ca, Sr, Ba), a Group 3 metal (e.g., Sc, Y, La), a Lanthanide (e.g., Pr, Nd), or a combination thereof. Preferably M is Ca, Sr, or Ba. More preferably, M is Sr. Each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n.

In one embodiment, each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl.
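As an illustration of the counting scheme described above, the rule set can be written out programmatically. In the minimal sketch below, the rotatable-bond and chiral-carbon counts for each substituent are entered by hand according to the scheme; the function and its inputs are illustrative assumptions rather than a disclosed algorithm:

```python
def degrees_of_freedom(rotatable_single_bonds, chiral_carbons):
    # Per the scheme of Li et al., Inorganic Chemistry, 44:1728-1735 (2005):
    # count rotations about non-hydrogen single bonds (excluding bonds that
    # merely spin a methyl or tert-butyl group about its 3-fold axis), and
    # add one degree of freedom per chiral carbon atom.
    return rotatable_single_bonds + chiral_carbons

# tert-butyl: its attachment bond only spins the group about a 3-fold axis,
# so no bonds are counted -> 0 degrees of freedom.
assert degrees_of_freedom(0, 0) == 0
# isopropyl: the attachment bond counts; the two C-CH3 bonds only spin
# methyl groups -> 1 degree of freedom.
assert degrees_of_freedom(1, 0) == 1
# sec-butyl: the attachment bond and the C-C bond into the ethyl arm count,
# and the attachment carbon is chiral -> 3 degrees of freedom, matching the
# per-substituent value recited in the Figure 2 description.
assert degrees_of_freedom(2, 1) == 3
```

Summed over the four nitrogen substituents of the two β-diketiminate ligands, these per-substituent counts reproduce the compound totals (0, 2, 4, and 12) recited for SDtBK, SDiPtBK, SDiPK, and SDsBK in the Figure 2 description.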
Notably the moieties listed in the above group all have higher quantified degrees of freedom (e.g., Table 1) than the corresponding substituents (R<2> = R<4> = methyl; and R<3> = H) in the metal-containing compounds disclosed in El-Kaderi et al., Organometallics, 23:4995-5002 (2004). In another embodiment, each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl. Notably the moieties listed in the above group all have higher quantified degrees of freedom (e.g., Table 1) than the corresponding substituents (R<1> = R<5> = isopropyl; or R<1> = R<5> = tert-butyl) in the metal-containing compounds disclosed in El-Kaderi et al., Organometallics, 23:4995-5002 (2004).

L can represent a wide variety of anionic ligands. Exemplary anionic ligands (L) include halides, alkoxide groups, amide groups, mercaptide groups, cyanide, alkyl groups, amidinate groups, guanidinate groups, isoureate groups, [beta]-diketonate groups, [beta]-iminoketonate groups, [beta]-diketiminate groups, and combinations thereof. In certain embodiments, L is a [beta]-diketiminate group having a structure that is the same as that of the [beta]-diketiminate ligand shown in Formula I. In other certain embodiments, L is a [beta]-diketiminate group (e.g., symmetric or unsymmetric) having a structure that is different than that of the [beta]-diketiminate ligand shown in Formula I.

Y represents an optional neutral ligand. Exemplary neutral ligands (Y) include carbonyl (CO), nitrosyl (NO), ammonia (NH3), amines (NR3), nitrogen (N2), phosphines (PR3), ethers (ROR), alcohols (ROH), water (H2O), tetrahydrofuran, and combinations thereof, wherein each R independently represents hydrogen or an organic group. The number of optional neutral ligands (Y) is represented by z, which is from 0 to 10, and preferably from 0 to 3. More preferably, Y is not present (i.e., z = 0).

In one embodiment, a metal-containing compound including at least one [beta]-diketiminate ligand with at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds can be made, for example, by a method that includes combining components including a [beta]-diketiminate ligand source with at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds, a metal source, optionally a source for a neutral ligand Y, and a source for an anionic ligand L, which can be the same or different than the [beta]-diketiminate ligand source with at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds. Typically, a ligand source can be deprotonated to become a ligand.

An exemplary method includes combining components including: a ligand source of the formula (Formula III), a tautomer thereof, or a deprotonated conjugate base or metal complex thereof; a source for an anionic ligand L (e.g., as described herein); optionally a source for a neutral ligand Y (e.g., as described herein); and a metal (M) source under conditions sufficient to form the metal-containing compound.
Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product. Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.

The metal (M) source can be selected from the group consisting of a Group 2 metal source, a Group 3 metal source, a Lanthanide metal source, and combinations thereof. A wide variety of suitable metal sources would be apparent to one of skill. Such metal sources can optionally include at least one neutral ligand Y as defined herein above. Exemplary metal sources include, for example, a M(II) halide (i.e., a M(II) compound having at least one halide ligand), a M(II) pseudohalide (i.e., a M(II) compound having at least one pseudohalide ligand), a M(II) amide (i.e., a M(II) compound having at least one amide ligand, e.g., a M(II) bis(hexamethyldisilazane) and/or a M(II) bis(hexamethyldisilazane)-bis(tetrahydrofuran)), a M(0) for use in a metal exchange reaction with a [beta]-diketiminate metal complex (e.g., a tin complex), or combinations thereof.

In one embodiment, each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl. In another embodiment, each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.

The method provides a metal-containing compound of the formula (Formula I): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, and R<5> are as defined above, n represents the valence state of the metal, z is from 0 to 10, and x is from 1 to n.

Sources for [beta]-diketiminate ligands having at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds can be made, for example, using condensation reactions. For example, exemplary [beta]-diketiminate ligand sources having at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds can be made by a method including combining an amine of the formula R<1>NH2 with a compound of the formula (Formula IV), or a tautomer thereof, in the presence of an agent capable of activating the carbonyl group for reaction with the amine, under conditions sufficient to provide a ligand source of the formula (Formula III), or a tautomer thereof.
Preferably, the components are combined in an organic solvent (e.g., heptane, toluene, or diethyl ether), typically under mixing or stirring conditions, and allowed to react at a convenient temperature (e.g., room temperature or below, refluxing or above, or an intermediate temperature) for a length of time to form a sufficient amount of the desired product. Preferably, the components are combined under an inert atmosphere (e.g., argon), typically in the substantial absence of water.

In one embodiment, each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, and R<4> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl. In another embodiment, each R<1>, R<2>, R<3>, R<4>, and R<5> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<1> and R<5> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl.

Tautomers of compounds of Formula III and Formula IV include isomers in which a hydrogen atom is bonded to another atom. Typically, tautomers can be in equilibrium with one another. Specifically, the present invention contemplates tautomers of Formula III including, for example, the structures depicted. Similarly, the present invention contemplates tautomers of Formula IV including, for example, the structures depicted.

Suitable activating agents capable of activating a carbonyl group for reaction with an amine are well known to those of skill in the art and include, for example, alkylating agents and Lewis acids (e.g., TiCl4). Exemplary alkylating agents include triethyloxonium tetrafluoroborate, dimethyl sulfate, nitrosoureas, mustard gases (e.g., 1,1'-thiobis(2-chloroethane)), and combinations thereof.

Additional metal-containing compounds including at least one [beta]-diketiminate ligand having at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds can be made, for example, by ligand exchange reactions between a metal-containing compound including at least one [beta]-diketiminate ligand having at least one substituent having greater degrees of freedom than the corresponding substituent in certain known metal-containing compounds, and a metal-containing compound including at least one different [beta]-diketiminate ligand.
Such an exemplary method includes combining components including a compound of the formula (Formula I): and a compound of the formula (Formula V): MYzLn-x, under conditions sufficient to form the metal-containing compound. Each M is a Group 2 metal, a Group 3 metal, a Lanthanide, or a combination thereof; each L is independently an anionic ligand; each Y is independently a neutral ligand; n represents the valence state of the metal; z is from 0 to 10; and x is from 1 to n.

In one embodiment, each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<2>, R<3>, R<4>, R<7>, R<8>, and R<9> is a moiety selected from the group consisting of ethyl, n-propyl, isopropyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, tert-pentyl, and neopentyl; and the [beta]-diketiminate ligands shown in Formula I and Formula V have different structures. In another embodiment, each R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, and R<10> is independently hydrogen or an aliphatic group (e.g., an alkyl group or, in certain embodiments, an alkyl moiety) having 1 to 5 carbon atoms, with the proviso that at least one of R<1>, R<5>, R<6>, and R<10> is a moiety selected from the group consisting of n-propyl, n-butyl, sec-butyl, isobutyl, n-pentyl, 2-pentyl, 3-pentyl, 2-methyl-1-butyl, 3-methyl-2-butyl, isopentyl, and tert-pentyl; and the [beta]-diketiminate ligands shown in Formula I and Formula V have different structures.

The method can provide a metal-containing compound of the formula (Formula II): wherein M, L, Y, R<1>, R<2>, R<3>, R<4>, R<5>, R<6>, R<7>, R<8>, R<9>, R<10>, n, and z are as defined above.

OTHER METAL-CONTAINING COMPOUNDS

Precursor compositions that include a metal-containing compound that includes at least one [beta]-diketiminate ligand can be useful for depositing metal-containing layers using vapor deposition methods. In addition, such vapor deposition methods can also include precursor compositions that include one or more different metal-containing compounds. Such precursor compositions can be deposited/chemisorbed, for example in an ALD process discussed more fully below, substantially simultaneously with or sequentially to, the precursor compositions including metal-containing compounds with at least one [beta]-diketiminate ligand. The metals of such different metal-containing compounds can include, for example, Ti, Ta, Bi, Hf, Zr, Pb, Nb, Mg, Al, and combinations thereof. Suitable different metal-containing compounds include, for example, tetrakis titanium isopropoxide, titanium tetrachloride, trichlorotitanium dialkylamides, tetrakis titanium dialkylamides, tetrakis hafnium dialkylamides, trimethyl aluminum, zirconium (IV) chloride, pentakis tantalum ethoxide, and combinations thereof.

VAPOR DEPOSITION METHODS

The metal-containing layer can be deposited, for example, on a substrate (e.g., a semiconductor substrate or substrate assembly). "Semiconductor substrate" or "substrate assembly" as used herein refer to a semiconductor substrate such as a base semiconductor layer or a semiconductor substrate having one or more layers, structures, or regions formed thereon. A base semiconductor layer is typically the lowest layer of silicon material on a wafer or a silicon layer deposited on another material, such as silicon on sapphire.
When reference is made to a substrate assembly, various process steps may have been previously used to form or define regions, junctions, various structures or features, and openings such as transistors, active areas, diffusions, implanted regions, vias, contact openings, high aspect ratio openings, capacitor plates, barriers for capacitors, etc. "Layer," as used herein, refers to any layer that can be formed on a substrate from one or more precursors and/or reactants according to the deposition process described herein. The term "layer" is meant to include layers specific to the semiconductor industry, such as, but clearly not limited to, a barrier layer, dielectric layer (i.e., a layer having a high dielectric constant), and conductive layer. The term "layer" is synonymous with the term "film" frequently used in the semiconductor industry. The term "layer" is also meant to include layers found in technology outside of semiconductor technology, such as coatings on glass. For example, such layers can be formed directly on fibers, wires, etc., which are substrates other than semiconductor substrates. Further, the layers can be formed directly on the lowest semiconductor surface of the substrate, or they can be formed on any of a variety of layers (e.g., surfaces) as in, for example, a patterned wafer.

The layers or films formed may be in the form of metal-containing films, such as reduced metals, metal silicates, metal oxides, metal nitrides, etc., as well as combinations thereof. For example, a metal oxide layer may include a single metal, the metal oxide layer may include two or more different metals (i.e., it is a mixed metal oxide), or a metal oxide layer may optionally be doped with other metals. If the metal oxide layer includes two or more different metals, the metal oxide layer can be in the form of alloys, solid solutions, or nanolaminates. Preferably, these have dielectric properties. The metal oxide layer (particularly if it is a dielectric layer) preferably includes one or more of BaTiO3, SrTiO3, CaTiO3, (Ba,Sr)TiO3, SrTa2O6, SrBi2Ta2O9 (SBT), SrHfO3, SrZrO3, BaHfO3, BaZrO3, (Pb,Ba)Nb2O6, (Sr,Ba)Nb2O6, Pb[(Sc,Nb)0.575Ti0.425]O3 (PSNT), La2O3, Y2O3, LaAlO3, YAlO3, Pr2O3, Ba(Li1/4Nb3/4)O3-PbTiO3, and Ba0.6Sr0.4TiO3-MgO. Surprisingly, the metal oxide layer formed according to the present invention is essentially free of carbon. Preferably metal-oxide layers formed by the systems and methods of the present invention are essentially free of carbon, hydrogen, halides, phosphorus, sulfur, nitrogen, or compounds thereof. As used herein, "essentially free" is defined to mean that the metal-containing layer may include a small amount of the above impurities. For example, for metal-oxide layers, "essentially free" means that the above impurities are present in an amount of less than 1 atomic percent, such that they have a minor effect on the chemical properties, mechanical properties, physical form (e.g., crystallinity), or electrical properties of the film.

Various metal-containing compounds can be used in various combinations, optionally with one or more organic solvents (particularly for CVD processes), to form a precursor composition. Advantageously, some of the metal-containing compounds disclosed herein can be used in ALD without adding solvents. "Precursor" and "precursor composition" as used herein refer to a composition usable for forming, either alone or with other precursor compositions (or reactants), a layer on a substrate assembly in a deposition process.
Further, one skilled in the art will recognize that the type and amount of precursor used will depend on the content of a layer which is ultimately to be formed using a vapor deposition process. The preferred precursor compositions of the present invention are liquid at the vaporization temperature and, more preferably, liquid at room temperature.

The precursor compositions may be liquids or solids at room temperature (preferably, they are liquids at the vaporization temperature). Typically, they are liquids sufficiently volatile to be employed using known vapor deposition techniques. However, as solids they may also be sufficiently volatile that they can be vaporized or sublimed from the solid state using known vapor deposition techniques. If they are less volatile solids, they are preferably sufficiently soluble in an organic solvent or have melting points below their decomposition temperatures such that they can be used in flash vaporization, bubbling, microdroplet formation techniques, etc. Herein, vaporized metal-containing compounds may be used either alone or optionally with vaporized molecules of other metal-containing compounds or optionally with vaporized solvent molecules or inert gas molecules, if used. As used herein, "liquid" refers to a solution or a neat liquid (a liquid at room temperature or a solid at room temperature that melts at an elevated temperature). As used herein, "solution" does not require complete solubility of the solid but may allow for some undissolved solid, as long as there is a sufficient amount of the solid delivered by the organic solvent into the vapor phase for chemical vapor deposition processing. If solvent dilution is used in deposition, the total molar concentration of solvent vapor generated may also be considered as an inert carrier gas.

"Inert gas" or "non-reactive gas," as used herein, is any gas that is generally unreactive with the components it comes in contact with. For example, inert gases are typically selected from a group including nitrogen, argon, helium, neon, krypton, xenon, any other non-reactive gas, and mixtures thereof. Such inert gases are generally used in one or more purging processes described according to the present invention, and in some embodiments may also be used to assist in precursor vapor transport.

Solvents that are suitable for certain embodiments of the present invention may be one or more of the following: aliphatic hydrocarbons or unsaturated hydrocarbons (C3-C20, and preferably C5-C10, cyclic, branched, or linear), aromatic hydrocarbons (C5-C20, and preferably C5-C10), halogenated hydrocarbons, silylated hydrocarbons such as alkylsilanes, alkylsilicates, ethers, polyethers, thioethers, esters, lactones, nitriles, silicone oils, or compounds containing combinations of any of the above or mixtures of one or more of the above. The compounds are also generally compatible with each other, so that mixtures of variable quantities of the metal-containing compounds will not interact to significantly change their physical properties.

The precursor compositions of the present invention can, optionally, be vaporized and deposited/chemisorbed substantially simultaneously with, and in the presence of, one or more reaction gases. Alternatively, the metal-containing layers may be formed by alternately introducing the precursor composition and the reaction gas(es) during each deposition cycle.
Such reaction gases may typically include oxygen, water vapor, ozone, nitrogen oxides, sulfur oxides, hydrogen, hydrogen sulfide, hydrogen selenide, hydrogen telluride, hydrogen peroxide, ammonia, organic amines, hydrazines (e.g., hydrazine, methylhydrazine, symmetrical and unsymmetrical dimethylhydrazines), silanes, disilanes and higher silanes, diborane, plasma, air, borazene (nitrogen source), carbon monoxide (reductant), alcohols, and any combination of these gases. For example, oxygen-containing sources are typically used for the deposition of metal-oxide layers. Preferable optional reaction gases used in the formation of metal-oxide layers include oxidizing gases (e.g., oxygen, ozone, and nitric oxide).

Suitable substrate materials of the present invention include conductive materials, semiconductive materials, conductive metal-nitrides, conductive metals, conductive metal oxides, etc. The substrate on which the metal-containing layer is formed is preferably a semiconductor substrate or substrate assembly. A wide variety of semiconductor materials are contemplated, such as, for example, borophosphosilicate glass (BPSG), silicon such as, e.g., conductively doped polysilicon, monocrystalline silicon, etc. (for this invention, appropriate forms of silicon are simply referred to as "silicon"), for example in the form of a silicon wafer, tetraethylorthosilicate (TEOS) oxide, spin-on glass (i.e., a thin layer of SiO2, optionally doped, deposited by a spin-on process), TiN, TaN, W, Ru, Al, Cu, noble metals, etc. A substrate assembly may also contain a layer that includes platinum, iridium, iridium oxide, rhodium, ruthenium, ruthenium oxide, strontium ruthenate, lanthanum nickelate, titanium nitride, tantalum nitride, tantalum-silicon-nitride, silicon dioxide, aluminum, gallium arsenide, glass, etc., and other existing or to-be-developed materials used in semiconductor constructions, such as dynamic random access memory (DRAM) devices, static random access memory (SRAM) devices, and ferroelectric memory (FeRAM) devices, for example.

For substrates including semiconductor substrates or substrate assemblies, the layers can be formed directly on the lowest semiconductor surface of the substrate, or they can be formed on any of a variety of the layers (i.e., surfaces) as in a patterned wafer, for example. Substrates other than semiconductor substrates or substrate assemblies can also be used in methods of the present invention. Any substrate that may advantageously form a metal-containing layer thereon, such as a metal oxide layer, may be used, such substrates including, for example, fibers, wires, etc.

A preferred deposition process for the present invention is a vapor deposition process. Vapor deposition processes are generally favored in the semiconductor industry due to the process capability to quickly provide highly conformal layers even within deep contacts and other openings. The precursor compositions can be vaporized in the presence of an inert carrier gas if desired. Additionally, an inert carrier gas can be used in purging steps in an ALD process (discussed below). The inert carrier gas is typically one or more of nitrogen, helium, argon, etc. In the context of the present invention, an inert carrier gas is one that does not interfere with the formation of the metal-containing layer.
Whether done in the presence of an inert carrier gas or not, the vaporization is preferably done in the absence of oxygen to avoid oxygen contamination of the layer (e.g., oxidation of silicon to form silicon dioxide or oxidation of precursor in the vapor phase prior to entry into the deposition chamber).

Chemical vapor deposition (CVD) and atomic layer deposition (ALD) are two vapor deposition processes often employed to form thin, continuous, uniform, metal-containing layers onto semiconductor substrates. Using either vapor deposition process, typically one or more precursor compositions are vaporized in a deposition chamber and optionally combined with one or more reaction gases and directed to and/or contacted with the substrate to form a metal-containing layer on the substrate. It will be readily apparent to one skilled in the art that the vapor deposition process may be enhanced by employing various related techniques such as plasma assistance, photo assistance, laser assistance, as well as other techniques.

Chemical vapor deposition (CVD) has been extensively used for the preparation of metal-containing layers, such as dielectric layers, in semiconductor processing because of its ability to provide conformal and high quality dielectric layers at relatively fast processing times. Typically, the desired precursor compositions are vaporized and then introduced into a deposition chamber containing a heated substrate with optional reaction gases and/or inert carrier gases in a single deposition cycle. In a typical CVD process, vaporized precursors are contacted with reaction gas(es) at the substrate surface to form a layer (e.g., dielectric layer). The single deposition cycle is allowed to continue until the desired thickness of the layer is achieved.

Typical CVD processes generally employ precursor compositions in vaporization chambers that are separated from the process chamber wherein the deposition surface or wafer is located. For example, liquid precursor compositions are typically placed in bubblers and heated to a temperature at which they vaporize, and the vaporized liquid precursor composition is then transported by an inert carrier gas passing over the bubbler or through the liquid precursor composition. The vapors are then swept through a gas line to the deposition chamber for depositing a layer on substrate surface(s) therein. Many techniques have been developed to precisely control this process. For example, the amount of precursor composition transported to the deposition chamber can be precisely controlled by the temperature of the reservoir containing the precursor composition and by the flow of an inert carrier gas bubbled through or passed over the reservoir.

A typical CVD process may be carried out in a chemical vapor deposition reactor, such as a deposition chamber available under the trade designation of 7000 from Genus, Inc. (Sunnyvale, CA), a deposition chamber available under the trade designation of 5000 from Applied Materials, Inc. (Santa Clara, CA), or a deposition chamber available under the trade designation of Prism from Novellus, Inc. (San Jose, CA). However, any deposition chamber suitable for performing CVD may be used.

Several modifications of the CVD process and chambers are possible, for example, using atmospheric pressure chemical vapor deposition, low pressure chemical vapor deposition (LPCVD), plasma enhanced chemical vapor deposition (PECVD), hot wall or cold wall reactors, or any other chemical vapor deposition technique.
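As a rough illustration of the bubbler delivery control described above, the sketch below estimates precursor delivery from carrier flow and vapor pressure, assuming the carrier gas leaves the bubbler saturated with precursor vapor and that ideal-gas behavior holds. The numeric values are placeholders for illustration only; they are not process data from this disclosure.

```python
# Estimate precursor delivery (expressed as an equivalent gas flow, sccm)
# for a carrier gas bubbled through a liquid precursor held at constant
# temperature, assuming the outlet stream is saturated with vapor.

def precursor_flow_sccm(carrier_flow_sccm, p_vapor_torr, p_total_torr):
    if p_vapor_torr >= p_total_torr:
        raise ValueError("bubbler pressure must exceed the vapor pressure")
    return carrier_flow_sccm * p_vapor_torr / (p_total_torr - p_vapor_torr)

# e.g., 50 sccm of argon over a precursor with 0.5 torr vapor pressure in a
# bubbler held at 40 torr total pressure (illustrative numbers only):
print(f"{precursor_flow_sccm(50.0, 0.5, 40.0):.2f} sccm of precursor vapor")
```

The same relation shows why the reservoir temperature (which sets the vapor pressure) and the carrier flow are the two control knobs singled out in the text.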
Furthermore, pulsed CVD can be used, which is similar to ALD (discussed in greater detail below) but does not rigorously avoid intermixing of precursor and reactant gas streams. Also, for pulsed CVD, the deposition thickness is dependent on the exposure time, as opposed to ALD, which is self-limiting (discussed in more detail below).

Alternatively, and preferably, the vapor deposition process employed in the methods of the present invention is a multi-cycle atomic layer deposition (ALD) process. Such a process is advantageous, in particular advantageous over a CVD process, in that it provides for improved control of atomic-level thickness and uniformity to the deposited layer (e.g., dielectric layer) by providing a plurality of deposition cycles. The self-limiting nature of ALD provides a method of depositing a film on a wide variety of reactive surfaces including, for example, surfaces with irregular topographies, with better step coverage than is available with CVD or other "line of sight" deposition methods (e.g., evaporation and physical vapor deposition, i.e., PVD or sputtering). Further, ALD processes typically expose the metal-containing compounds to lower volatilization and reaction temperatures, which tends to decrease degradation of the precursor as compared to, for example, typical CVD processes. See, for example, U.S. Application Serial No. 11/168,160 (entitled "ATOMIC LAYER DEPOSITION SYSTEMS AND METHODS INCLUDING METAL BETA-DIKETIMINATE COMPOUNDS"), filed June 28, 2005.

Generally, in an ALD process each reactant is pulsed sequentially onto a suitable substrate, typically at deposition temperatures of at least 25°C, preferably at least 150°C, and more preferably at least 200°C. Typical ALD deposition temperatures are no greater than 400°C, preferably no greater than 350°C, and even more preferably no greater than 250°C. These temperatures are generally lower than those presently used in CVD processes, which typically include deposition temperatures at the substrate surface of at least 150°C, preferably at least 200°C, and more preferably at least 250°C. Typical CVD deposition temperatures are no greater than 600°C, preferably no greater than 500°C, and even more preferably no greater than 400°C.

Under such conditions the film growth by ALD is typically self-limiting (i.e., when the reactive sites on a surface are used up in an ALD process, the deposition generally stops), ensuring not only excellent conformality but also good large-area uniformity plus simple and accurate composition and thickness control. Due to alternate dosing of the precursor compositions and/or reaction gases, detrimental vapor-phase reactions are inherently eliminated, in contrast to the CVD process that is carried out by continuous co-reaction of the precursors and/or reaction gases. (See Vehkamäki et al., "Growth of SrTiO3 and BaTiO3 Thin Films by Atomic Layer Deposition," Electrochemical and Solid-State Letters, 2(10):504-506 (1999)).

A typical ALD process includes exposing a substrate (which may optionally be pretreated with, for example, water and/or ozone) to a first chemical to accomplish chemisorption of the species onto the substrate. The term "chemisorption" as used herein refers to the chemical adsorption of vaporized reactive metal-containing compounds on the surface of a substrate.
The adsorbed species are typically irreversibly bound to the substrate surface as a result of relatively strong binding forces characterized by high adsorption energies (e.g., >30 kcal/mol), comparable in strength to ordinary chemical bonds. The chemisorbed species typically form a monolayer on the substrate surface. (See "The Condensed Chemical Dictionary," 10th edition, revised by G. G. Hawley, published by Van Nostrand Reinhold Co., New York, 225 (1981)). The technique of ALD is based on the principle of the formation of a saturated monolayer of reactive precursor molecules by chemisorption. In ALD one or more appropriate precursor compositions or reaction gases are alternately introduced (e.g., pulsed) into a deposition chamber and chemisorbed onto the surfaces of a substrate. Each sequential introduction of a reactive compound (e.g., one or more precursor compositions and one or more reaction gases) is typically separated by an inert carrier gas purge. Each precursor composition co-reaction adds a new atomic layer to previously deposited layers to form a cumulative solid layer. The cycle is repeated to gradually form the desired layer thickness. It should be understood that ALD can alternately utilize one precursor composition, which is chemisorbed, and one reaction gas, which reacts with the chemisorbed species.

Practically, chemisorption might not occur on all portions of the deposition surface (e.g., previously deposited ALD material). Nevertheless, such an imperfect monolayer is still considered a monolayer in the context of the present invention. In many applications, merely a substantially saturated monolayer may be suitable. A substantially saturated monolayer is one that will still yield a deposited monolayer or less of material exhibiting the desired quality and/or properties.

A typical ALD process includes exposing an initial substrate to a first chemical species A (e.g., a metal-containing compound as described herein) to accomplish chemisorption of the species onto the substrate. Species A can react either with the substrate surface or with Species B (described below) but not with itself. Typically in chemisorption, one or more of the ligands of Species A is displaced by reactive groups on the substrate surface. Theoretically, the chemisorption forms a monolayer that is uniformly one atom or molecule thick on the entire exposed initial substrate, the monolayer being composed of Species A, less any displaced ligands. In other words, a saturated monolayer is substantially formed on the substrate surface. Practically, chemisorption may not occur on all portions of the substrate. Nevertheless, such a partial monolayer is still understood to be a monolayer in the context of the present invention. In many applications, merely a substantially saturated monolayer may be suitable. In one aspect, a substantially saturated monolayer is one that will still yield a deposited monolayer or less of material exhibiting the desired quality and/or properties. In another aspect, a substantially saturated monolayer is one that is self-limited to further reaction with precursor.

The first species (e.g., substantially all non-chemisorbed molecules of Species A) as well as displaced ligands are purged from over the substrate, and a second chemical species, Species B (e.g., a different metal-containing compound or reactant gas), is provided to react with the monolayer of Species A.
Species B typically displaces the remaining ligands from the Species A monolayer and thereby is chemisorbed and forms a second monolayer. This second monolayer displays a surface which is reactive only to Species A. Non-chemisorbed Species B, as well as displaced ligands and other byproducts of the reaction, are then purged and the steps are repeated with exposure of the Species B monolayer to vaporized Species A. Optionally, the second species can react with the first species, but not chemisorb additional material thereto. That is, the second species can cleave some portion of the chemisorbed first species, altering such monolayer without forming another monolayer thereon, but leaving reactive sites available for formation of subsequent monolayers. In other ALD processes, a third species or more may be successively chemisorbed (or reacted) and purged just as described for the first and second species, with the understanding that each introduced species reacts with the monolayer produced immediately prior to its introduction. Optionally, the second species (or third or subsequent) can include at least one reaction gas if desired.

Thus, the use of ALD provides the ability to improve the control of thickness, composition, and uniformity of metal-containing layers on a substrate. For example, depositing thin layers of metal-containing compound in a plurality of cycles provides a more accurate control of ultimate film thickness. This is particularly advantageous when the precursor composition is directed to the substrate and allowed to chemisorb thereon, preferably further including at least one reaction gas that reacts with the chemisorbed species on the substrate, and even more preferably wherein this cycle is repeated at least once.

Purging of excess vapor of each species following deposition/chemisorption onto a substrate may involve a variety of techniques including, but not limited to, contacting the substrate and/or monolayer with an inert carrier gas and/or lowering pressure to below the deposition pressure to reduce the concentration of a species contacting the substrate and/or chemisorbed species. Examples of carrier gases, as discussed above, may include N2, Ar, He, etc. Additionally, purging may instead include contacting the substrate and/or monolayer with any substance that allows chemisorption byproducts to desorb and reduces the concentration of a contacting species preparatory to introducing another species. The contacting species may be reduced to some suitable concentration or partial pressure known to those skilled in the art based on the specifications for the product of a particular deposition process.

ALD is often described as a self-limiting process, in that a finite number of sites exist on a substrate to which the first species may form chemical bonds. The second species might only react with the surface created from the chemisorption of the first species and thus may also be self-limiting. Once all of the finite number of sites on a substrate are bonded with a first species, the first species will not bond to other of the first species already bonded with the substrate. However, process conditions can be varied in ALD to promote such bonding and render ALD not self-limiting, e.g., more like pulsed CVD. Accordingly, ALD may also encompass a species forming other than one monolayer at a time by stacking of a species, forming a layer more than one atom or molecule thick.
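The self-limiting behavior and cycle arithmetic described in this section can be caricatured in a few lines. The sketch below models site filling during a pulse as first-order Langmuir kinetics and counts the cycles needed for a target thickness; the rate constant is an assumed placeholder, while the 0.2 to 3.0 Angstrom-per-cycle growth rates and 10 to 300 Angstrom thicknesses are taken from the discussion that follows.

```python
import math

# Caricature of self-limiting chemisorption as first-order Langmuir site
# filling: coverage(t) = 1 - exp(-k * t). The rate constant k is an assumed
# placeholder; real saturation depends on precursor flux and sticking
# probability.
def coverage(pulse_seconds: float, k_per_second: float) -> float:
    return 1.0 - math.exp(-k_per_second * pulse_seconds)

# Cycles needed for a target thickness at a given per-cycle growth rate.
def cycles_needed(target_angstroms: float, growth_per_cycle: float) -> int:
    return math.ceil(target_angstroms / growth_per_cycle)

for t in (0.1, 0.5, 1.0, 3.0):
    print(f"{t:4.1f} s pulse -> {coverage(t, 5.0):.3f} of a monolayer")
for gpc in (0.2, 1.0, 3.0):
    print(f"100 A film at {gpc} A/cycle -> {cycles_needed(100, gpc)} cycles")
# Coverage asymptotes toward one monolayer, which is why lengthening a pulse
# past saturation adds no material (unlike pulsed CVD, where thickness keeps
# tracking exposure time).
```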
The described method indicates the "substantial absence" of the second precursor (i.e., second species) during chemisorption of the first precursor, since insignificant amounts of the second precursor might be present. According to the knowledge and the preferences of those with ordinary skill in the art, a determination can be made as to the tolerable amount of second precursor and process conditions selected to achieve the substantial absence of the second precursor.

Thus, during the ALD process, numerous consecutive deposition cycles are conducted in the deposition chamber, each cycle depositing a very thin metal-containing layer (usually less than one monolayer such that the growth rate on average is 0.2 to 3.0 Angstroms per cycle), until a layer of the desired thickness is built up on the substrate of interest. The layer deposition is accomplished by alternately introducing (i.e., by pulsing) precursor composition(s) into the deposition chamber containing a substrate, chemisorbing the precursor composition(s) as a monolayer onto the substrate surfaces, purging the deposition chamber, then introducing to the chemisorbed precursor composition(s) reaction gases and/or other precursor composition(s) in a plurality of deposition cycles until the desired thickness of the metal-containing layer is achieved. Preferred thicknesses of the metal-containing layers of the present invention are at least 1 angstrom (Å), more preferably at least 5 Å, and more preferably at least 10 Å. Additionally, preferred film thicknesses are typically no greater than 500 Å, more preferably no greater than 400 Å, and more preferably no greater than 300 Å.

The pulse duration of precursor composition(s) and inert carrier gas(es) is generally of a duration sufficient to saturate the substrate surface. Typically, the pulse duration is at least 0.1 second, preferably at least 0.2 second, and more preferably at least 0.5 second. Preferred pulse durations are generally no greater than 5 seconds, and preferably no greater than 3 seconds.

In comparison to the predominantly thermally driven CVD, ALD is predominantly chemically driven. Thus, ALD may advantageously be conducted at much lower temperatures than CVD. During the ALD process, the substrate temperature may be maintained at a temperature sufficiently low to maintain intact bonds between the chemisorbed precursor composition(s) and the underlying substrate surface and to prevent decomposition of the precursor composition(s). The temperature, on the other hand, must be sufficiently high to avoid condensation of the precursor composition(s). Typically the substrate is kept at a temperature of at least 25°C, preferably at least 150°C, and more preferably at least 200°C. Typically the substrate is kept at a temperature of no greater than 400°C, preferably no greater than 300°C, and more preferably no greater than 250°C, which, as discussed above, is generally lower than temperatures presently used in typical CVD processes. Thus, the first species or precursor composition is chemisorbed at this temperature. Surface reaction of the second species or precursor composition can occur at substantially the same temperature as chemisorption of the first precursor or, optionally but less preferably, at a substantially different temperature.
Clearly, some small variation in temperature, as judged by those of ordinary skill, can occur but still be considered substantially the same temperature by providing a reaction rate statistically the same as would occur at the temperature of the first precursor chemisorption. Alternatively, chemisorption and subsequent reactions could instead occur at substantially exactly the same temperature.

For a typical vapor deposition process, the pressure inside the deposition chamber is at least 10<-8> torr (1.3 x 10<-6> Pa), preferably at least 10<-7> torr (1.3 x 10<-5> Pa), and more preferably at least 10<-6> torr (1.3 x 10<-4> Pa). Further, deposition pressures are typically no greater than 10 torr (1.3 x 10<3> Pa), preferably no greater than 1 torr (1.3 x 10<2> Pa), and more preferably no greater than 10<-1> torr (13 Pa). Typically, the deposition chamber is purged with an inert carrier gas after the vaporized precursor composition(s) have been introduced into the chamber and/or reacted for each cycle. The inert carrier gas/gases can also be introduced with the vaporized precursor composition(s) during each cycle.

The reactivity of a precursor composition can significantly influence the process parameters in ALD. Under typical CVD process conditions, a highly reactive compound may react in the gas phase, generating particulates, depositing prematurely on undesired surfaces, producing poor films, and/or yielding poor step coverage or otherwise yielding non-uniform deposition. For at least such reason, a highly reactive compound might be considered not suitable for CVD. However, some compounds not suitable for CVD are superior ALD precursors. For example, if the first precursor is gas-phase reactive with the second precursor, such a combination of compounds might not be suitable for CVD, although they could be used in ALD. In the CVD context, concern might also exist regarding sticking coefficients and surface mobility, as known to those skilled in the art, when using highly gas-phase reactive precursors; however, little or no such concern would exist in the ALD context.

After layer formation on the substrate, an annealing process may be optionally performed in situ in the deposition chamber in a reducing, inert, plasma, or oxidizing atmosphere. Preferably, the annealing temperature is at least 400°C, more preferably at least 600°C. The annealing temperature is preferably no greater than 1000°C, more preferably no greater than 750°C, and even more preferably no greater than 700°C. The annealing operation is preferably performed for a time period of at least 0.5 minute, more preferably for a time period of at least 1 minute. Additionally, the annealing operation is preferably performed for a time period of no greater than 60 minutes, and more preferably for a time period of no greater than 10 minutes. One skilled in the art will recognize that such temperatures and time periods may vary. For example, furnace anneals and rapid thermal annealing may be used, and further, such anneals may be performed in one or more annealing steps.

As stated above, the use of the compounds and methods of forming films of the present invention are beneficial for a wide variety of thin film applications in semiconductor structures, particularly those using high dielectric materials.
For example, such applications include gate dielectrics and capacitors such as planar cells, trench cells (e.g., double sidewall trench capacitors), stacked cells (e.g., crown, V-cell, delta cell, multi-fingered, or cylindrical container stacked capacitors), as well as field effect transistor devices.

A system that can be used to perform vapor deposition processes (chemical vapor deposition or atomic layer deposition) of the present invention is shown in Figure 1. The system includes an enclosed vapor deposition chamber 10, in which a vacuum may be created using turbo pump 12 and backing pump 14. One or more substrates 16 (e.g., semiconductor substrates or substrate assemblies) are positioned in chamber 10. A constant nominal temperature is established for substrate 16, which can vary depending on the process used. Substrate 16 may be heated, for example, by an electrical resistance heater 18 on which substrate 16 is mounted. Other known methods of heating the substrate may also be utilized. In this process, precursor compositions as described herein, 60 and/or 61, are stored in vessels 62. The precursor composition(s) are vaporized and separately fed along lines 64 and 66 to the deposition chamber 10 using, for example, an inert carrier gas 68. A reaction gas 70 may be supplied along line 72 as needed. Also, a purge gas 74, which is often the same as the inert carrier gas 68, may be supplied along line 76 as needed. As shown, a series of valves 80-85 are opened and closed as required.

The following examples are offered to further illustrate various specific embodiments and techniques of the present invention. It should be understood, however, that many variations and modifications understood by those of ordinary skill in the art may be made while remaining within the scope of the present invention. Therefore, the scope of the invention is not intended to be limited by the following examples. Unless specified otherwise, all percentages shown in the examples are percentages by weight.

EXAMPLES

EXAMPLE 1: Synthesis and Characterization of a Ligand Source of Formula III, with R<1> = R<5> = sec-butyl; R<2> = R<4> = methyl; and R<3> = H: N-sec-butyl-(4-sec-butylimino)-2-penten-2-amine.

An oven-dry 1-L Schlenk flask fitted with an addition funnel was charged with 101 mL of sec-butylamine and 200 mL dichloromethane. The addition funnel was then charged with 103 mL of 2,4-pentanedione and 400 mL dichloromethane, which were then added dropwise to the solution in the Schlenk flask. The resulting solution was then stirred for 90 hours. The aqueous phase formed during the reaction was then separated and extracted with 2x50 mL portions of diethyl ether. The combined organic fractions were dried over anhydrous sodium sulfate and concentrated on a rotary evaporator. The concentrate was then distilled at 66°C, 0.7 Torr (93 Pa); the distillate was a clear colorless liquid. 108.4 g were collected for 70% yield. Gas chromatographic/mass spectrometric (GC/MS) analysis of the distillate indicated a compound with an apparent purity of 99.9% having a mass spectrum consistent with N-sec-butyl-4-amino-3-penten-2-one.

An oven-dry 500-mL Schlenk flask was charged with 38.0 g of triethyloxonium tetrafluoroborate (0.2 mol) under argon atmosphere and fitted with an addition funnel. 200 mL of dichloromethane was added to form a clear colorless solution.
A 60 mL portion of dichloromethane and 31.05 grams of N-sec-butyl-4-amino-3-penten-2-one (0.2 mol) were charged into the addition funnel and this solution was added dropwise to the solution in the Schlenk flask, and the resulting solution was then stirred for 30 minutes. A solution of 20.2 mL sec-butylamine (0.2 mol) and 30 mL dichloromethane was charged into the addition funnel and added to the reaction solution, which was then stirred overnight. Volatiles were then removed in vacuo and the resulting yellow oily solid was washed with a 60 mL aliquot of cold ethyl acetate while the flask was placed in an ice bath. No solid precipitate was observed due to this wash; rather, part of the crude product appeared to dissolve. After decanting off the ethyl acetate wash, a second 60 mL ethyl acetate wash was attempted with identical results. Combined washes and crude product were added to a mixture of 500 mL benzene and 500 mL water containing 8.0 g sodium hydroxide (0.2 mol). The mixture was stirred for three minutes and then the organic phase was separated. The aqueous phase was extracted four times, each with 100 mL diethyl ether portions. All the organic phases were combined, dried over sodium sulfate, and concentrated on a rotary evaporator. The crude product was then distilled through a 20 cm glass-bead packed column and short path still head. The desired product was collected in >99% pure form at 60-63°C, 80 mTorr (10 Pa) pressure. The apparent purity was determined by GC/MS, where the only impurity observed was N-sec-butyl-4-amino-3-penten-2-one.

EXAMPLE 2: Synthesis and Characterization of a Metal-Containing Compound of Formula I, with M = Sr (n = 2); R<1> = R<5> = sec-butyl; R<2> = R<4> = methyl; R<3> = H; x = 2; and z = 0: Strontium bis(N-sec-butyl-(4-sec-butylimino)-2-penten-2-aminato).

In a dry box, a 500 mL Schlenk flask was charged with 7.765 g of strontium bis(hexamethyldisilazane) (19 mmol) and 50 mL toluene. A second Schlenk flask was charged with 8.000 g of N-sec-butyl-(4-sec-butylimino)-2-penten-2-amine (38 mmol) and 50 mL toluene. The ligand solution was added to the strontium solution, immediately producing an amber-colored reaction solution which was stirred for 18 hours. Volatiles were then removed in vacuo. The crude product, a brown liquid, was charged into a 50 mL round-bottom Schlenk flask fitted with a short path still head and Schlenk receiver flask in the dry box. The distillation apparatus was attached to a vacuum line and evacuated further, which induced some solidification in the still pot. At full vacuum, heating of the still pot was begun. A clear liquid (approximately 0.5 g) was collected at 60°C; GC/MS confirmed this material to be the ligand precursor. A second receiver flask was attached and the product was distilled at 145-160°C at full vacuum. The "cooling lines" to the still head were filled with 90°C ethylene glycol to prevent the condensing distillate from becoming too viscous and clogging the still path. The collected product formed a yellow, slightly oily solid upon cooling. 6.585 g were collected for 71.6% yield. Elemental analysis calculated for C26H50N4Sr: Sr, 17.3%. Found: 16.6%. The melting point of the distilled product was determined to be 44-48°C. <1>H and <13>C nuclear magnetic resonance (NMR) results were consistent with the presence of four diastereomeric forms of the compound (two enantiomeric pairs and two meso forms).
<1>H NMR (C6D6, δ): 4.190 (m, 2H, J=2.4, 2.4 Hz, β-C-H), 3.330 (m, 4H, J=6.3 Hz, N-CH(CH3)(CH2CH3)), 1.873 (d, 12H, J=2.4 Hz, α-C-CH3), 1.506 (m, 8H, J=1.4, 6.4 Hz, N-CH(CH3)(CH2CH3)), 1.253-1.220 (d, 4 sets overlapping, 6H, J=6.15-6.45 Hz, N-CH(CH3)(CH2CH3)), 1.188-1.162 (d, 4 sets overlapping, 6H, J=6.15-6.45 Hz, N-CH(CH3)(CH2CH3)), 0.970-0.897 (d, 4 sets overlapping, 12H, J=6.2 Hz, N-CH(CH3)(CH2CH3)). <13>C NMR (C6D6, δ): 161.294, 161.226 (α-C-CH3); 94.80, 86.96, 86.89, 86.70 (β-CH); 56.19, 56.00, 52.67, 52.58 (N-CH(CH3)(CH2CH3)); 33.61, 33.56, 32.13, 32.04 (N-CH(CH3)(CH2CH3)); 23.86, 23.78, 23.67 (α-C-CH3); 22.47, 22.39 (N-CH(CH3)(CH2CH3)); 11.63, 11.33, 10.81, 10.75 (N-CH(CH3)(CH2CH3)).

As illustrated in Figure 2, the metal-containing compound having the formula (Formula I) where R<1> = R<5> = sec-butyl (3 degrees of freedom for sec-butyl as quantified, for example, in Table 1) has a lower melting point (44-48°C) compared to disclosed melting points for the corresponding metal-containing compounds having the formula (Formula I) (see El-Kaderi et al., Organometallics, 23:4995-5002 (2004)) where R<1> = R<5> = isopropyl (87-89°C; 1 degree of freedom for isopropyl as quantified, for example, in Table 1), and where R<1> = R<5> = tert-butyl (127-129°C; 0 degrees of freedom for tert-butyl as quantified, for example, in Table 1).

The complete disclosures of the patents, patent documents, and publications cited herein are incorporated by reference in their entirety as if each were individually incorporated. Various modifications and alterations to this invention will become apparent to those skilled in the art without departing from the scope and spirit of this invention. It should be understood that this invention is not intended to be unduly limited by the illustrative embodiments and examples set forth herein and that such examples and embodiments are presented by way of example only, with the scope of the invention intended to be limited only by the claims set forth herein as follows.
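As a cross-check on two figures reported in the Examples above, the sketch below recomputes the Example 1 yield and the calculated strontium content of C26H50N4Sr from Example 2. The densities and atomic masses are standard handbook values assumed here; they are not given in the text.

```python
# Cross-checking two numbers from the Examples with standard atomic masses
# and handbook reagent data (assumed values, not given in the text).

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Sr": 87.62}

def molar_mass(counts):
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

# Example 1: 101 mL sec-butylamine (d = 0.724 g/mL, 73.14 g/mol) and
# 103 mL 2,4-pentanedione (d = 0.975 g/mL, 100.12 g/mol) are each ~1.0 mol;
# 108.4 g of C9H17NO product then matches the reported ~70% yield.
product_mm = molar_mass({"C": 9, "H": 17, "N": 1, "O": 1})        # ~155.2 g/mol
limiting_mol = min(101 * 0.724 / 73.14, 103 * 0.975 / 100.12)     # ~1.00 mol
print(f"Example 1 yield: {100 * (108.4 / product_mm) / limiting_mol:.0f}%")

# Example 2: calculated strontium content of C26H50N4Sr.
complex_mm = molar_mass({"C": 26, "H": 50, "N": 4, "Sr": 1})      # ~506.3 g/mol
print(f"Example 2 calc. Sr: {100 * ATOMIC_MASS['Sr'] / complex_mm:.1f}%")  # 17.3%
```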
Some embodiments include a method of forming an integrated structure. An assembly is formed to include a stack of alternating first and second levels. The first levels have insulative material, and the second levels have voids which extend horizontally. The assembly includes channel material structures extending through the stack. A first metal-containing material is deposited within the voids to partially fill the voids. The deposited first metal-containing material is etched to remove some of the first metal-containing material from within the partially-filled voids. Second metal-containing material is then deposited to fill the voids.
1.A method of forming an integrated structure, comprising:Forming an assembly to include a stack of alternating first and second layers; the first layer comprising an insulating material and the second layer comprising horizontally extending voids; the assembly comprising a channel material extending through the stack structure;Depositing a first metal-containing material within the void to partially fill the void;Etching the deposited first metal-containing material to remove some of the first metal-containing material from the partially filled void; andA second metal-containing material is deposited to fill the voids.2.The method of claim 1 wherein said first metal-containing material and said second metal-containing material are the same composition.3.The method of claim 1 wherein said first and second metal-containing materials are different compositions relative to each other.4.The method of claim 1 wherein said first and second metal-containing materials comprise one or more of tungsten, titanium, ruthenium, cobalt, nickel, and molybdenum.5.The method of claim 1 wherein said first and second metal-containing materials comprise tungsten.6.The method of claim 1 wherein said first metal-containing material comprises one or more of a metal nitride, a metal silicide, a metal carbide or a metal aluminum silicide; and wherein said second metal is substantially It consists of one or more of tungsten, titanium, tantalum, cobalt, nickel and molybdenum.7.The method of claim 6 wherein the metal in the first metal-containing material comprises one or both of tungsten and titanium.8.The method of claim 1 wherein said etching utilizes one or more of phosphoric acid, acetic acid, and nitric acid.9.The method of claim 1 wherein the filled voids comprise word line layers of a three dimensional NAND memory array.10.A method of forming an integrated structure, comprising:Forming an assembly to include a vertical stack of alternating first and second layers; the first layer being a horizontally extending insulating layer and comprising an insulating material; the second layer comprising a horizontal extension between the insulating layers a void; the assembly comprising a channel material structure extending through the stack; the horizontally extending voids being arranged around the channel material structure; the assembly comprising a slit extending through the stack; a horizontally extending gap leading to the slit;Depositing a first metal-containing material through the slit and into the horizontally extending void to partially fill the horizontally extending void;Removing some of the first metal-containing material from the horizontally extending voids in the region adjacent the slit; andA second metal-containing material is deposited to fill the horizontally extending voids.11.The method of claim 10 wherein said depositing of said first metal-containing material utilizes atomic layer deposition.12.The method of claim 10 wherein said first metal containing material and said second metal containing material are the same composition.13.The method of claim 10 wherein said first and second metal-containing materials are different compositions relative to each other.14.The method of claim 10 wherein said first and second metal-containing materials comprise one or more of tungsten, titanium, tantalum, cobalt, nickel, and molybdenum.15.The method of claim 10 wherein said removing of some of said first metal-containing material utilizes one or more of phosphoric acid, acetic acid, and nitric acid.16.The 
16.The method of claim 15 wherein said removing of some of said first metal-containing material is carried out at a temperature ranging from about 60°C to about 100°C.17.The method of claim 10 wherein said removing of some of said first metal-containing material utilizes a combination of phosphoric acid, acetic acid, and nitric acid.18.The method of claim 10 wherein the filled voids comprise word line layers of a three dimensional NAND memory array.19.A method of forming an integrated structure, comprising:Forming an assembly to include a vertical stack of alternating first and second layers; the first layers being horizontally extending insulating layers and comprising an insulating material; the second layers comprising voids between the insulating layers; the assembly comprising channel material structures extending through the stack; the voids having peripheral regions lined with a conductive seed material;Growing a first material along the conductive seed material to partially fill the voids;Etching the first material to remove some of the first material from within the voids; andGrowing a second material over the first material to fill the voids.20.The method of claim 19 wherein said first material and said second material are the same composition.21.The method of claim 19 wherein said first and second materials are different compositions relative to each other.22.The method of claim 19 wherein said first and second materials comprise one or more of tungsten, titanium, tantalum, cobalt, nickel, and molybdenum.23.The method of claim 19 wherein said first and second materials comprise tungsten.24.The method of claim 19 wherein said etching of said first material utilizes one or more of phosphoric acid, acetic acid, and nitric acid.25.The method of claim 24 wherein said etching of said first material is performed at a temperature ranging from about 60°C to about 100°C.26.The method of claim 19 wherein said etching of said first material utilizes a combination of phosphoric acid, acetic acid, and nitric acid.27.The method of claim 19 wherein the filled voids comprise word line layers of a three dimensional NAND memory array.
Method of forming an integrated structure
Technical field
A method of filling horizontally extending openings of an integrated assembly.
Background
Memory provides data storage for electronic systems. Flash memory is one type of memory, and is used extensively in modern computers and devices. For example, modern personal computers may store the BIOS on a flash memory chip. As another example, it is increasingly common for computers and other devices to utilize flash memory in the form of solid state drives in place of conventional hard disk drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and provides the ability to remotely upgrade the devices with enhanced features.
NAND can be a basic architecture of flash memory, and can be configured to include vertically stacked memory cells.
Before describing NAND specifically, it may be helpful to describe the relationship of a memory array within an integrated arrangement more generally. FIG. 1 shows a block diagram of a prior-art device 100 which includes a memory array 102 having a plurality of memory cells 103 arranged in rows and columns, together with access lines 104 (e.g., word lines for conducting signals WL0 through WLm) and first data lines 106 (e.g., bit lines for conducting signals BL0 through BLn). The access lines 104 and the first data lines 106 can be used to transfer information to and from the memory cells 103. Row decoder 107 and column decoder 108 decode address signals A0 through AX on address lines 109 to determine which of the memory cells 103 are to be accessed. Sense amplifier circuit 115 operates to determine the values of information read from the memory cells 103. I/O circuit 117 transfers values of information between the memory array 102 and input/output (I/O) lines 105. Signals DQ0 through DQN on the I/O lines 105 can represent values of information read from, or to be written to, the memory cells 103. Other devices can communicate with device 100 through the I/O lines 105, the address lines 109, or the control lines 120. Memory control unit 118 controls memory operations to be performed on the memory cells 103 utilizing signals on the control lines 120. The device 100 can receive supply voltage signals Vcc and Vss on a first supply line 130 and a second supply line 132, respectively. The device 100 includes a select circuit 140 and the input/output (I/O) circuit 117. The select circuit 140 can respond, via the I/O circuit 117, to signals CSEL1 through CSELn to select signals on the first data lines 106 and the second data lines 113 that can represent values of information to be read from, or to be programmed into, the memory cells 103. The column decoder 108 can selectively activate the CSEL1 through CSELn signals based on the A0 through AX address signals on the address lines 109. The select circuit 140 can select the signals on the first data lines 106 and the second data lines 113 to provide communication between the memory array 102 and the I/O circuit 117 during read and programming operations.
The memory array 102 of FIG. 1 can be a NAND memory array, and FIG. 2 shows a block diagram of a three-dimensional NAND memory device 200 which can be used for the memory array 102 of FIG. 1. The device 200 includes a plurality of strings of charge storage devices.
In a first direction (Z-Z'), each string of charge storage devices can comprise, for example, thirty-two charge storage devices stacked over one another, with each charge storage device corresponding to one of, for example, thirty-two tiers (e.g., Tier0 through Tier31). The charge storage devices of a respective string can share a common channel region, such as one formed in a respective pillar of semiconductor material (e.g., polysilicon) about which the string of charge storage devices is formed. In a second direction (X-X'), each of, for example, sixteen first groups of the plurality of strings can comprise, for example, eight strings sharing a plurality (e.g., thirty-two) of access lines (i.e., "global control gate (CG) lines", also known as word lines, WLs). Each of the access lines can couple the charge storage devices within a tier. When each charge storage device comprises a cell capable of storing two bits of information, the charge storage devices coupled by the same access line (and thus corresponding to the same tier) can be logically grouped into, for example, two pages, such as P0/P32, P1/P33, P2/P34, etc. In a third direction (Y-Y'), each of, for example, eight second groups of the plurality of strings can comprise sixteen strings coupled by a corresponding one of eight data lines. The size of a memory block can comprise 1,024 pages and total about 16 MB (e.g., 16 word lines x 32 tiers x 2 bits = 1,024 pages/block; block size = 1,024 pages x 16 KB/page = 16 MB). The number of strings, tiers, access lines, data lines, first groups, second groups, and/or pages can be greater or smaller than those shown in FIG. 2.
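For reference, the block-size arithmetic given in the example above can be checked with a short calculation. The sketch below is illustrative only and is not part of the disclosure; the variable names are chosen for this example.

```python
# Illustrative check of the memory-block arithmetic described above.
# All quantities come from the example in the text; the names are ours.
word_lines = 16          # access lines (word lines) in the example block
tiers = 32               # stacked charge storage devices per string
bits_per_cell = 2        # two bits per cell -> two pages per tier

pages_per_block = word_lines * tiers * bits_per_cell    # 1,024 pages/block
page_size_kb = 16                                       # 16 KB per page
block_size_mb = pages_per_block * page_size_kb / 1024   # 16 MB

print(pages_per_block, "pages/block")   # 1024 pages/block
print(block_size_mb, "MB")              # 16.0 MB
```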
FIG. 3 shows a cross-sectional view, in the X-X' direction, of a memory block 300 of the 3D NAND memory device 200 of FIG. 2, including fifteen strings of charge storage devices of one of the sixteen first groups of strings described with respect to FIG. 2. The plurality of strings of the memory block 300 can be grouped into a plurality of subsets 310, 320, 330 (e.g., tile columns), such as tile column I, tile column j and tile column K, with each subset (e.g., tile column) comprising a "partial block" of the memory block 300.
A global drain-side select gate (SGD) line 340 can be coupled to the SGDs of the plurality of strings. For example, the global SGD line 340 can be coupled to a plurality (e.g., three) of sub-SGD lines 342, 344, 346 via a corresponding one of a plurality (e.g., three) of sub-SGD drivers 332, 334, 336, with each sub-SGD line corresponding to a respective subset (e.g., tile column). Each of the sub-SGD drivers 332, 334, 336 can concurrently couple or cut off the SGDs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks.
A global source-side select gate (SGS) line 360 can be coupled to the SGSs of the plurality of strings. For example, the global SGS line 360 can be coupled to a plurality of sub-SGS lines 362, 364, 366 via a corresponding one of a plurality of sub-SGS drivers 322, 324, 326, with each sub-SGS line corresponding to a respective subset (e.g., tile column). Each of the sub-SGS drivers 322, 324, 326 can concurrently couple or cut off the SGSs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks.
A global access line (e.g., global CG line) 350 can couple the charge storage devices corresponding to a respective tier of each of the plurality of strings. Each global CG line (e.g., global CG line 350) can be coupled to a plurality of sub-access lines (e.g., sub-CG lines) 352, 354, 356 via a corresponding one of a plurality of sub-string drivers 312, 314 and 316. Each of the sub-string drivers can concurrently couple or cut off the charge storage devices corresponding to a respective partial block and/or tier independently of those of other partial blocks and/or other tiers. The charge storage devices corresponding to a respective subset (e.g., partial block) and a respective tier can comprise a "partial tier" (e.g., a single "tile") of charge storage devices. The strings corresponding to a respective subset (e.g., partial block) can be coupled to a corresponding one of sub-sources 372, 374 and 376 (e.g., "tile source"), with each sub-source being coupled to a respective power source.
The NAND memory device 200 is alternatively described with reference to the schematic illustration of FIG. 4.
The memory array 200 includes word lines 202-1 through 202-N, and bit lines 228-1 through 228-M. The memory array 200 also includes NAND strings 206-1 through 206-M, each of which includes charge storage transistors 208-1 through 208-N. The charge storage transistors can use floating gate material (e.g., polysilicon) to store charge, or can use charge trapping material (e.g., silicon nitride, metal nanodots, etc.) to store charge.
The charge storage transistors 208 are located at intersections of the word lines 202 and the strings 206, and represent non-volatile memory cells for storage of data. The charge storage transistors 208 of each NAND string 206 are connected in series, source to drain, between a source select device (e.g., source-side select gate, SGS) 210 and a drain select device (e.g., drain-side select gate, SGD) 212. Each source select device 210 is located at an intersection of a string 206 and a source select line 214, while each drain select device 212 is located at an intersection of a string 206 and a drain select line 215. The select devices 210 and 212 can be any suitable access devices, and are illustrated generically by the boxes in FIG. 4.
The source of each source select device 210 is coupled to a common source line 216. The drain of each source select device 210 is coupled to the source of the first charge storage transistor 208 of the corresponding NAND string 206. For example, the drain of source select device 210-1 is coupled to the source of charge storage transistor 208-1 of the corresponding NAND string 206-1. The source select devices 210 are coupled to the source select line 214.
The drain of each drain select device 212 is coupled to a bit line (i.e., digit line) 228 at a drain contact. For example, the drain of drain select device 212-1 is coupled to bit line 228-1. The source of each drain select device 212 is coupled to the drain of the last charge storage transistor 208 of the corresponding NAND string 206. For example, the source of drain select device 212-1 is coupled to the drain of charge storage transistor 208-N of the corresponding NAND string 206-1.
The charge storage transistors 208 include a source 230, a drain 232, a charge storage region 234 and a control gate 236, with the control gates 236 being coupled to the word lines 202. A column of the charge storage transistors 208 comprises those transistors coupled into a NAND string 206 for a given bit line 228.
A row of the charge storage transistors 208 comprises those transistors commonly coupled to a given word line 202.
Three-dimensional integrated structures (e.g., three-dimensional NAND) can have vertically stacked word line layers. It can be difficult to uniformly deposit conductive material within the word line layers. It would be desirable to develop methods for providing conductive material within word line layers.
Summary of the invention
In one aspect, the present disclosure is directed to a method of forming an integrated structure, comprising: forming an assembly to include a stack of alternating first and second layers, the first layers comprising an insulating material, the second layers comprising horizontally extending voids, and the assembly comprising channel material structures extending through the stack; depositing a first metal-containing material within the voids to partially fill the voids; etching the deposited first metal-containing material to remove some of the first metal-containing material from the partially filled voids; and depositing a second metal-containing material to fill the voids.
In another aspect, the present disclosure is directed to a method of forming an integrated structure, comprising: forming an assembly to include a vertical stack of alternating first and second layers, the first layers being horizontally extending insulating layers and comprising an insulating material, the second layers comprising horizontally extending voids between the insulating layers, the assembly comprising channel material structures extending through the stack, the horizontally extending voids being arranged around the channel material structures, the assembly comprising a slit extending through the stack, and the horizontally extending voids opening to the slit; depositing a first metal-containing material through the slit and into the horizontally extending voids to partially fill the horizontally extending voids; removing some of the first metal-containing material from the horizontally extending voids in regions adjacent the slit; and depositing a second metal-containing material to fill the horizontally extending voids.
In yet another aspect, the present disclosure is directed to a method of forming an integrated structure, comprising: forming an assembly to include a vertical stack of alternating first and second layers, the first layers being horizontally extending insulating layers and comprising an insulating material, the second layers comprising voids between the insulating layers, the assembly comprising channel material structures extending through the stack, and the voids having peripheral regions lined with a conductive seed material; growing a first material along the conductive seed material to partially fill the voids; etching the first material to remove some of the first material from within the voids; and growing a second material over the first material to fill the voids.
DRAWINGS
FIG. 1 shows a block diagram of a prior-art memory device having a memory array with memory cells.
FIG. 2 shows a schematic diagram of the prior-art memory array of FIG. 1 in the form of a 3D NAND memory device.
FIG. 3 shows a cross-sectional view of the prior-art 3D NAND memory device of FIG. 2 in the X-X' direction.
FIG. 4 is a schematic diagram of a prior-art NAND memory array.
FIGS. 5 and 6 are diagrammatic cross-sectional views of an example assembly at process stages of an example method for fabricating example stacked memory cells. FIG. 5A is a top plan view of the assembly of FIG. 5; the cross-section of FIG. 5 is along line 5-5 of FIG. 5A, and the view of FIG. 5A is along line 5A-5A of FIG. 5.
FIG. 7 is a diagrammatic cross-sectional view of a prior-art assembly at a process stage subsequent to that of FIG. 6.
FIGS. 8 through 13 are diagrammatic cross-sectional views of the example assembly at process stages subsequent to that of FIG. 6, relative to the example method for fabricating example stacked memory cells. FIG. 13A is a top plan view of the assembly of FIG. 13; the cross-section of FIG. 13 is along line 13-13 of FIG. 13A, and the view of FIG. 13A is along line 13A-13A of FIG. 13.
Detailed description
Some embodiments include new methods of depositing conductive word line material within assemblies comprising vertically stacked memory cells (e.g., three-dimensional NAND memory arrays). Some embodiments include new structures formed utilizing the new methods described herein. Example embodiments are described with reference to FIGS. 5, 6 and 8 through 13. FIG. 7 is provided to illustrate a prior-art process stage for comparison with methodology of the present disclosure.
Referring to FIGS. 5 and 5A, a construction 10 (which may also be referred to as an integrated assembly, or as an integrated structure) includes a stack 14 of alternating first and second layers 16 and 18.
The first layers 16 comprise insulating material 17, and the second layers 18 comprise voids 19. The layers 16 and 18 can be of any suitable thicknesses, and the layers 16 can be of a different thickness than, or the same thickness as, the layers 18.
The insulating material 17 can comprise any suitable composition or combination of compositions; and in some embodiments can comprise, consist essentially of, or consist of silicon dioxide.
Eventually, conductive word lines (discussed below) are formed within the second layers 18, and such word lines comprise control gates of memory cells. In some embodiments, the layers 18 can be referred to as memory cell layers of a NAND configuration. The NAND configuration can include strings of memory cells (so-called NAND strings), with the number of memory cells in a string being determined by the number of memory cell layers 18. The NAND strings can comprise any suitable number of memory cell layers; for example, a NAND string can have 8 memory cell layers, 16 memory cell layers, 32 memory cell layers, 64 memory cell layers, 512 memory cell layers, 1024 memory cell layers, etc.
Structures 20a-o extend through the stack 14. The structures 20a-o may be referred to as channel material structures in that they comprise channel material 22. The channel material 22 comprises semiconductor material, and can comprise any suitable composition or combination of compositions; for example, it can comprise one or more of silicon, germanium, III/V semiconductor materials (e.g., gallium phosphide), semiconductor oxides, etc.
Tunneling material (sometimes referred to as gate dielectric) 24, charge storage material 26, and charge blocking material 28 are between the channel material 22 and the vertically stacked layers 16/18.
The tunneling material, the charge storage material, and the charge blocking material can comprise any suitable compositions or combinations of compositions.
In some embodiments, the tunneling material 24 can comprise, for example, one or more of silicon dioxide, aluminum oxide, hafnium oxide, zirconium oxide, etc.
In some embodiments, the charge storage material 26 can comprise charge trapping material, such as silicon nitride, silicon oxynitride, conductive nanodots, etc. In alternative embodiments (not shown), the charge storage material 26 can be configured as floating gate material (e.g., polysilicon).
In some embodiments, the charge blocking material 28 can comprise one or more of silicon dioxide, aluminum oxide, hafnium oxide, zirconium oxide, etc.
In the illustrated embodiment, the channel material 22 is configured as annular rings within each of the structures 20a-o, with insulating material 30 filling such annular rings. The insulating material 30 can comprise any suitable composition or combination of compositions, such as silicon dioxide. The illustrated structures 20a-o can be considered to comprise hollow channel configurations, in that the insulating material 30 is provided within "hollows" of the annular ring-shaped channel configurations. In other embodiments (not shown), the channel material can be configured as solid pillars.
The channel material structures 20a-o can be considered to comprise all of the materials 22, 24, 26, 28 and 30 in combination. The top view of FIG. 5A shows that the channel material structures 20a-o can be arranged in a hexagonally packed pattern.
A slit 32 extends through the stack 14. The slit 32 enables access to all of the voids 19 so that such voids can be filled with conductive material during subsequent processing (described below). In some embodiments, the voids 19 can be considered to open to the slit 32.
The voids 19 are arranged around the channel material structures 20a-o; accordingly, all of the voids 19 along the cross-sectional view of FIG. 5 can be fully accessed through the illustrated slit 32. The plane of the cross-section of FIG. 5 cuts through the channel material structures 20g-i. The channel material structures 20d-f are out of the plane of the cross-section of FIG. 5 (specifically, they are within a row of the structures 20 immediately behind such plane), but are shown in FIG. 5 to assist in illustrating that the voids 19 are arranged between the channel material structures. Some regions of the channel material structures 20d-f are behind the material 17 and are shown in phantom (i.e., dashed-line) view, while other regions of the channel material structures 20d-f are visible through the voids 19 and are shown in solid line.
The voids 19 can be considered to comprise peripheral regions 21 (which may also be referred to as edges or boundaries).
In some embodiments, the stack 14 can be considered to be a vertically extending stack, and the insulating layers 16 can be considered to extend horizontally. The voids 19 can be considered to be horizontally extending voids that are vertically between the horizontally extending insulating layers.
The stack 14 is over a supporting substrate 12. The substrate 12 can comprise semiconductor material; and can, for example, comprise, consist essentially of, or consist of monocrystalline silicon. The substrate 12 can be referred to as a semiconductor substrate.
The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductor wafer (either alone or in assemblies comprising other materials), and layers of semiconductive material (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. In some applications, the substrate 12 can correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials can include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.
A gap is shown between the substrate 12 and the stack 14 to diagrammatically indicate that there can be one or more additional materials, components, etc., provided between the substrate 12 and the stack 14. Such additional components can include, for example, conductive power lines, select gates, etc.
The stack 14 of FIG. 5 can be formed with any suitable processing. An example process can include initially forming a sacrificial material within the layers 18 (such sacrificial material can be silicon nitride in some example embodiments), and then removing the sacrificial material after forming the channel material structures 20 and the slit 32 to leave the construction of FIG. 5.
Referring to FIG. 6, insulating material 34 is deposited along the peripheral boundaries 21 of the voids 19. The material 34 can comprise additional charge blocking material, and can comprise any suitable composition or combination of compositions. In some embodiments, the material 34 can comprise high-k material (e.g., one or more of aluminum oxide, hafnium oxide, zirconium oxide, etc.), where the term "high-k" means a dielectric constant greater than that of silicon dioxide. Although the insulating material 34 is illustrated as a single homogeneous material, in other embodiments the insulating material can comprise two or more discrete compositions. For example, in some embodiments the insulating material 34 can comprise a laminate of silicon dioxide and one or more high-k materials. In some embodiments, the material 34 can be considered to form liners within the voids 19. In some embodiments, the charge blocking material 28 can be omitted, with the material 34 being the only charge blocking material provided within the assembly of construction 10.
The insulating material 34 can be formed by flowing suitable precursors through the slit 32 during a deposition process (e.g., an atomic layer deposition process, a chemical vapor deposition process, etc.).
A conductive liner material (seed material) 38 is provided over the insulating material 34. In some embodiments, the conductive liner material 38 can comprise metal nitride, such as tungsten nitride, titanium nitride, etc.
In subsequent processing, conductive material is provided to fill the voids 19 and form conductive word lines. However, it has proven difficult to uniformly fill the voids utilizing conventional processing. FIG. 7 illustrates construction 10 at a process stage of a prior-art process, and illustrates a problem encountered in attempting to fill the voids 19 with conductive material 36 utilizing conventional processing.
Specifically, the conductive material 36 can pinch off the voids along regions adjacent the slit 32 before the voids are completely filled with the conductive material, such that some regions of the voids 19 are not uniformly filled. This can problematically reduce conductance along the conductive word lines (i.e., increase resistance), can lead to excessive power consumption by memory fabricated within the assembly of construction 10, can lead to excessive heat generation during use of memory fabricated within the assembly of construction 10, and can even lead to device failure.
Some embodiments include methods which can be utilized to deposit conductive material within the voids 19 more uniformly than can be accomplished utilizing conventional methods.
Referring to FIG. 8, construction 10 is illustrated at a process stage subsequent to that of FIG. 6, in accordance with an example embodiment. First material 40 is deposited within the voids 19 under conditions which only partially fill the voids. The first material 40 can be conductive material, and can be referred to as first conductive material. The first conductive material 40 can comprise any suitable electrically conductive composition, such as one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, rhodium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, metal aluminum silicide, etc.), and/or conductively doped semiconductor materials (e.g., conductively doped silicon, conductively doped germanium, etc.). In some embodiments, the first conductive material 40 can be a metal-containing material, and can be referred to as a first metal-containing material. In some embodiments, the material 40 can comprise, consist essentially of, or consist of one or more metals selected from the group consisting of tungsten, titanium, tantalum, nickel, molybdenum and cobalt. In some embodiments, the material 40 can comprise one or more of tungsten, titanium, tantalum, nickel, molybdenum and cobalt, and can further comprise one or more of nitrogen, aluminum, silicon, oxygen, carbon and helium. In some embodiments, the material 40 can comprise, consist essentially of, or consist of metal nitride (e.g., one or more of tungsten nitride, titanium nitride, etc.).
The first material 40 can be formed with any suitable processing. For example, in some embodiments the first material 40 can be deposited utilizing one or both of atomic layer deposition (ALD) and chemical vapor deposition (CVD), by flowing suitable precursors through the slit 32 and into the voids 19. In some embodiments, it has been found to be particularly beneficial to utilize ALD during formation of the first material 40 within the voids 19, in that ALD has been found to form the material 40 as a desired substantially uniform layer throughout the voids 19. In some embodiments, the material 40 can be considered to grow along the exposed surfaces of the conductive seed material 38.
Referring to FIG. 9, some of the first material 40 is removed from the slit 32 and from within the partially filled voids 19 with etching.
Thus, to the extent that the material 40 has begun to pinch off the voids in the problematic manner described above with reference to prior-art FIG. 7, the material 40 is removed from the regions of the voids where such pinch-off is most likely to occur (i.e., from the regions of the voids adjacent the slit 32), which alleviates the problem.
The etching of the material 40 can utilize any suitable chemistries and conditions. In some embodiments, the material 40 is a metal-containing material comprising one or more of tungsten, titanium, tantalum, cobalt, nickel and molybdenum; the etching conditions can utilize one or more of phosphoric acid, acetic acid and nitric acid, and can be conducted while the etchant is at a temperature within a range of from about 60°C to about 100°C. The etching can be conducted at atmospheric pressure, or at any other suitable pressure. The etching can be conducted for a suitable duration to remove a desired amount of the material 40, and such duration can be related to the specific configuration of the assembly 10, the specific dimensions of the voids 19, the composition of the material 40, etc. Persons of ordinary skill in the art can determine the appropriate duration for a specific assembly.
Referring to FIG. 10, material 42 is deposited to fill the voids 19. In some embodiments, the voids 19 can be considered to be partially filled with the material 40 after the etching of FIG. 9, with remainders of the voids being unfilled; the material 42 can be considered to fill such remainders of the voids 19.
The material 42 can be conductive material, and can be referred to as second conductive material. The second conductive material 42 can comprise any suitable composition, such as one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, rhodium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, metal aluminum silicide, etc.), and/or conductively doped semiconductor materials (e.g., conductively doped silicon, conductively doped germanium, etc.). In some embodiments, the second conductive material 42 can be a metal-containing material, and can be referred to as a second metal-containing material. In some embodiments, the material 42 can comprise, consist essentially of, or consist of one or more metals selected from the group consisting of tungsten, titanium, tantalum, nickel, molybdenum and cobalt. In some embodiments, the material 42 can comprise one or more of tungsten, titanium, tantalum, nickel, molybdenum and cobalt, and can further comprise one or more of nitrogen, aluminum, silicon, oxygen, carbon and ruthenium. In some embodiments, the material 42 can comprise, consist essentially of, or consist of metal nitride (e.g., one or more of tungsten nitride, titanium nitride, etc.).
In some embodiments, the first material 40 and the second material 42 can comprise the same composition as one another, and in other embodiments they can comprise different compositions relative to one another. For example, in some embodiments both the first material 40 and the second material 42 can comprise, consist essentially of, or consist of tungsten.
In some embodiments, the first material 40 can comprise, consist essentially of, or consist of one or more of titanium nitride, tungsten nitride and titanium aluminum silicide; and the second material 42 can comprise, consist essentially of, or consist of tungsten.
The second material 42 can be deposited under any suitable conditions. In some embodiments, the second material 42 can be considered to grow over the first material 40. In some embodiments, the second material 42 can be deposited utilizing one or more of ALD, CVD and physical vapor deposition (PVD).
In some embodiments, the first material 40 can be deposited utilizing a first process selected from the group consisting of ALD, CVD and PVD; and the second material 42 can be deposited utilizing a second process selected from the group consisting of ALD, CVD and PVD. The first and second processes can be the same as one another, or can be different relative to one another.
FIG. 10 diagrammatically illustrates the second material 42 as being different from the first material 40 to emphasize that the first material 40 and the second material 42 can differ from one another in some embodiments. FIG. 11 shows construction 10 at a process stage equivalent to that of FIG. 10, but shows the materials 40 and 42 merged into a single material 40/42. The materials 40 and 42 can merge into a single material when they comprise the same composition as one another. The configuration of FIG. 11, comprising the merged material 40/42, will serve as the basis for the remaining figures of this disclosure (FIGS. 12 and 13) to simplify the drawings as compared to the configuration of FIG. 10 having the separate materials 40 and 42. However, it is to be understood that the process stages of FIGS. 12 and 13 can also be applied relative to applications in which the materials 40 and 42 differ from one another.
The process sequence of FIGS. 8-11 describes a deposition-etch-deposition sequence, in which the first material 40 is deposited, then etched, and then the second material 42 is deposited. In other embodiments, such a sequence may be a single iteration of a process which utilizes two or more of such iterations in sequence. For example, other embodiments can utilize a deposition-etch-deposition-etch-deposition process, a deposition-etch-deposition-etch-deposition-etch-deposition process, etc.
Referring to FIG. 12, the conductive material 40/42 is removed from within the slit 32 with one or more suitable etchants. The remaining conductive material 40/42 forms conductive word lines 64 along the second layers 18.
Referring to FIGS. 13 and 13A, the slit 32 is filled with insulating material 68. The insulating material 68 within the slit is configured as a panel 70 which extends longitudinally along an illustrated axis (as shown in the top view of FIG. 13A).
The word lines 64 comprise gate regions 72 adjacent the channel material structures 20g-i along the plane of FIG. 13; and such gate regions, together with the materials within the channel material structures, form a plurality of vertically stacked memory cells 74 along the plane of FIG. 13 (other memory cells and gate regions are along the channel material structures 20d-f of FIG. 13, but such other memory cells and gate regions are out of the plane of FIG. 13 and are not shown in FIG. 13). The memory cells 74 can be NAND memory cells of a three-dimensional NAND memory array.
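The deposition-etch-deposition sequence described above, together with its multi-iteration variants, can be made explicit with a short sketch. The sketch below is illustrative only and not part of the disclosure; the step names and the function are invented for this example.

```python
# Illustrative sketch only: represent the deposition-etch-deposition
# sequence of FIGS. 8-11, and its multi-iteration variants, as step lists.
# The step names ("deposit", "etch") are ours, not the patent's terminology.

def fill_sequence(iterations: int) -> list:
    """Return the ordered process steps for a fill that uses the given
    number of deposition-etch iterations before the final deposition."""
    steps = []
    for _ in range(iterations):
        steps.append("deposit")  # partially fill the voids (e.g., ALD)
        steps.append("etch")     # pull material back from regions near the slit
    steps.append("deposit")      # final deposition fills the voids
    return steps

print(fill_sequence(1))  # ['deposit', 'etch', 'deposit']
print(fill_sequence(2))  # deposition-etch-deposition-etch-deposition
```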
In some embodiments, the filled voids 19 can be considered to correspond to word line layers of a three-dimensional NAND memory array.
In some embodiments, the insulating panels 70 can be utilized to subdivide a memory array into blocks, or at least into partial blocks (where a "block" corresponds to a collection of memory cells which are erased simultaneously in a block erase operation).
The assemblies discussed above can be incorporated into electronic systems. Such electronic systems can be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules and application-specific modules, and can include multilayer, multichip modules. The electronic systems can be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chipsets, set-top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.
Unless specified otherwise, the various materials, substances, compositions, etc. described herein can be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.
The terms "dielectric" and "insulating" can both be utilized to describe materials having insulative electrical properties, and are considered synonymous in this disclosure. The utilization of the term "dielectric" in some instances and the term "insulating" (or "electrically insulating") in other instances can provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.
The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments can be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures which have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings or are rotated relative to such orientation.
Unless specified otherwise, the cross-sectional views of the accompanying illustrations show only features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, in order to simplify the drawings.
When a structure is referred to as being "on" or "against" another structure, it can be directly on the other structure, or intervening structures can also be present. In contrast, when a structure is referred to as being "directly on" or "directly against" another structure, there are no intervening structures present.
Structures (e.g., layers, materials, etc.) can be referred to as "extending vertically" to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically extending structures can extend substantially orthogonally relative to an upper surface of the base, or not.
Some embodiments include a method of forming an integrated structure. An assembly is formed to include a stack of alternating first and second layers. The first layers have an insulating material, and the second layers have horizontally extending voids. The assembly includes channel material structures extending through the stack.
A first metal-containing material is deposited within the voids to partially fill the voids. The deposited first metal-containing material is etched to remove some of the first metal-containing material from the partially filled voids. A second metal-containing material is deposited to fill the voids.
Some embodiments include a method of forming an integrated structure. An assembly is formed to include a vertical stack of alternating first and second layers. The first layers are horizontally extending insulating layers and include an insulating material. The second layers include horizontally extending voids between the insulating layers. The assembly includes channel material structures extending through the stack. The horizontally extending voids are arranged around the channel material structures. The assembly includes a slit extending through the stack. The horizontally extending voids open to the slit. A first metal-containing material is deposited through the slit and into the horizontally extending voids to partially fill the horizontally extending voids. Some of the first metal-containing material is removed from the horizontally extending voids in regions adjacent the slit. A second metal-containing material is deposited to fill the horizontally extending voids.
Some embodiments include a method of forming an integrated structure. An assembly is formed to include a vertical stack of alternating first and second layers. The first layers are horizontally extending insulating layers and include an insulating material. The second layers include voids between the insulating layers. The assembly includes channel material structures extending through the stack. The voids have peripheral regions lined with a conductive seed material. A first material is grown along the conductive seed material to partially fill the voids. The first material is etched to remove some of the first material from within the voids. A second material is grown over the first material to fill the voids.
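To make the rationale concrete, the following toy model contrasts a single long deposition with a deposition-etch-deposition sequence along a one-dimensional slice of a void. It is a sketch only, not part of the disclosure: the geometry, growth taper, etch reach and all numbers are invented for illustration.

```python
# Illustrative toy model, not part of the disclosure: compare a single long
# deposition with a deposition-etch-deposition sequence along a 1-D slice of
# a void. Index 0 is nearest the slit; all numbers are invented.

HALF_GAP = 10.0  # half-height of the void; a point is "closed" at this value

def deposit(profile, amount, taper=0.05):
    # Film grows slightly faster near the slit, where precursors enter,
    # so a long single deposition tends to pinch off the entrance first.
    return [min(HALF_GAP, t + amount * (1 + taper * (len(profile) - i)))
            for i, t in enumerate(profile)]

def etch_back(profile, amount, reach=4):
    # A wet etch acting through the slit removes material mainly from the
    # regions adjacent the slit (where pinch-off is most likely to occur).
    return [max(0.0, t - amount) if i < reach else t
            for i, t in enumerate(profile)]

n = 12
single = deposit([0.0] * n, 9.0)                             # one long deposition
ded = deposit(etch_back(deposit([0.0] * n, 6.0), 3.0), 6.0)  # dep-etch-dep

# The single deposition closes the entrance (10.0) while the interior is
# still unfilled; the dep-etch-dep sequence fills the slice uniformly.
print("single deposition:", [round(t, 1) for t in single])
print("dep-etch-dep:     ", [round(t, 1) for t in ded])
```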
Described are methods of adapting FIB techniques to copper metallization, and structures that result from the application of such techniques. A method in accordance with the invention can be used to sever copper traces without damaging adjacent material or creating conductive bridges to adjacent traces.
Semiconductor devices that employ copper traces typically include a protective passivation layer that protects the copper. This passivation layer is removed to render the copper traces visible to an FIB operator. The copper surface is then oxidized, as by heating the device in air, to form a copper-oxide layer on the exposed copper. With the copper-oxide layer in place, an FIB is used to mill through the copper-oxide and copper layers of a selected copper trace to sever the trace. The copper-oxide layer protects copper surfaces away from the mill site from reactive chemicals used during the milling process. In one embodiment, a copper-oxide layer at least 40 nanometers thick affords adequate protection.
What is claimed is: 1. A method of separating a conductive element into first and second conductive portions, wherein the conductive element includes copper and is disposed between an upper insulating layer and a lower insulating layer, the method comprising:a. removing the upper insulating layer to expose a surface of the conductive element; b. oxidizing the exposed surface of the conductive element to form a copper-oxide layer over the exposed surface; c. focusing an ion beam between the first and second conductive portions of the conductive element until the first conductive portion is electrically isolated from the second conductive portion. 2. The method of claim 1, wherein oxidizing the exposed surface of the conductive element comprises heating the conductive element.3. The method of claim 2, wherein the conductive element is exposed to air during the heating.4. The method of claim 2, wherein the conductive element is maintained above 100 degrees Celsius during the heating.5. The method of claim 4, wherein the conductive element is maintained at about 300 degrees Celsius during the heating.6. The method of claim 5, wherein the conductive element is maintained at about 300 degrees Celsius for more than 10 minutes.7. The method of claim 6, wherein the conductive element is maintained at about 300 degrees Celsius for about 60 minutes.8. The method of claim 2, wherein the conductive element is heated for more than 10 minutes.9. The method of claim 1, further comprising scanning the surface of the conductive element to locate the conductive element before focusing the ion beam between the first and second conductive portions.10. The method of claim 9, further comprising scanning the surface of the conductive element a second time to determine whether the first and second conductive portions are electrically isolated.11. The method of claim 1, wherein the conductive element is disposed between first and second insulating walls extending from the lower insulating layer, and wherein removing the upper insulating layer to expose the surface of the conductive element leaves at least a portion of the first and second insulating walls.12. The method of claim 11, wherein the first and second insulating walls are formed by etching a channel in the lower insulating layer.13. The method of claim 1, wherein the upper insulating layer comprises at least one of silicon nitride or silicon dioxide.14. The method of claim 1, wherein the lower insulating layer comprises at least one of silicon nitride and silicon dioxide.15. The method of claim 1, wherein the ion beam comprises gallium.16. The method of claim 1, wherein the ion beam is focused in the presence of at least one of bromine, iodine, or chlorine to electrically isolate the first conductive portion from the second conductive portion.17. A semiconductor device structure comprising:a. an insulating layer having a concavity; b. a conductive element comprised of copper and disposed in the concavity of the insulating layer; c. an oxide layer comprised of copper and disposed on the conductive element; d. wherein the oxide layer is at least 40 nanometers in a dimension normal to the surface of the insulating layer. 18. The structure of claim 17, further comprising an isolation cut extending through the conductive element and into the insulating layer below the conductive element.19. The structure of claim 18, wherein the isolation cut is formed by ion milling.20. 
The structure of claim 17, wherein the insulating layer comprises silicon dioxide and silicon nitride.21. The structure of claim 20, wherein the silicon nitride forms a first stratum within the insulating layer and the silicon dioxide forms a second stratum within the insulating layer.22. The structure of claim 17, wherein at least a portion of the conductive element extends through the insulating layer and into an underlying conductive layer.23. The structure of claim 17, wherein the conductive element comprises a seed layer disposed between the copper and the insulating layer.24. The structure of claim 23, wherein the seed layer comprises a tantalum alloy.25. The method of claim 1, wherein the upper insulating layer comprises a passivation layer.26. The method of claim 1, wherein the copper-oxide layer is at least 40 nanometers thick.27. The semiconductor device of claim 17, wherein the oxide layer consists essentially of copper and oxygen.
FIELD OF THE INVENTION
The present invention relates to focused ion beam (FIB) methods used, for example, in failure analysis of Very Large Scale Integrated (VLSI) circuit devices. In particular, the present invention relates to methods and systems in which an FIB is used to mill copper conductors within integrated circuits.
BACKGROUND
A focused ion beam (FIB) system focuses ions into a beam and scans the beam across small areas of a sample. The beam interacts with the sample to produce secondary electrons that are then collected to produce an image of the sample. Raised areas on the sample produce more secondary ions than depressed areas, and this difference provides sufficient contrast to produce high-resolution images similar to those of a scanning electron microscope (SEM).
The ion beam typically employed by FIB systems uses gallium ions. Gallium ions have sufficient energy (mass and speed) to mill sample surfaces. Thus, in addition to imaging, FIB systems can drill holes, cut metal lines, and connect metal lines (through metal deposition) in integrated circuits. These functions are often used in failure analysis. For example, drilling holes in an insulation layer can expose underlying features for test, and cutting and connecting metal lines can help to locate or confirm a failure. Such techniques can be performed in FIB systems that facilitate the identification of opens and shorts using voltage-contrast images. For a more detailed treatment of conventional FIB systems and voltage-contrast imaging, see U.S. Pat. No. 5,140,164, entitled "IC Modification With Focused Ion Beam System," by Talbot et al., and U.S. Pat. No. 5,521,516, entitled "Semiconductor Integrated Circuit Fault Analyzing Apparatus and Method Therefor," by Hanagama et al. Both of these patents are incorporated herein by reference.
Conventional FIB systems work well for milling aluminum conductors on integrated circuits. However, many in the semiconductor industry are pursuing new process technologies that employ copper metallization to produce superior circuits. Technologies that employ copper metallization would benefit from FIB imaging and milling in the same manner as technologies that employ aluminum metallization. Unfortunately, milling copper using conventional FIB systems has proved difficult.
Aluminum lines are typically formed on top of an insulating layer so that the top surface of the aluminum is above the top surface of the insulating layer. A passivation layer applied over the top of the aluminum and insulating layers to protect the aluminum follows the contours of the aluminum, resulting in raised areas over aluminum structures and valleys between them. This surface relief is easily identified using ion beam imaging and can therefore be used to locate lines of interest. In contrast, conventional copper metallization is formed within concavities (e.g., trenches) in an insulating layer. A subsequent planarization process, such as chemical-mechanical polishing, produces a flat surface in which the top surfaces of the copper and insulating layers are even. A passivation layer is then applied over the copper and insulating layers to protect the copper. The resulting structure is substantially flat, rendering it difficult or impossible to identify copper lines using ion-beam imaging.
The natural solution to the viewing problem is to remove the passivation layer over the copper lines. However, copper reacts strongly in the presence of gases used to enhance FIB etching during the milling process.
Thus, the chemistry used to mill a very small feature attacks copper surfaces in the area surrounding the feature. For example, iodine used in gas-assisted etches attacks and destroys copper metallization in the immediate area of the milled feature. Further, severed copper lines can grow back together after FIB exposure. These and other problems associated with FIB milling of copper are noted in an article entitled "The Challenges of FIB Chip Repair and Debug Assistance in the 0.25 µm Copper Interconnect Millennium," by S. B. Herschbein, et al. (1998), which is incorporated herein by reference. Copper lines severed using FIB also suffer from electrical "bridging" between segments and with adjacent circuit features. This undesirable bridging is possibly the result of copper atoms displaced by the ion beam remaining in and around the mill site.
FIB techniques must be adapted for use with copper metallization if FIB methodology is to retain its usefulness as copper replaces aluminum as the interconnect metallurgy of choice for high-performance integrated circuits.
SUMMARY
The present invention is directed to methods of adapting FIB techniques to copper metallization, and to structures that result from the application of such techniques. A method in accordance with the invention can be used to sever copper traces without damaging adjacent material or creating conductive bridges to adjacent traces.
Semiconductor devices that employ copper traces typically include a protective passivation layer that protects the copper. This passivation layer is removed to render the copper traces visible to an FIB operator. The copper surface is then oxidized, as by heating the device in air, to form a copper-oxide layer on the exposed copper. With the copper-oxide layer in place, an FIB is used to mill through the copper-oxide and copper layers of a selected copper trace to sever the trace. The copper-oxide layer protects copper surfaces away from the mill site from reactive chemicals used to enhance etching during the milling process. In one embodiment, a copper-oxide layer at least 40 nanometers thick affords adequate protection.
In one embodiment, copper traces reside in concavities in an underlying insulating layer. These conventional structures are formed using a planar process, such as chemical-mechanical polishing, before applying the passivation layer. When subjected to FIB techniques in accordance with the invention, copper traces disposed within the concavities are encapsulated by the insulating and copper-oxide layers prior to milling. The milling process then produces a trench that extends through the copper-oxide layer, the copper, and into the underlying insulating layer.
This summary does not purport to define the invention; the appended claims define the invention.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 (prior art) is a cross-section of a portion of an integrated circuit structure 100 formed using a modern damascene copper electroplating process.
FIG. 2 depicts a structure 200 formed by stripping a portion of passivation layer 115 from structure 100 of FIG. 1 to expose the surfaces of conductive elements 150 and 155.
FIG. 3 depicts a structure 300 formed by exposing structure 200 of FIG. 2 to an oxidizing environment to form a protective oxide layer 305 over conductive elements 150 and 155.
FIG. 4A shows a cross section of structure 300 taken along line A-A' of FIG. 3.
FIG. 4B is a cross-section similar to that of FIG. 4A but including an isolation cut 400 severing conductive element 150 into two portions 150A and 150B.
DETAILED DESCRIPTION
FIG. 1 (prior art) is a cross section of a portion of an integrated circuit structure 100 formed using a modern damascene copper electroplating process. Structure 100 includes an insulating layer 105 disposed between a lower metal layer 110 and a passivation layer 115. Insulating layer 105 includes two silicon dioxide layers 120 and 125 separated by a silicon nitride etch-stop layer 130. A second silicon nitride etch-stop layer 135 separates silicon dioxide layer 125 from the underlying metal layer 110. Passivation layer 115 includes a silicon nitride layer 140 and a planar oxide 145. Other embodiments use different types and numbers of layers to passivate structure 100, as will be obvious to those of skill in the art.
A pair of conductive elements 150 and 155 extend into insulating layer 105. Each conductive element includes remnants of a seed layer 160, typically a tantalum alloy (e.g., TaNx), provided to facilitate electroplating of copper metal 165. For a detailed discussion of a damascene process suitable for use in conjunction with the invention, see "Damascene Copper Electroplating For Chip Interconnections," by P. C. Andricacos, et al. (1998), which is incorporated herein by reference.
The following discussion assumes that conductive element 150 is a circuit trace, and further that conductive element 150 should be severed. Referring to FIG. 2, passivation layer 115 and a portion of insulating layer 105 are first stripped away to expose the surfaces of conductive elements 150 and 155. In one embodiment, passivation layer 115 and a portion of silicon dioxide layer 120 are removed using a reactive ion etch with a mixture of carbon tetrafluoride (CF4) and oxygen (O2) gases at respective partial pressures of 60 mtorr and 120 mtorr. The etch was carried out at 150 watts for about five minutes, leaving about fifty to sixty percent of silicon dioxide layer 120. Exposing conductive elements 150 and 155 allows an FIB system operator to view and locate conductive element 150.
Conductive element 150 is to be cut using an enhanced FIB etch that employs iodine gas. Other gases, such as chlorine and bromine, can also be used to enhance the etch. The reactive gases used to enhance the etch would severely damage neighboring copper structures (e.g., conductive element 155) in the absence of some protection. Thus, referring to FIG. 3, a protective copper oxide layer 305 is grown on the exposed surfaces of conductive elements 150 and 155. Experimental data suggest that oxide layer 305 should be about forty nanometers thick (or more) to afford a sufficient level of protection to sever a copper line that is approximately one micron wide and 0.75 microns thick using an enhanced FIB etch. In one embodiment, a copper oxide layer of sufficient thickness was formed by heat treating structure 100 in air on a hot plate at 300 degrees Celsius for 60 minutes.
FIG. 4A shows a cross section of structure 300 taken along line A-A' of FIG. 3. FIG. 4B is also a cross-section of structure 300 taken along line A-A' of FIG. 3, but includes an isolation cut 400 severing conductive element 150 into two portions 150A and 150B. In one embodiment, isolation cut 400 is created using a conventional gallium-ion beam 410 at 12 pA in an enhanced etch mode that uses iodine (I2) gas. The FIB milling process produces a residue of milled material 415 that builds up on the walls of isolation cut 400.
For a detailed discussion of an exemplary FIB system for use with the present invention, see U.S. Pat. No. 5,140,164 to Talbot et al., issued Aug. 18, 1992, which is incorporated herein by reference.

While the present invention has been described in connection with specific embodiments, variations of these embodiments will be obvious to those of ordinary skill in the art. For example, the conductor severed in the above examples was a top-layer conductor; however, other metal layers can also be modified in accordance with the invention. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description.
The invention discloses techniques for performing accelerated point sampling in a texture processing pipeline. A texture processing pipeline in a graphics processing unit generates the surface appearance for objects in a computer-generated scene. This texture processing pipeline determines, at multiple stages within the texture processing pipeline, whether texture operations and texture loads may be processed at an accelerated rate. At each stage that includes a decision point, the texture processing pipeline assumes that the current texture operation or texture load can be accelerated unless specific, known information indicates that the texture operation or texture load cannot be accelerated. As a result, the texture processing pipeline increases the number of texture operations and texture loads that are accelerated relative to the number of texture operations and texture loads that are not accelerated.
1. A computer-implemented method for accessing texture memory in a graphics processing unit, the method comprising: generating, at a first stage in a texture processing pipeline, a first determination that a texture memory query is eligible for acceleration within the texture processing pipeline; based on the first determination, advancing the texture memory query to a second stage in the texture processing pipeline; generating, at the second stage in the texture processing pipeline, a second determination that the texture memory query is eligible for acceleration within the texture processing pipeline; and processing the texture memory query within the texture processing pipeline based on at least one of the first determination and the second determination.

2. The computer-implemented method of claim 1, wherein the texture memory query is associated with a texture instruction comprising a texture load of a single texel in texture memory.

3. The computer-implemented method of claim 1, wherein the texture memory query is associated with a texture instruction comprising a texture operation on a single texel associated with a location in texture memory closest to a location specified by the texture memory query.

4. The computer-implemented method of claim 1, wherein the texture memory query is associated with a texture memory instruction, and the first determination is based on an opcode included in the texture memory instruction.

5. The computer-implemented method of claim 1, wherein the texture memory query is associated with a texture memory instruction, and the first determination is based on identifying an opcode of the texture memory instruction as a texture load of a single texel in texture memory, wherein the single texel is located at a memory address specified by the texture memory query.

6. The computer-implemented method of claim 1, wherein the texture memory query is associated with a texture memory instruction, and at least one of the first determination or the second determination is based on identifying an opcode of the texture memory instruction as a texture operation on a single texel in texture memory that is closest to a memory address specified by the texture memory query.

7. The computer-implemented method of claim 1, wherein the texture memory query is associated with a texture memory instruction, and the second determination is based on one or more of header state data or sampler state data associated with the texture memory instruction.

8. One or more non-transitory computer-readable media storing program instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: generating, at a first stage in a texture processing pipeline, a first determination that a texture memory query is eligible for acceleration within the texture processing pipeline; based on the first determination, advancing the texture memory query to a second stage in the texture processing pipeline; generating, at the second stage in the texture processing pipeline, a second determination that the texture memory query is eligible for acceleration within the texture processing pipeline; and processing the texture memory query within the texture processing pipeline based on at least one of the first determination and the second determination.

9. The one or more non-transitory computer-readable media of claim 8, wherein the texture memory query is associated with a texture memory instruction, and the first determination is based on an opcode included in the texture memory instruction.

10. The one or more non-transitory computer-readable media of claim 8, wherein the texture memory query is associated with a texture memory instruction, and the first determination is based on identifying an opcode of the texture memory instruction as a texture load of a single texel in texture memory located at a memory address specified by the texture memory query.

11. The one or more non-transitory computer-readable media of claim 8, wherein the texture memory query is associated with a texture memory instruction, and at least one of the first determination or the second determination is based on identifying an opcode of the texture memory instruction as a texture operation on a single texel in texture memory that is closest to a memory address specified by the texture memory query.

12. The one or more non-transitory computer-readable media of claim 8, wherein the texture memory query is associated with a texture memory instruction, and the second determination is based on one or more of header state data or sampler state data associated with the texture memory instruction.

13. The one or more non-transitory computer-readable media of claim 8, wherein the texture memory query is associated with a texture memory instruction, and the second determination is based on a number of levels of detail included in the texture, as specified by header state data.

14. The one or more non-transitory computer-readable media of claim 8, wherein the texture memory query is associated with a texture memory instruction, and the second determination is based on a texel size specified by header state data.

15. The one or more non-transitory computer-readable media of claim 8, wherein, based on at least one of the first determination and the second determination, a texture memory instruction associated with the texture memory query is executed for a first number of threads per clock cycle or for a second number of threads per clock cycle.

16. A system, comprising: a memory storing instructions; and a processor coupled to the memory that, when executing the instructions: generates, at a first stage in a texture processing pipeline, a first determination that a texture memory query is eligible for acceleration within the texture processing pipeline; based on the first determination, advances the texture memory query to a second stage in the texture processing pipeline; generates, at the second stage in the texture processing pipeline, a second determination that the texture memory query is eligible for acceleration within the texture processing pipeline; and processes the texture memory query within the texture processing pipeline based on at least one of the first determination and the second determination.

17. The system of claim 16, wherein the texture memory query is associated with a texture instruction comprising a texture load of a single texel in texture memory.

18. The system of claim 16, wherein the texture memory query is associated with a texture instruction comprising a texture operation associated with a single texel in texture memory closest to a location specified by the texture memory query.

19. The system of claim 16, wherein the texture memory query is associated with a texture gather operation comprising a texture operation associated with four adjacent texels in texture memory closest to a location specified by the texture memory query.

20. The system of claim 16, wherein the texture memory query is associated with a texture memory instruction, and the first determination is based on identifying an opcode of the texture memory instruction as a texture load of a single texel in texture memory, wherein the single texel is located at a memory address specified by the texture memory query.
Techniques for performing accelerated point sampling in the texture processing pipeline

TECHNICAL FIELD

Various embodiments relate generally to parallel processing architectures and, more particularly, to techniques for performing accelerated point sampling in texture processing pipelines.

BACKGROUND

Graphics processing units (GPUs) are used to generate three-dimensional (3D) graphics objects and two-dimensional (2D) graphics objects for a variety of applications, including feature films, computer games, virtual reality (VR) and augmented reality (AR) experiences, mechanical design, and so forth. Modern GPUs include texture processing hardware to generate surface appearances (referred to herein as "surface textures") for 3D objects in a 3D graphics scene. The texture processing hardware applies a surface appearance to a 3D object by "wrapping" the appropriate surface texture around the 3D object. The process of generating and applying surface textures to 3D objects provides a highly realistic appearance to those 3D objects in the 3D graphics scene.

The texture processing hardware is configured to execute various texture-related instructions, including texture operations and texture loads. The texture processing hardware accesses texture information by generating memory references (referred to herein as "queries") to texture memory. The texture processing hardware retrieves surface texture information from texture memory in varying circumstances, such as when rendering object surfaces in a 3D graphics scene for display on a display device, when rendering a 2D graphics scene, or during compute operations.

Surface texture information includes texture elements (referred to herein as "texels") used to texture or shade the surfaces of objects in a 3D graphics scene. The texture processing hardware and the associated texture cache are optimized for efficient, high-throughput, read-only access, to support the high demand for texture information during graphics rendering, with little or no support for writes. In addition, the texture processing hardware includes specialized functional units for performing various texture operations, such as level of detail (LOD) computation, texture sampling, and texture filtering.

Typically, a texture operation involves querying a number of texels in the 3D space surrounding a particular point of interest and then performing various filtering and interpolation operations to determine a final color for the point of interest. By contrast, a texture load typically queries a single texel and returns the texel directly to the user application for further processing. Because filtering and interpolation operations typically involve querying four or more texels per processing thread, the texture processing hardware is typically built to accommodate the generation of multiple queries per thread. For example, the texture processing hardware can be built to accommodate up to four texture memory queries in a single memory cycle. In this manner, the texture processing hardware is able to query and receive most or all of the needed texture information in one memory cycle.

One drawback of this approach to querying texture memory is that, when the texture processing hardware is employed for texture loads, only one of the four possible texture memory queries is performed in a single memory cycle. As a result, only one quarter of the memory access capability of the texture processing hardware is utilized during texture loads. Furthermore, some texture operations (referred to herein as point-sampling texture operations) need only one or two texture memory queries in a given memory cycle, and thus utilize only one quarter to one half of the memory access capability of the texture processing hardware. This underutilization of the texture processing hardware can lead to reduced efficiency and performance when the GPU performs texture loads and point-sampling texture operations.

As the foregoing illustrates, what is needed in the art is a more efficient technique for querying texture information in a graphics processing unit.
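To make the disparity described above concrete, the following is a minimal CUDA sketch, illustrative only and not the claimed hardware mechanism, that issues both kinds of texture memory accesses: a point-sampled fetch, which corresponds to a single-texel query, and a bilinearly filtered fetch, which causes the texture hardware to query the four nearest texels and blend them. All names, sizes, and coordinates are hypothetical, and the sketch uses the public CUDA texture object API rather than any interface disclosed herein.

    #include <cuda_runtime.h>
    #include <cstdio>

    // One thread reads the same texture data through two texture objects:
    // one configured for point sampling (one texel per query) and one for
    // bilinear filtering (four texels fetched and blended per query).
    __global__ void sampleKernel(cudaTextureObject_t pointTex,
                                 cudaTextureObject_t linearTex,
                                 float4* out, float u, float v)
    {
        // Point sample: returns the single texel at (floor(u), floor(v)).
        out[0] = tex2D<float4>(pointTex, u, v);
        // Filtered sample: returns the bilinear weighted average of the
        // four texels surrounding (u - 0.5, v - 0.5).
        out[1] = tex2D<float4>(linearTex, u, v);
    }

    int main()
    {
        const int W = 8, H = 8;
        float4 host[W * H];
        for (int i = 0; i < W * H; ++i)          // texel (x, y) holds y*W + x
            host[i] = make_float4((float)i, 0.f, 0.f, 1.f);

        cudaChannelFormatDesc desc = cudaCreateChannelDesc<float4>();
        cudaArray_t arr;
        cudaMallocArray(&arr, &desc, W, H);
        cudaMemcpy2DToArray(arr, 0, 0, host, W * sizeof(float4),
                            W * sizeof(float4), H, cudaMemcpyHostToDevice);

        cudaResourceDesc res = {};
        res.resType = cudaResourceTypeArray;
        res.res.array.array = arr;

        cudaTextureDesc td = {};
        td.addressMode[0] = cudaAddressModeClamp;
        td.addressMode[1] = cudaAddressModeClamp;
        td.readMode = cudaReadModeElementType;
        td.normalizedCoords = 0;                 // coordinates in texel units

        cudaTextureObject_t pointTex, linearTex;
        td.filterMode = cudaFilterModePoint;
        cudaCreateTextureObject(&pointTex, &res, &td, nullptr);
        td.filterMode = cudaFilterModeLinear;
        cudaCreateTextureObject(&linearTex, &res, &td, nullptr);

        float4* out;
        cudaMalloc(&out, 2 * sizeof(float4));
        // With unnormalized coordinates, linear filtering at (2.9, 4.1)
        // blends the four texels around (2.4, 3.6), i.e., (2, 3), (3, 3),
        // (2, 4), and (3, 4); point sampling returns texel (2, 4).
        sampleKernel<<<1, 1>>>(pointTex, linearTex, out, 2.9f, 4.1f);

        float4 result[2];
        cudaMemcpy(result, out, sizeof(result), cudaMemcpyDeviceToHost);
        printf("point sample: %f, filtered sample: %f\n",
               result[0].x, result[1].x);

        cudaDestroyTextureObject(pointTex);
        cudaDestroyTextureObject(linearTex);
        cudaFreeArray(arr);
        cudaFree(out);
        return 0;
    }

In terms of the counting above, the point-sampled fetch consumes only one of the up-to-four texel queries the hardware could issue for the thread in that cycle, while the filtered fetch consumes all four; that imbalance is the inefficiency the remainder of this disclosure addresses.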
SUMMARY OF THE INVENTION

Various embodiments of the present disclosure set forth a computer-implemented method for accessing texture memory in a graphics processing unit. The method includes generating, at a first stage of a texture processing pipeline, a first determination that a texture memory query is eligible for acceleration within the texture processing pipeline. The method further includes causing the texture memory query to proceed to a second stage in the texture processing pipeline based on the first determination. The method further includes generating, at the second stage in the texture processing pipeline, a second determination that the texture memory query is eligible for acceleration within the texture processing pipeline. The method further includes processing the texture memory query within the texture processing pipeline based on at least one of the first determination and the second determination.

Other embodiments include, without limitation, systems that implement one or more aspects of the disclosed techniques, and one or more computer-readable media including instructions for performing one or more aspects of the disclosed techniques.

At least one technical advantage of the disclosed techniques relative to the prior art is that, with the disclosed techniques, a greater percentage of the texture memory access capability is used during texture loads and during simple texture operations. As a result, the efficiency and performance of the texture processing hardware during texture loads and texture operations are improved relative to prior approaches. Another technical advantage of the disclosed techniques is that the texture processing hardware includes multiple stages at which it is determined whether the memory access capability of the texture processing hardware can be used more efficiently. As a result, a greater number of texture loads and texture operations can take advantage of the disclosed techniques relative to approaches that make this determination at only a single stage of the texture processing hardware. These advantages represent one or more technical improvements over prior art approaches.

BRIEF DESCRIPTION OF THE DRAWINGS

In order that the manner in which the above-described features of the various embodiments may be understood in detail, a more particular description of the inventive concepts, briefly summarized above, may be had by reference to various embodiments, some of which are illustrated in the accompanying drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of the inventive concepts and are therefore not to be considered limiting of scope in any way, for there are other equally effective embodiments.

FIG. 1 is a block diagram of a computer system configured to implement one or more aspects of the various embodiments;

FIG. 2 is a block diagram of a parallel processing unit (PPU) included in the parallel processing subsystem of FIG. 1, according to various embodiments;
FIG. 3A is a block diagram of a general processing cluster included in the parallel processing unit of FIG. 2, according to various embodiments;

FIG. 3B is a conceptual diagram of a graphics processing pipeline that may be implemented within the parallel processing unit of FIG. 2, according to various embodiments;

FIG. 4 is a conceptual diagram of a texture processing pipeline that a texture unit within the general processing cluster of FIG. 3A can be configured to implement, according to various embodiments; and

FIG. 5 is a flow diagram of method steps for performing memory access operations in a texture processing pipeline, according to various embodiments.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth to provide a more thorough understanding of the various embodiments. However, it will be apparent to one of skill in the art that the inventive concepts may be practiced without one or more of these specific details.

System Overview

FIG. 1 is a block diagram of a computer system 100 configured to implement one or more aspects of the various embodiments. As shown, computer system 100 includes, without limitation, a central processing unit (CPU) 102 and a system memory 104 coupled to a parallel processing subsystem 112 via a memory bridge 105 and a communication path 113. Memory bridge 105 is further coupled to an I/O (input/output) bridge 107 via a communication path 106, and I/O bridge 107 is, in turn, coupled to a switch 116.

In operation, I/O bridge 107 is configured to receive user input information from input devices 108, such as a keyboard or a mouse, and forward the input information to CPU 102 for processing via communication path 106 and memory bridge 105. Switch 116 is configured to provide connections between I/O bridge 107 and other components of computer system 100, such as a network adapter 118 and various add-in cards 120 and 121.

As also shown, I/O bridge 107 is coupled to a system disk 114 that may be configured to store content, applications, and data for use by CPU 102 and parallel processing subsystem 112. Generally, system disk 114 provides non-volatile storage for applications and data and may include fixed or removable hard disk drives, flash memory devices, and CD-ROM (compact disc read-only memory), DVD-ROM (digital versatile disc ROM), Blu-ray, HD-DVD (high-definition DVD), or other magnetic, optical, or solid-state storage devices. Finally, although not explicitly shown, other components, such as universal serial bus or other port connections, optical drives, digital versatile disc drives, film recording devices, and the like, may be connected to I/O bridge 107 as well.

In various embodiments, memory bridge 105 may be a Northbridge chip, and I/O bridge 107 may be a Southbridge chip. In addition, communication paths 106 and 113, as well as other communication paths within computer system 100, may be implemented using any technically suitable protocols, including, without limitation, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol known in the art.

In some embodiments, parallel processing subsystem 112 includes a graphics subsystem that delivers pixels to a display device 110, which may be any conventional cathode ray tube, liquid crystal display, light-emitting diode display, or the like. In such embodiments, parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry.
Such circuitry may be included in one or more parallel processing units (PPUs) included within parallel processing subsystem 112, as described in greater detail below in conjunction with FIG. 2. In other embodiments, parallel processing subsystem 112 incorporates circuitry optimized for general-purpose and/or compute processing. Again, such circuitry may be included in one or more PPUs included within parallel processing subsystem 112 that are configured to perform such general-purpose and/or compute operations. In yet other embodiments, the one or more PPUs included within parallel processing subsystem 112 may be configured to perform graphics processing, general-purpose processing, and compute processing operations. System memory 104 includes at least one device driver 103 configured to manage the processing operations of the one or more PPUs within parallel processing subsystem 112.

In various embodiments, parallel processing subsystem 112 may be integrated with one or more of the other elements of FIG. 1 to form a single system. For example, parallel processing subsystem 112 may be integrated with CPU 102 and other connection circuitry on a single chip to form a system on chip (SoC).

It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as desired. For example, in some embodiments, system memory 104 could be connected to CPU 102 directly rather than through memory bridge 105, and other devices would communicate with system memory 104 via memory bridge 105 and CPU 102. In other alternative topologies, parallel processing subsystem 112 may be connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 may be integrated into a single chip instead of existing as one or more discrete devices. Lastly, in certain embodiments, one or more of the components shown in FIG. 1 may not be present. For example, switch 116 could be eliminated, and network adapter 118 and add-in cards 120, 121 would connect directly to I/O bridge 107.

FIG. 2 is a block diagram of a parallel processing unit (PPU) 202 included in the parallel processing subsystem 112 of FIG. 1, according to various embodiments. Although FIG. 2 depicts one PPU 202, as indicated above, parallel processing subsystem 112 may include any number of PPUs 202. As shown, PPU 202 is coupled to a local parallel processing (PP) memory 204. PPU 202 and PP memory 204 may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible fashion.

In some embodiments, PPU 202 includes a graphics processing unit (GPU) that may be configured to implement a graphics rendering pipeline to perform various operations related to generating pixel data based on graphics data supplied by CPU 102 and/or system memory 104. When processing graphics data, PP memory 204 can be used as graphics memory that stores one or more conventional frame buffers and, if needed, one or more other render targets. Among other things, PP memory 204 can be used to store and update pixel data and deliver final pixel data or display frames to display device 110 for display.
In some embodiments, PPU 202 may also be configured for general-purpose processing and compute operations.

In operation, CPU 102 is the master processor of computer system 100, controlling and coordinating operations of other system components. In particular, CPU 102 issues commands that control the operation of PPU 202. In some embodiments, CPU 102 writes a stream of commands for PPU 202 to a data structure (not explicitly shown in either FIG. 1 or FIG. 2) that may be located in system memory 104, PP memory 204, or another storage location accessible to both CPU 102 and PPU 202. A pointer to the data structure is written to a push buffer to initiate processing of the stream of commands in the data structure. PPU 202 reads command streams from the push buffer and then executes the commands asynchronously relative to the operation of CPU 102. In embodiments where multiple push buffers are generated, execution priorities may be specified for each push buffer by an application program via device driver 103 to control scheduling of the different push buffers.

As also shown, PPU 202 includes an I/O (input/output) unit 205 that communicates with the rest of computer system 100 via communication path 113 and memory bridge 105. I/O unit 205 generates packets (or other signals) for transmission on communication path 113 and also receives all incoming packets (or other signals) from communication path 113, directing the incoming packets to appropriate components of PPU 202. For example, commands related to processing tasks may be directed to host interface 206, while commands related to memory operations (e.g., reading from or writing to PP memory 204) may be directed to crossbar unit 210. Host interface 206 reads each push buffer and transmits the command stream stored in the push buffer to a front end 212.

As mentioned above in conjunction with FIG. 1, the connection of PPU 202 to the rest of computer system 100 may be varied. In some embodiments, parallel processing subsystem 112, which includes at least one PPU 202, is implemented as an add-in card that can be inserted into an expansion slot of computer system 100. In other embodiments, PPU 202 can be integrated on a single chip with a bus bridge, such as memory bridge 105 or I/O bridge 107. Again, in still other embodiments, some or all of the elements of PPU 202 may be included along with CPU 102 in a single integrated circuit or system on chip (SoC).

In operation, front end 212 transmits processing tasks received from host interface 206 to a work distribution unit (not shown) within task/work unit 207. The work distribution unit receives pointers to processing tasks that are encoded as task metadata (TMD) and stored in memory. The pointers to TMDs are included in a command stream that is stored as a push buffer and received by the front end unit 212 from the host interface 206. Processing tasks that may be encoded as TMDs include indices associated with the data to be processed, as well as state parameters and commands that define how the data is to be processed. For example, the state parameters and commands could define the program to be executed on the data. The task/work unit 207 receives tasks from the front end 212 and ensures that GPCs 208 are configured to a valid state before the processing task specified by each one of the TMDs is initiated. A priority may be specified for each TMD that is used to schedule the execution of the processing task. Processing tasks may also be received from the processing cluster array 230.
Optionally, the TMD may include a parameter that controls whether the TMD is added to the head or the tail of a list of processing tasks (or to a list of pointers to the processing tasks), thereby providing another level of control over execution priority.

PPU 202 advantageously implements a highly parallel processing architecture based on a processing cluster array 230 that includes a set of C general processing clusters (GPCs) 208, where C ≥ 1. Each GPC 208 is capable of executing a large number (e.g., hundreds or thousands) of threads concurrently, where each thread is an instance of a program. In various applications, different GPCs 208 may be allocated for processing different types of programs or for performing different types of computations. The allocation of GPCs 208 may vary depending on the workload arising for each type of program or computation.

Memory interface 214 includes a set of D partition units 215, where D ≥ 1. Each partition unit 215 is coupled to one or more dynamic random access memories (DRAMs) 220 residing within PP memory 204. In one embodiment, the number of partition units 215 equals the number of DRAMs 220, and each partition unit 215 is coupled to a different DRAM 220. In other embodiments, the number of partition units 215 may be different from the number of DRAMs 220. Persons of ordinary skill in the art will appreciate that a DRAM 220 may be replaced with any other technically suitable storage device. In operation, various render targets, such as texture maps and frame buffers, may be stored across DRAMs 220, allowing partition units 215 to write portions of each render target in parallel to efficiently use the available bandwidth of PP memory 204.

A given GPC 208 may process data to be written to any of the DRAMs 220 within PP memory 204. Crossbar unit 210 is configured to route the output of each GPC 208 to the input of any partition unit 215 or to any other GPC 208 for further processing. GPCs 208 communicate with memory interface 214 via crossbar unit 210 to read from or write to the various DRAMs 220. In one embodiment, crossbar unit 210 has a connection to I/O unit 205, in addition to a connection to PP memory 204 via memory interface 214, thereby enabling the processing cores within the different GPCs 208 to communicate with system memory 104 or with other memory not local to PPU 202. In the embodiment of FIG. 2, crossbar unit 210 is directly connected with I/O unit 205. In various embodiments, crossbar unit 210 may use virtual channels to separate traffic streams between the GPCs 208 and partition units 215.

Again, GPCs 208 can be programmed to execute processing tasks relating to a wide variety of applications, including, without limitation, linear and nonlinear data transforms, filtering of video and/or audio data, modeling operations (e.g., applying laws of physics to determine position, velocity, and other attributes of objects), image rendering operations (e.g., tessellation shader, vertex shader, geometry shader, and/or pixel/fragment shader programs), general compute operations, and so forth. In operation, PPU 202 is configured to transfer data from system memory 104 and/or PP memory 204 to one or more on-chip memory units, process the data, and write result data back to system memory 104 and/or PP memory 204.
The result data may then be accessed by other system components, including CPU 102, another PPU 202 within parallel processing subsystem 112, or another parallel processing subsystem 112 within computer system 100.

As noted above, any number of PPUs 202 may be included in a parallel processing subsystem 112. For example, multiple PPUs 202 may be provided on a single add-in card, or multiple add-in cards may be connected to communication path 113, or one or more of the PPUs 202 may be integrated into a bridge chip. The PPUs 202 in a multi-PPU system may be identical to or different from one another. For example, different PPUs 202 might have different numbers of processing cores and/or different amounts of PP memory 204. In implementations where multiple PPUs 202 are present, those PPUs may be operated in parallel to process data at a higher throughput than is possible with a single PPU 202. Systems incorporating one or more PPUs 202 may be implemented in a variety of configurations and form factors, including, without limitation, desktops, laptops, handheld personal computers or other handheld devices, servers, workstations, game consoles, embedded systems, and the like.

FIG. 3A is a block diagram of a general processing cluster 208 included in the parallel processing unit 202 of FIG. 2, according to various embodiments. In operation, GPC 208 may be configured to execute a large number of threads in parallel to perform graphics, general processing, and/or compute operations. As used herein, a "thread" refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within GPC 208. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given program. Persons of ordinary skill in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime.
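As an illustration of the SIMT model just described, consider the following minimal CUDA kernel, a hypothetical sketch rather than part of the disclosed embodiments. The branch depends on per-thread data, so threads within one warp may disagree on which side to take; under SIMT the warp executes both paths in turn, masking off the threads that did not take the current path, whereas a strict SIMD regime would require all lanes to execute an identical instruction sequence.

    #include <cuda_runtime.h>

    // Each thread selects an execution path based on its own input value.
    // Threads of the same warp that take different sides of the branch
    // follow divergent paths, which SIMT hardware serializes by executing
    // each path with an active-thread mask.
    __global__ void divergentScale(const float* in, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (in[i] < 0.0f)
            out[i] = -2.0f * in[i];   // path A: taken by some threads of a warp
        else
            out[i] = 0.5f * in[i];    // path B: taken by the remaining threads
    }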
Operation of GPC 208 is controlled via a pipeline manager 305 that distributes processing tasks received from a work distribution unit (not shown) within task/work unit 207 to one or more streaming multiprocessors (SMs) 310. Pipeline manager 305 may also be configured to control a work distribution crossbar 330 by specifying destinations for processed data output by SMs 310.

In one embodiment, GPC 208 includes a set of M SMs 310, where M ≥ 1. Also, each SM 310 includes a set of functional execution units (not shown), such as execution units and load-store units. Processing operations specific to any of the functional execution units may be pipelined, which enables a new instruction to be issued for execution before a previous instruction has completed execution. Any combination of functional execution units within a given SM 310 may be provided. In various embodiments, the functional execution units may be configured to support a variety of different operations, including integer and floating-point arithmetic (e.g., addition and multiplication), comparison operations, Boolean operations (AND, OR, XOR), bit shifting, and computation of various algebraic functions (e.g., planar interpolation and trigonometric, exponential, and logarithmic functions, etc.). Advantageously, the same functional execution unit can be configured to perform different operations.

In operation, each SM 310 is configured to process one or more thread groups. As used herein, a "thread group" or "warp" refers to a group of threads concurrently executing the same program on different input data, with one thread of the group being assigned to a different execution unit within an SM 310. A thread group may include fewer threads than the number of execution units within the SM 310, in which case some of the execution units may be idle during cycles when that thread group is being processed. A thread group may also include more threads than the number of execution units within the SM 310, in which case processing may occur over consecutive clock cycles. Since each SM 310 can support up to G thread groups concurrently, it follows that up to G*M thread groups can be executing in GPC 208 at any given time. For example, if each of M = 4 SMs supports G = 32 concurrently resident thread groups, up to 128 thread groups can be resident in the GPC at once.

Additionally, a plurality of related thread groups may be active (in different phases of execution) at the same time within an SM 310. This collection of thread groups is referred to herein as a "cooperative thread array" ("CTA") or "thread array." The size of a particular CTA is equal to m*k, where k is the number of concurrently executing threads in a thread group, which is typically an integer multiple of the number of execution units within the SM 310, and m is the number of thread groups simultaneously active within the SM 310.

Although not shown in FIG. 3A, each SM 310 contains a level one (L1) cache, or uses space in a corresponding L1 cache external to the SM 310, to support, among other things, load and store operations performed by the execution units. Each SM 310 also has access to level two (L2) caches (not shown) that are shared among all GPCs 208 in PPU 202. The L2 caches may be used to transfer data between threads. Finally, SMs 310 also have access to off-chip "global" memory, which may include PP memory 204 and/or system memory 104. It is to be understood that any memory external to PPU 202 may be used as global memory. Additionally, as shown in FIG. 3A, a level one-point-five (L1.5) cache 335 may be included within GPC 208 and configured to receive and hold data requested from memory via memory interface 214 by SM 310. Such data may include, without limitation, instructions, uniform data, and constant data. In embodiments having multiple SMs 310 within GPC 208, the SMs 310 may beneficially share common instructions and data cached in L1.5 cache 335.

Each GPC 208 may have an associated memory management unit (MMU) 320 that is configured to map virtual addresses into physical addresses. In various embodiments, MMU 320 may reside either within GPC 208 or within memory interface 214. MMU 320 includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile or memory page and, optionally, a cache line index.
MMU 320 may include address translation lookaside buffers (TLBs) or caches that may reside within SMs 310, within one or more L1 caches, or within GPC 208.

In graphics and compute applications, GPC 208 may be configured such that each SM 310 is coupled to a texture unit 315 for performing, among other things, texture loads and texture operations (e.g., determining texture sample positions, reading texture data, and filtering texture data).

In operation, each SM 310 transmits a processed task to work distribution crossbar 330 in order to provide the processed task to another GPC 208 for further processing, or to store the processed task in an L2 cache (not shown), parallel processing memory 204, or system memory 104 via crossbar unit 210. In addition, a pre-raster operations (preROP) unit 325 is configured to receive data from SM 310, direct the data to one or more raster operations (ROP) units within partition units 215, perform optimizations for color blending, organize pixel color data, and perform address translations.

It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Among other things, any number of processing units, such as SMs 310, texture units 315, or preROP units 325, may be included within GPC 208. Further, as described above in conjunction with FIG. 2, PPU 202 may include any number of GPCs 208 that are configured to be functionally similar to one another, so that execution behavior does not depend on which GPC 208 receives a particular processing task. Further, each GPC 208 operates independently of the other GPCs 208 in PPU 202 to execute tasks for one or more application programs. In view of the foregoing, persons of ordinary skill in the art will appreciate that the architecture described in FIGS. 1-3A in no way limits the scope of the present disclosure.

Graphics Pipeline Architecture

FIG. 3B is a conceptual diagram of a graphics processing pipeline 350 that may be implemented within parallel processing unit 202 of FIG. 2, according to various embodiments. As shown, graphics processing pipeline 350 includes, without limitation, a primitive distributor (PD) 355; a vertex attribute fetch unit (VAF) 360; a vertex, tessellation, geometry processing unit (VTG) 365; a viewport scale, cull, and clip unit (VPC) 370; a tiling unit 375; a setup unit (setup) 380; a rasterizer (raster) 385; a fragment processing unit, also identified as a pixel shading unit (PS) 390; and a raster operations unit (ROP) 395.

PD 355 collects vertex data associated with high-order surfaces, graphics primitives, and the like from front end 212 and transmits the vertex data to VAF 360.

VAF 360 retrieves vertex attributes associated with each of the incoming vertices from shared memory and stores the vertex data, along with the associated vertex attributes, into shared memory.

VTG 365 is a programmable execution unit that is configured to execute vertex shader programs, tessellation programs, and geometry programs. These programs process the vertex data and vertex attributes received from VAF 360 and produce graphics primitives, as well as color values, surface normal vectors, and transparency values at each vertex for the graphics primitives, for further processing within graphics processing pipeline 350.
Although not explicitly shown, in some embodiments, VTG 365 may include one or more of a vertex processing unit, a tessellation initialization processing unit, a task generation unit, a task distributor, a topology generation unit, a tessellation processing unit, and a geometry processing unit.

The vertex processing unit in VTG 365 is a programmable execution unit that is configured to execute vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. For example, the vertex processing unit may be programmed to transform the vertex data from an object-based coordinate representation (object space) to an alternative coordinate system, such as world space or normalized device coordinate (NDC) space. The vertex processing unit may read the vertex data and vertex attributes that are stored in shared memory by the VAF and may process the vertex data and vertex attributes. The vertex processing unit stores processed vertices in shared memory.

The tessellation initialization processing unit in VTG 365 is a programmable execution unit that is configured to execute tessellation initialization shader programs. The tessellation initialization processing unit processes vertices produced by the vertex processing unit and generates graphics primitives known as patches. The tessellation initialization processing unit also generates various patch attributes. The tessellation initialization processing unit then stores the patch data and patch attributes in shared memory. In some embodiments, the tessellation initialization shader program may be called a hull shader or a tessellation control shader.

The task generation unit in VTG 365 retrieves data and attributes for vertices and patches from shared memory. The task generation unit generates tasks for processing the vertices and patches, for processing by later stages in graphics processing pipeline 350.

The task distributor in VTG 365 redistributes the tasks produced by the task generation unit. The tasks produced by the various instances of the vertex shader program and the tessellation initialization program may vary significantly between one graphics processing pipeline 350 and another. The task distributor redistributes these tasks such that each graphics processing pipeline 350 has approximately the same workload during later pipeline stages.

The topology generation unit in VTG 365 retrieves tasks distributed by the task distributor. The topology generation unit indexes the vertices, including vertices associated with patches, and computes (U, V) coordinates for tessellation vertices and the indices connecting the tessellated vertices to form graphics primitives. The topology generation unit then stores the indexed vertices in shared memory.

The tessellation processing unit in VTG 365 is a programmable execution unit that is configured to execute tessellation shader programs. The tessellation processing unit reads input data from, and writes output data to, shared memory. This output data in shared memory is passed to the next shader stage, the geometry processing unit, as input data. In some embodiments, the tessellation shader program may be called a domain shader or a tessellation evaluation shader.

The geometry processing unit in VTG 365 is a programmable execution unit that is configured to execute geometry shader programs, thereby transforming graphics primitives. Vertices are grouped to construct graphics primitives for processing, where graphics primitives include triangles, line segments, points, and the like.
For example, the geometry processing unit may be programmed to subdivide graphics primitives into one or more new graphics primitives and to calculate parameters, such as plane equation coefficients, that are used to rasterize the new graphics primitives.

The geometry processing unit in VTG 365 transmits the parameters and vertices specifying new graphics primitives to VPC 370. The geometry processing unit may read data that is stored in shared memory for use in processing the geometry data. VPC 370 performs clipping, culling, perspective correction, and viewport transformation to determine which graphics primitives are potentially viewable in the final rendered image and which are not. VPC 370 then transmits processed graphics primitives to tiling unit 375.

Tiling unit 375 is a graphics primitive sorting engine that resides between a world space pipeline 352 and a screen space pipeline 354, as further described herein. Graphics primitives are processed in the world space pipeline 352 and then transmitted to tiling unit 375. The screen space is divided into cache tiles, where each cache tile is associated with a portion of the screen space. For each graphics primitive, tiling unit 375 identifies the set of cache tiles that intersect with the graphics primitive, a process referred to herein as "tiling." After tiling a certain number of graphics primitives, tiling unit 375 processes the graphics primitives on a cache tile basis, where graphics primitives associated with a particular cache tile are transmitted to setup unit 380. Tiling unit 375 transmits graphics primitives to setup unit 380 one cache tile at a time. Graphics primitives that intersect with multiple cache tiles are typically processed once in the world space pipeline 352, but are then transmitted multiple times to the screen space pipeline 354.

Such a technique improves cache memory locality during processing in the screen space pipeline 354, where multiple memory operations associated with a first cache tile access a region of the L2 cache, or any other technically feasible cache memory, that may stay resident during screen space processing of the first cache tile. Once the graphics primitives associated with the first cache tile are processed by the screen space pipeline 354, the portion of the L2 cache associated with the first cache tile may be flushed, and the tiling unit may transmit graphics primitives associated with a second cache tile. Multiple memory operations associated with the second cache tile may then access the region of the L2 cache that may stay resident during screen space processing of the second cache tile. Accordingly, the overall memory traffic to the L2 cache and to the render targets may be reduced. In some embodiments, the world space computation is performed once for a given graphics primitive, irrespective of the number of cache tiles in screen space that intersect with the graphics primitive.
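The tiling step described above can be pictured with the following short, purely illustrative sketch; the tile dimensions, data types, and function name are hypothetical and do not represent the disclosed hardware implementation. Each primitive's screen-space bounding box is intersected with the grid of cache tiles, and the primitive is recorded once per intersecting tile, which is why a primitive spanning several tiles is sent to the screen space pipeline several times even though its world space processing happened only once.

    #include <vector>

    // Hypothetical screen-space bounding box of a graphics primitive, in
    // pixels, assumed already clipped to the screen.
    struct BBox { int x0, y0, x1, y1; };

    // Bin each primitive into every cache tile its bounding box overlaps.
    void tilePrimitives(const std::vector<BBox>& prims,
                        int tileW, int tileH, int tilesX, int tilesY,
                        std::vector<std::vector<int>>& tileLists)
    {
        tileLists.assign(static_cast<size_t>(tilesX) * tilesY, {});
        for (int p = 0; p < static_cast<int>(prims.size()); ++p) {
            const BBox& b = prims[p];
            int tx0 = b.x0 / tileW, ty0 = b.y0 / tileH;   // first tile hit
            int tx1 = b.x1 / tileW, ty1 = b.y1 / tileH;   // last tile hit
            for (int ty = ty0; ty <= ty1; ++ty)
                for (int tx = tx0; tx <= tx1; ++tx)
                    tileLists[static_cast<size_t>(ty) * tilesX + tx]
                        .push_back(p);                    // one entry per tile
        }
    }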
Setup unit 380 receives vertex data from VPC 370 via tiling unit 375 and calculates parameters associated with the graphics primitives, including, without limitation, edge equations, partial plane equations, and depth plane equations. Setup unit 380 then transmits processed graphics primitives to rasterizer 385.

Rasterizer 385 scan converts the new graphics primitives and transmits fragments and coverage data to pixel shading unit 390. Additionally, rasterizer 385 may be configured to perform z-culling and other z-based optimizations.

Pixel shading unit 390 is a programmable execution unit that is configured to execute fragment shader programs, transforming fragments received from rasterizer 385 as specified by the fragment shader programs. Fragment shader programs may shade fragments at pixel-level granularity, where such shader programs may be called pixel shader programs. Alternatively, fragment shader programs may shade fragments at sample-level granularity, where each pixel includes multiple samples, and each sample represents a portion of a pixel. Alternatively, fragment shader programs may shade fragments at any other technically feasible granularity, depending on the programmed sampling rate.

In various embodiments, the fragment processing unit may be programmed to perform operations such as perspective correction, texture mapping, shading, blending, and the like, to produce shaded fragments that are transmitted to ROP 395. Pixel shading unit 390 may read data that is stored in shared memory.

ROP 395 is a processing unit that performs raster operations, such as stencil, z-test, blending, and the like, and transmits pixel data as processed graphics data for storage in graphics memory via memory interface 214, where graphics memory is typically structured as one or more render targets. The processed graphics data may be stored in graphics memory, parallel processing memory 204, or system memory 104 for display on display device 110 or for further processing by CPU 102 or parallel processing subsystem 112. In some embodiments, ROP 395 is configured to compress z or color data that is written to memory and decompress z or color data that is read from memory. In various embodiments, ROP 395 may be located in memory interface 214, in GPCs 208, in processing cluster array 230 outside of the GPCs, or in a separate unit (not shown) within PPU 202.

Graphics processing pipeline 350 may be implemented by any one or more processing elements within PPU 202. For example, one of the SMs 310 of FIG. 3A could be configured to perform the functions of one or more of VTG 365 and pixel shading unit 390. The functions of PD 355, VAF 360, VPC 370, tiling unit 375, setup unit 380, rasterizer 385, and ROP 395 may also be performed by processing elements within a particular GPC 208 in conjunction with a corresponding partition unit 215. Alternatively, graphics processing pipeline 350 may be implemented using dedicated fixed-function processing elements for one or more of the functions listed above. In various embodiments, PPU 202 may be configured to implement one or more graphics processing pipelines 350.

In some embodiments, graphics processing pipeline 350 may be divided into a world space pipeline 352 and a screen space pipeline 354. The world space pipeline 352 processes graphics objects in 3D space, where the position of each graphics object is known relative to other graphics objects and relative to a 3D coordinate system. The screen space pipeline 354 processes graphics objects that have been projected from the 3D coordinate system onto a 2D planar surface that represents the surface of display device 110. For example, the world space pipeline 352 could include pipeline stages in graphics processing pipeline 350 from PD 355 through VPC 370. The screen space pipeline 354 could include pipeline stages in graphics processing pipeline 350 from setup unit 380 through ROP 395. Tiling unit 375 follows the last stage of the world space pipeline 352, namely VPC 370.
Tiling unit 375 precedes the first stage of the screen space pipeline 354, namely setup unit 380.

In some embodiments, the world space pipeline 352 may be further divided into an alpha phase pipeline and a beta phase pipeline. For example, the alpha phase pipeline could include pipeline stages in graphics processing pipeline 350 from PD 355 through the task generation unit. The beta phase pipeline could include pipeline stages in graphics processing pipeline 350 from the topology generation unit through VPC 370. Graphics processing pipeline 350 performs a first set of operations during processing in the alpha phase pipeline and a second set of operations during processing in the beta phase pipeline. As used herein, a set of operations is defined as one or more instructions executed by a single thread, by a thread group, or by multiple thread groups acting in unison.

In a system with multiple graphics processing pipelines 350, the vertex data and vertex attributes associated with a set of graphics objects may be divided so that each graphics processing pipeline 350 has approximately the same amount of workload through the alpha phase. Alpha phase processing may significantly expand the amount of vertex data and vertex attributes, such that the amount of vertex data and vertex attributes produced by the task generation unit is significantly larger than the amount of vertex data and vertex attributes processed by PD 355 and VAF 360. Further, even when two graphics processing pipelines 350 process the same quantity of attributes at the beginning of the alpha phase pipeline, the task generation unit associated with one graphics processing pipeline 350 may produce a significantly greater quantity of vertex data and vertex attributes than the task generation unit associated with the other graphics processing pipeline 350. In such cases, the task distributor redistributes the attributes produced by the alpha phase pipeline such that each graphics processing pipeline 350 has approximately the same workload at the beginning of the beta phase pipeline.

Please note, as used herein, references to shared memory may include any one or more technically feasible memories, including, without limitation, a local memory shared by one or more SMs 310, or a memory accessible via memory interface 214, such as a cache memory, parallel processing memory 204, or system memory 104. Please also note, as used herein, references to cache memory may include any one or more technically feasible memories, including, without limitation, an L1 cache, an L1.5 cache, and the L2 caches.

Images generated applying one or more of the techniques disclosed herein may be displayed on a monitor or other display device. In some embodiments, the display device may be coupled directly to the system or processor generating or rendering the images. In other embodiments, the display device may be coupled indirectly to the system or processor, such as via a network. Examples of such networks include the Internet, mobile telecommunications networks, WiFi networks, as well as any other wired and/or wireless networking system. When the display device is indirectly coupled, the images generated by the system or processor may be streamed over the network to the display device. Such streaming allows, for example, video games or other applications, which render images, to be executed on a server or in a data center while the rendered images are transmitted to and displayed on one or more user devices (such as a computer, video game console, smartphone, or other mobile device) that are physically separate from the server or data center.
Accordingly, the techniques disclosed herein may be applied to enhance the images that are streamed and to enhance services that stream images, such as NVIDIA GeForce Now (GFN), Google Stadia, and the like.

Furthermore, images generated applying one or more of the techniques disclosed herein may be used to train, test, or certify deep neural networks (DNNs) employed to recognize objects and environments in the real world. Such images may include scenes of roadways, factories, buildings, urban settings, rural settings, humans, animals, and any other physical object or real-world setting. Such images may be used to train, test, or certify DNNs that are employed in machines or robots to manipulate, handle, or modify physical objects in the real world. Furthermore, such images may be used to train, test, or certify DNNs that are employed in autonomous vehicles to navigate and move the vehicles through the real world. Additionally, images generated applying one or more of the techniques disclosed herein may be used to convey information to users of such machines, robots, and vehicles.

Texture Memory Queries for Texture Operations and Texture Loads

FIG. 4 is a conceptual diagram of a texture processing pipeline 400 that a texture unit 315 within the general processing cluster 208 of FIG. 3A can be configured to implement, according to various embodiments. As shown, texture processing pipeline 400 includes a texture input/output (TEXIO) unit 402, a texture input (TEXIN) unit 404, a level of detail (LOD) unit 406, a sample control and address unit 408, a tag unit 410, a miss handling unit 412, a data first-in first-out (FIFO) memory 414, a data unit 416, a filter weight unit 418, a filter weight FIFO 420, a filter and return unit 422, and an accelerated point sampling (APS) bypass FIFO 424.

As described herein, a texture is a 2D picture or image, composed of pixels, that is stored in texture memory. The pixels included in a texture are referred to herein as texture elements, or "texels." In addition, a texel has a position, or location, that identifies where the texel is located within the texture. For example, the texel in the second column and the third row of a texture has a location of (2, 3). Within a texture, the column number of a texel is referred to herein as the "u coordinate," and the row number of the texel is referred to herein as the "v coordinate." When a location is expressed as a pair of integer values, the location identifies a single texel in the texture. Applications execute texture instructions, which access one or more texels in texture memory. Such texture instructions include texture operations and texture loads, which are now described.

A texture operation performs a computation based on one or more pixels included in a texture. Typically, a texture operation specifies the location in the texture as a pair of floating-point numbers, such as (2.4, 3.6). A location expressed as a pair of floating-point values, or other non-integer values, falls within the range of four texels. For example, the location (2.4, 3.6) falls within the range of the four texels located at (2, 3), (3, 3), (2, 4), and (3, 4). As a result, a texture operation directed to location (2.4, 3.6) retrieves these four texels, performs a weighted average of the color values at the four texel locations, and then computes a final color based on the weighted average. In some embodiments, certain filtering functions may access more than four texels per thread. For example, with 16x trilinear anisotropic filtering, each thread may access up to 128 texels. The weighted average may be any technically feasible operation, including, without limitation, bilinear interpolation, trilinear interpolation, and various filtering operations. In turn, an application performing a texture operation receives a single color value based on the weighted average of the four texels.
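To make the weighted average just described concrete, here is a minimal sketch, assuming plain bilinear interpolation over a row-major, single-channel texel array; the function name and array layout are hypothetical. For the location (2.4, 3.6), the integer parts select texel (2, 3), and the fractional parts (0.4, 0.6) produce weights of 0.24, 0.16, 0.36, and 0.24 for the texels at (2, 3), (3, 3), (2, 4), and (3, 4), respectively; the weights sum to one.

    #include <cuda_runtime.h>
    #include <math.h>

    // Bilinear weighted average of the four texels surrounding a
    // non-integer location (u, v). 'tex' is a row-major array of
    // single-channel texels with 'width' texels per row; callers must
    // keep (u, v) at least one texel away from the right/bottom edges.
    __host__ __device__
    float bilinearSample(const float* tex, int width, float u, float v)
    {
        int   u0 = (int)floorf(u);      // e.g., u = 2.4f -> u0 = 2
        int   v0 = (int)floorf(v);      // e.g., v = 3.6f -> v0 = 3
        float fu = u - (float)u0;       // fractional part, 0.4f
        float fv = v - (float)v0;       // fractional part, 0.6f

        // Four queries: one per surrounding texel.
        float t00 = tex[ v0      * width + u0    ];  // (2, 3), weight 0.24
        float t10 = tex[ v0      * width + u0 + 1];  // (3, 3), weight 0.16
        float t01 = tex[(v0 + 1) * width + u0    ];  // (2, 4), weight 0.36
        float t11 = tex[(v0 + 1) * width + u0 + 1];  // (3, 4), weight 0.24

        return (1.0f - fu) * (1.0f - fv) * t00
             +         fu  * (1.0f - fv) * t10
             + (1.0f - fu) *         fv  * t01
             +         fu  *         fv  * t11;
    }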
In some embodiments, certain filtering functions may access more than four texels per thread. For example, with 16x trilinear anisotropic filtering, each thread can access up to 128 texels. The weighted average may be any technically feasible operation, including, but not limited to, bilinear interpolation, trilinear interpolation, and various filtering operations. In turn, an application performing a texture operation receives a single color value based on the weighted average of the four texels.

In contrast, a texture load retrieves only a single texel value at an integer-addressed location in texture memory, where an integer-addressed location is a texel identified by integer coordinates. In some embodiments, certain texture loads and point-sampled texture operations may employ floating point coordinates that implicitly address four different nearby texels simultaneously. With texture loads and point-sampled texture operations, the texture processing pipeline 400 performs no filtering operations. Applications may prefer texture loads over texture operations, such as when the application performs custom weighted averaging, interpolation, or filtering operations that are not supported by the built-in weighted averaging performed by the texture processing pipeline 400. Such an application performs four texture load operations for the texels located at (2, 3), (3, 3), (2, 4), and (3, 4) and, in return, receives a separate color value for each of the four texel locations. The application can then perform any blending or merging operations on the four texels.

Typically, the texture processing pipeline 400 is optimized to perform texture operations. When a particular thread performs a texture operation, the thread receives one color value based on a weighted average of the color values of four individual texels. The texture processing pipeline 400 is therefore optimized to access up to four texels simultaneously, via four separate memory ports, in a single memory access cycle. When a thread performs a texture load for a single texel, the texture load accesses only one texel via a single memory port in a single memory cycle, and the remaining three memory ports go unused. To access four texels via texture loads, four texture loads are performed sequentially, each accessing a single memory port at a time. As a result, four sequential texture loads take approximately four times as long as a single texture operation. Additionally, because texture loads do not perform the weighted averaging used to compute a final color, the portion of the texture processing pipeline 400 that performs the weighted averaging goes unused, resulting in further inefficiency.

In some embodiments, the texture processing pipeline 400 can accommodate multiple threads simultaneously. For example, if the texture processing pipeline 400 is configured to support four concurrent threads, the texture processing pipeline 400 has sixteen memory ports to support four threads performing four texture operations, each accessing four texels. If the four threads instead perform one texture load each, then four memory ports are used to support the four texture loads, and the remaining twelve memory ports are unused. As a result, texture loads proceed at four texel accesses per clock cycle, while texture operations proceed at sixteen texel accesses per clock cycle.

As described further herein, texture loads for "N" different threads may be combined in the texture processing pipeline 400 to increase the utilization of texel accesses, thereby improving the performance of the texture processing pipeline during texture loads.
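The port-utilization arithmetic above can be captured in a toy cycle model. The four-port assumption mirrors the text; everything else is illustrative rather than a hardware-accurate simulation:

```python
# Toy cycle model of the four-ports-per-thread-slot arithmetic described above.
PORTS_PER_SLOT = 4

def cycles(texels_needed, texels_per_cycle):
    return -(-texels_needed // texels_per_cycle)   # ceiling division

# One filtered texture operation fetches its four texels in one cycle:
print(cycles(4, PORTS_PER_SLOT))                    # -> 1
# Four conventional texture loads issue one texel per cycle, serially:
print(sum(cycles(1, 1) for _ in range(4)))          # -> 4
# Combining "N" = 4 single-texel loads fills all four ports in one cycle:
print(cycles(4, PORTS_PER_SLOT))                    # -> 1
```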
In general, "N" can be any number up to the number of threads in the warp. In some embodiments, "N" may exceed the number of threads if the entire warp-wide texture instruction is encoded with more than one texture load per thread. In such an embodiment, "N" may not exceed any number that exceeds the number of texture loads encoded in the entire warp texture instruction.Additionally, certain types of texture operations for "N" different threads may be combined in the texture processing pipeline 400 to increase the utilization of texel accesses, thereby improving the performance of the texture processing pipeline during these particular types of texture operations. These particular types of texture operations are referred to herein as point-sampled texture operations. Point-sampled texture operations are simple texture operations that request the texel closest to a particular floating point location in texture memory. For example, a texture operation that points to a point sample at position (2.4, 3.6) can return the color of the texel at (2, 4). In some embodiments, the texture processing pipeline 400 may be configured to improve the performance of texture gathering operations, where each thread performs the loading of point samples for a component from four surrounding texels. More specifically, a texture gather operation accesses the four adjacent texels in texture memory that are closest to the location specified by the texture memory query. For example, a texture gather operation for position (2.4, 3.6) may perform a load of point samples for the red component on texels at (2, 3), (2, 4), (3, 3), and (3, 4) . As further described, the texture processing pipeline 400 may be configured to combine point-sampled texture operations and/or texture gather operations for "N" different threads to improve utilization of texel accesses.If "N"=2, the portion of the texture processing pipeline 400 that was configured to access up to four texels for one thread can now access up to one texel for each of the two threads simultaneously. If the portion of the texture processing pipeline 400 performs a texture load on two threads, the texture processing pipeline 400 accesses two texels, one for each of the two threads. For example, if the texture processing pipeline 400 is configured to support four concurrent texture operations, then when performing texture loads, eight memory ports are used to support eight texture loads for eight threads, and the remaining eight memory ports are unused . As a result, texture loads are performed with eight texel accesses per clock cycle, while texture operations are performed with sixteen texel accesses per clock cycle. In some embodiments, texels may have multiple sizes. If the texels are larger than the memory ports within the texture processing pipeline 400, multiple memory ports may be employed per texel. For example, if texture pipeline 400 has 8-byte memory ports, and the texel size is 16 bytes per texel, then texture pipeline 400 employs pairs of memory ports to support 8 threads/cycle of texture loads, so Fully occupy all 16 8-byte memory ports.If "N"=4, the portion of the texture processing pipeline 400 that was configured to access up to four texel accesses for one thread can now access up to one texel access for four threads. If the portion of the texture processing pipeline 400 performs a texture load on four threads, the texture processing pipeline 400 accesses four texels, one texel for each of the four threads. 
The stages of the texture processing pipeline 400 are now described.

In operation, the TEXIO unit 402 processes texture instructions, including texture loads and texture operations. The TEXIO unit 402 receives texture instructions from the SM 310 for execution by the 32 threads in a warp. The TEXIO unit 402 divides each texture instruction into multiple parts, where each part includes the texture instruction for a subset of the threads in the warp. The TEXIO unit 402 analyzes the texture instruction operation code (also referred to herein as the "opcode"), along with certain parameters and modifiers of the texture instruction, to make a first determination as to whether the texture instruction can be executed at four threads per clock cycle or at a higher number of threads per clock cycle.

Initially, the TEXIO unit 402 assumes that the texture instruction can be executed at a rate greater than four threads per clock cycle. Accordingly, the TEXIO unit 402 retrieves the parameters for the texture instruction from a parameter queue (not shown) in a manner that can support more than four threads per clock cycle, such as, but not limited to, an execution rate of eight threads per cycle. As a result, parameter packing for three-parameter texture instructions differs from parameter packing for four-parameter texture instructions. In the case of a four-parameter texture instruction, the texture instruction parameters are packed in groups of four. If three-parameter texture instructions were also packed in groups of four, the texture processing pipeline 400 might be unable to retrieve and process the parameters when executing at rates greater than four threads per clock cycle. However, parameter packing is more efficient when the parameters are packed in power-of-two groups. Therefore, in the case of a three-parameter texture instruction, the texture instruction parameters are packed in alternating one-parameter and two-parameter groups. In this manner, the parameters of a three-parameter texture instruction are packed compactly in power-of-two groups, and the texture processing pipeline can sustain an execution rate greater than four threads per clock cycle.
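One way to picture the power-of-two packing described above is the sketch below. Beyond "alternating one-parameter and two-parameter groups," the exact grouping the hardware uses is not specified in the text, so the details here are assumptions for illustration:

```python
# Sketch of power-of-two parameter packing: four-parameter instructions pack
# as groups of four; three-parameter instructions pack as alternating groups
# of one and two, so every group size is a power of two. Illustrative only.
def pack_params(params_per_thread, params):
    if params_per_thread == 4:
        sizes = [4]
    elif params_per_thread == 3:
        sizes = [1, 2]  # alternate one-parameter and two-parameter groups
    else:
        sizes = [params_per_thread]
    groups, i = [], 0
    while i < len(params):
        for s in sizes:
            if i >= len(params):
                break
            groups.append(tuple(params[i:i + s]))
            i += s
    return groups

# Three parameters per thread, two threads' worth of parameters:
print(pack_params(3, ["u0", "v0", "w0", "u1", "v1", "w1"]))
# -> [('u0',), ('v0', 'w0'), ('u1',), ('v1', 'w1')]
```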
If the TEXIO unit 402 determines that the texture instruction cannot be executed at a rate higher than four threads per clock cycle, the TEXIO unit 402 "overrules" the determination that the texture instruction can execute at the currently configured rate. The TEXIO unit 402 then reconfigures the instruction to execute at a lower rate. For example, the TEXIO unit 402 may overrule the configuration of a texture instruction set to execute at eight threads per clock cycle and reconfigure the texture instruction to execute at four threads per clock cycle.

If the TEXIO unit 402 determines that the texture instruction can only be executed at four threads per clock cycle, the TEXIO unit 402 splits the texture instruction into eight parts of four threads each, to execute at a rate of four threads per clock cycle. If the TEXIO unit 402 determines that the texture instruction can be executed at a rate greater than four threads per clock cycle, e.g., in the case of texture loads, the TEXIO unit 402 splits the texture instruction into multiple parts based on the value of "N". For example, if "N" = 2, a texture instruction that can execute at the higher rate is split into four parts of eight threads each, and the texture instruction executes at a rate of eight threads per clock cycle. If "N" = 4, the texture instruction is split into two parts of sixteen threads each, and the texture instruction executes at a rate of sixteen threads per clock cycle, and so on. For the following discussion, "N" is assumed to be 2. However, "N" can be any technically feasible number.

In some cases, the TEXIO unit 402 may be unable to determine, based on the opcode alone, whether a texture instruction can execute at four threads per clock cycle or at eight threads per clock cycle. In such cases, the TEXIO unit 402 makes the optimistic assumption that subsequent stages of the texture processing pipeline 400 can support the higher texture instruction execution rate. Subsequently, any other stage of the texture processing pipeline 400 may determine that the texture instruction cannot be executed in the current configuration. If a stage of the texture processing pipeline 400 determines that the texture instruction cannot be executed at the currently configured rate, that stage overrules the determination that the texture instruction can be executed at the currently configured rate. More specifically, when a given stage in the texture processing pipeline 400 receives a texture instruction to execute in one clock cycle, but the stage requires multiple clock cycles to execute the texture instruction, the stage stalls the pipeline, splits the texture instruction into sub-blocks, and executes each sub-block of the texture instruction in turn. The stage reconfigures the texture instruction to execute at a lower rate. For example, a stage may overrule the configuration of a texture instruction set to execute at eight threads per clock cycle and reconfigure the texture instruction to execute at four threads per clock cycle. As described herein, the TEXIN unit 404 can overrule a determination that a texture instruction can be executed at the currently configured rate. More generally, any technically feasible stage of the texture processing pipeline can overrule this determination.
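The splitting arithmetic just described reduces to dividing the 32-thread warp by the configured per-cycle rate, as in this sketch (names are illustrative):

```python
# Rate 4 -> 8 parts of four threads; rate 8 -> 4 parts of eight threads
# ("N" = 2); rate 16 -> 2 parts of sixteen threads ("N" = 4).
WARP_SIZE = 32

def split_warp(rate_threads_per_clock):
    assert WARP_SIZE % rate_threads_per_clock == 0
    parts = WARP_SIZE // rate_threads_per_clock
    return [(i * rate_threads_per_clock, (i + 1) * rate_threads_per_clock - 1)
            for i in range(parts)]  # inclusive thread-ID ranges per part

print(len(split_warp(4)))   # -> 8
print(len(split_warp(8)))   # -> 4
print(len(split_warp(16)))  # -> 2
```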
The TEXIN unit 404 receives the split texture instructions from the TEXIO unit 402. The TEXIN unit 404 retrieves the texture header state and texture sampler state from memory, based on the texture header index and texture sampler index included in the texture instruction. The texture header state and texture sampler state are stored in memory external to the texture processing pipeline 400. The TEXIN unit 404 stores the retrieved texture header state and texture sampler state in a local memory cache (not shown). Each stage in the texture processing pipeline 400 retrieves the texture header state and texture sampler state as needed to perform the operations of that stage. Further, if a subsequent texture instruction includes the same texture header index and/or texture sampler index as a previous texture instruction, the TEXIN unit 404 may access the texture header state and texture sampler state via the local memory cache. Accessing this state via the local memory cache avoids retrieving it from external memory when the state is already present in the local memory cache.

The texture header index is a pointer into a table of texture header state data that describes the format of the texture in memory, including, but not limited to, the location of the texture in memory, the dimensions of the texture, the number of color components per texel, the number of bits per color component, and whether the texture data is compressed. The texture sampler index is a pointer into a table of texture sampler state data that describes how the texture is to be sampled and the type of filtering applied to texels retrieved from the texture.

The TEXIN unit 404 analyzes the texture header state and texture sampler state associated with the texture instruction to make a second determination as to whether the texture instruction can be executed at eight threads per clock cycle or at four threads per clock cycle. If the TEXIN unit 404 determines that an incoming texture instruction configured to execute at eight threads per clock cycle can only be executed at four threads per clock cycle, the TEXIN unit 404 overrules the configuration. The TEXIN unit 404 reconfigures the texture instruction to execute at four threads per clock cycle.

For example, the TEXIN unit 404 may determine, based on the texture sampler state, whether a texture instruction is a texture operation that requests point sampling of the nearest texel. If the texture instruction is a point-sampled texture operation, the TEXIN unit 404 determines that the texture instruction can be executed at eight threads per clock cycle. On the other hand, the TEXIN unit 404 may determine, based on the texture sampler state, that the texture instruction is associated with more complex sampling or filtering operations. In that case, the TEXIN unit 404 overrules the texture instruction's configuration and reconfigures the instruction to execute at four threads per clock cycle. Similarly, if the texture header state data indicates that the texture instruction targets a texture that includes compressed data, the TEXIN unit 404 determines that the texture instruction can only be executed at four threads per clock cycle.

Some of the relevant state is a cross product of the texture instruction, the header state, and the sampler state. In such cases, the TEXIN unit 404 determines whether to overrule the configuration of the texture instruction based on some combination of the texture instruction, the header state, and the sampler state. The texture processing pipeline 400 maintains separate storage for the texture instruction, the header state, and the sampler state in order to access textures appropriately. Maintaining this separate storage enables the TEXIN unit 404 and other stages of the texture processing pipeline 400 to make veto decisions based on this cross-product state data.

If the texture instruction is neither a texture load nor a point-sampled texture operation, the TEXIN unit 404 sends the instruction to the LOD unit 406 for execution at four threads per clock cycle. If the texture instruction is a texture load or a point-sampled texture operation, the TEXIN unit 404 sends the instruction to the APS bypass FIFO 424 for execution at "N" times four threads per clock cycle. For example, if "N" = 2, the TEXIN unit 404 sends the instruction to the APS bypass FIFO 424 for execution at eight threads per clock cycle.
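A sketch of the TEXIN-stage decision just described follows, using hypothetical state flags; the actual header and sampler state words are not enumerated in the text, so the field names are assumptions:

```python
# Each stage starts from the optimistic rate and may only lower it
# ("overrule"), never raise it. Flag names are illustrative.
def texin_rate(current_rate, header, sampler):
    rate = current_rate
    if sampler.get("filter") != "nearest":      # complex sampling/filtering
        rate = min(rate, 4)
    if header.get("compressed"):                # compressed texture data
        rate = min(rate, 4)
    if header.get("bits_per_texel", 32) > 64:   # e.g. 96-bit texels
        rate = min(rate, 4)
    return rate

print(texin_rate(8, {"compressed": False}, {"filter": "nearest"}))  # -> 8
print(texin_rate(8, {"compressed": True},  {"filter": "nearest"}))  # -> 4
```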
For example, if "N"=2, then TEXIN unit 404 sends the instruction to APS bypass unit 424 for execution with eight threads per clock cycle.LOD unit 406 is configured to calculate the "level of detail" of the texture to be accessed from memory based on the position and orientation of a set of texel coordinates included within the texture instruction. The four threads operating together may execute texture instructions that include coordinates that define four locations on a surface in a 3D graphics scene that define geometric primitives (eg, quads). LOD unit 406 calculates the level of detail based on the distances of the four locations from each other and selects the corresponding texture from a set of textures. Each texture in the set of textures defines the same texture image, but with a different spatial resolution or level of detail. When four locations are mapped to corresponding texel locations, LOD unit 406 selects the texture that minimizes the distance of the four texels from each other. After calculating the level of detail, LOD unit 406 sends texture instructions to sample control and address unit 408 .APS bypass FIFO 424 is a delay matching FIFO to match the delay of LOD unit 406 . In some embodiments, some texture instructions executed within texture pipeline 400 may employ LOD unit 406, while other texture instructions (eg, texture loads and point-sampled texture operations) executed within texture pipeline 400 may not employ LOD unit 406 . Texture instructions using LOD unit 406 pass through LOD unit 406 . Texture instructions that do not employ LOD unit 406 do not pass through LOD unit 406 . If LOD unit 406 is not currently processing any texture operations, and TEXIN unit 404 sends a texture instruction that does not use LOD unit 406, the texture instruction bypasses LOD unit 406. Texture instructions pass through the APS bypass FIFO 424 and arrive at the sample control and address unit 408 with little delay. If LOD unit 406 is currently processing any texture operations, and TEXIN unit 404 sends a texture instruction that does not use LOD unit 406, then the texture instruction bypasses LOD unit 406. The texture instruction enters APS bypass FIFO 424 and remains in APS bypass FIFO 424 until processing of the texture instruction by LOD unit 406 reaches sample control and address unit 408 . Subsequently, the texture instructions in the APS bypass FIFO 424 arrive at the sample control and address unit 408 . In this manner, texture instruction processing through LOD unit 406 and APS bypass FIFO 424 maintains the original order.In operation, sample control and address unit 408 receives texture instructions from LOD unit 406 and APS bypass FIFO 424 . The stream of texture instructions from LOD unit 406 is in order. Likewise, the stream of texture instructions from the APS bypass FIFO 424 is in-order. However, subsequent texture instructions from LOD unit 406 may arrive at sample control and address unit 408 before earlier texture instructions from APS bypass FIFO 424 . Likewise, subsequent texture instructions from APS bypass FIFO 424 may arrive at sample control and address unit 408 before earlier texture instructions from LOD unit 406 . Since the sample control and address unit 408 receives texture instructions from two different sources, the texture instructions may be out of order. 
The sample control and address unit 408 performs various sampling and filtering operations on certain texture instructions and also provides information on how to sample the texture for certain texture instructions. The sample control and address unit 408 additionally handles texture instructions whose texel coordinates extend beyond the boundaries of a given texture or cross the boundary between two textures. The sample control and address unit 408 compares the texel coordinates against the size of the selected texture. If the texel coordinates are outside the bounds of the selected texture, the sample control and address unit 408 performs one or more operations to handle the out-of-bounds texel coordinates. The sample control and address unit 408 may clamp, or constrain, out-of-bounds coordinates to the boundaries of the texture. Additionally or alternatively, the sample control and address unit 408 may "wrap" out-of-bounds coordinates around to the opposite side of the texture by performing a MOD operation on the out-of-bounds coordinates. Additionally or alternatively, the sample control and address unit 408 may invalidate or discard texture operations that include one or more out-of-bounds coordinates. In this manner, the sample control and address unit 408 ensures that all texel coordinates lie within the bounds of the associated texture.
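The three out-of-bounds policies just described (clamp, wrap via MOD, discard) can be sketched per coordinate; the function name and return convention are illustrative:

```python
def resolve_coord(c, size, mode):
    # Map a possibly out-of-bounds integer texel coordinate into [0, size).
    if 0 <= c < size:
        return c
    if mode == "clamp":
        return min(max(c, 0), size - 1)   # constrain to the nearest boundary
    if mode == "wrap":
        return c % size                    # Python's % already lands in [0, size)
    if mode == "discard":
        return None                        # caller invalidates the operation
    raise ValueError(mode)

print(resolve_coord(9, 8, "clamp"))   # -> 7
print(resolve_coord(9, 8, "wrap"))    # -> 1
print(resolve_coord(-1, 8, "wrap"))   # -> 7
```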
For texture loads and point-sampled texture operations, these filtering operations are not performed. Instead, if the coordinates of a texture load or point-sampled texture operation are in floating point format, the sample control and address unit 408 converts the coordinates to integer texel coordinates. This conversion to surface load instructions enables texture loads and point-sampled texture operations to use the existing surface instruction circuitry, which has already been optimized to accommodate eight threads.

Additionally, the sample control and address unit 408 performs various address calculations to generate tags based on the texel coordinates and the level of detail within the texture instruction. The tags correspond to entries in the tag table included in the tag unit 410. A tag identifies the unique cache line in memory where the associated texel is stored. In some embodiments, a cache line may include 128 bytes. An associated offset identifies the location, within the cache line, of the first byte of the associated texel. In some embodiments, a tag is also associated with a value indicating the size of the associated texel. In some embodiments, a tag may be formed based on the texture header index, the texture type, and the high-order bits of the coordinates of the texels stored in the cache line. All texels in a particular cache line share certain high-order texel coordinate bits, and these high-order bits are used in part to form the tag. The sample control and address unit 408 passes the texture instruction, the address calculation results, and the sampling control information to the tag unit 410 and the filter weight unit 418. The sample control and address unit 408 can generate up to 16 texel tag/offset/set identifier combinations per clock cycle, for the simultaneous retrieval of up to 16 texels.

The tag unit 410 receives up to 16 texel tag/offset/set identifier combinations from the sample control and address unit 408 per clock cycle and correspondingly accesses up to 16 texels per clock cycle. The tag unit 410 includes a tag table that stores a set of texture header entries. Each texture header entry in the tag unit 410 represents a cache line within the data unit 416. The data unit 416 may represent a cache memory residing within the texture unit 315, or any technically feasible cache memory associated with the SM 310. Upon receiving a memory access request and the address calculation results from the sample control and address unit 408, the tag unit 410 determines whether the tag table includes a texture header entry corresponding to the texture data to be retrieved.

When the tag table includes an entry corresponding to the texture data to be accessed, a cache hit occurs, and the tag unit 410 determines that the texture data to be accessed resides in the data unit 416. The tag unit 410 retrieves the entry by searching the tag table and retrieves a pointer to the location within the data unit 416 where the texture data actually resides. The tag unit 410 communicates the offset to the data FIFO 414. When the tag table does not include a texture header entry corresponding to the texture data to be accessed, a cache miss occurs, and the tag unit 410 causes the miss handling unit 412 to access the requested texture data from external memory.

The data FIFO 414, along with the filter weight FIFO 420, delays the information from the tag unit 410 by the appropriate amount, so that the data from the tag unit 410 and the corresponding data from the filter weight unit 418 arrive at the data unit 416 at the same time. The filter weight unit 418 prepares the per-texel weights used for the interpolation and/or filtering of texel values in the filter and return unit 422. The filter weight FIFO 420 delays the information from the filter weight unit 418 to match the latency through the tag unit 410, the data FIFO 414, and the other relevant stages of the texture processing pipeline 400, again so that the data from the filter weight unit 418 and the corresponding data from the tag unit 410 arrive at the data unit 416 at the same time.

The miss handling unit 412 accesses a requested texel by computing a virtual address based on data included in the texture instruction, the texture header, and the texel coordinates calculated by the sample control and address unit 408. The miss handling unit 412 then sends a read request to read the requested data from its physical location. In various embodiments, the miss handling unit 412 may reside within the texture unit 315 or within the MMU 320 shown in FIG. 3A. The data unit 416 receives the texel data from external memory via the memory interface 214 and the crossbar unit 210, and updates the tag table within the tag unit 410 to reflect the newly cached texels.
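A sketch of tag formation and lookup as just described: the tag is derived from the high-order part of the texel address, a hit yields a cache-line pointer, and a miss triggers miss handling. The 1024-texel row width, 4-byte texels, and 128-byte lines are assumptions for illustration, not the disclosed layout:

```python
class TagTable:
    def __init__(self):
        self.entries = {}                  # tag -> cache line index in the data unit

    def lookup(self, tag):
        return self.entries.get(tag)       # None signals a cache miss

def tag_and_offset(texture_id, u, v, bytes_per_texel=4, texels_per_line=32):
    linear = v * 1024 + u                  # assumed row-major, 1024-texel-wide texture
    line, index = divmod(linear, texels_per_line)
    tag = (texture_id, line)               # stands in for the high-order coordinate bits
    return tag, index * bytes_per_texel    # byte offset within the 128-byte line

table = TagTable()
tag, offset = tag_and_offset(texture_id=7, u=3, v=2)
if table.lookup(tag) is None:              # miss: fetch the line, update the table
    table.entries[tag] = 0                 # pretend the line was cached at index 0
print(table.lookup(tag), offset)           # -> 0 12
```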
The data unit 416 receives, from the data FIFO 414, pointers to the cache lines of one or more texels, and receives the corresponding filter weight values (if any) from the filter weight FIFO 420. The data unit 416 retrieves the data associated with the one or more texels from the cache memory and passes the retrieved data and the associated filter weight information to the filter and return unit 422. In some cases, the data unit 416 may serialize accesses to the texel data over multiple clock cycles to accommodate certain access restrictions of the memory caches within the data unit 416. The data unit 416 collects and deserializes such texel data until all of the texel data needed to complete each individual request received from the data FIFO 414 has been accumulated. Typically, multiple requests received from the data FIFO 414 are executed to complete an entire warp instruction. For example, if a warp instruction has 32 threads and the texture filtering operation processes 4 threads at a time, then 8 or more requests received from the data FIFO 414 are executed to complete the texture instruction. Additionally, the texel data stored in the data unit 416 may be compressed via any technically feasible compression technique, in which case the data unit 416 may decompress the texel data for further processing.

At this point, the data unit 416 has the texel data needed to complete a portion of the texture instruction, referred to herein as a "wavefront." Texture instructions are processed through the texture processing pipeline 400 as a series of such wavefronts, with each wavefront processing "M" threads per clock cycle. Each clock cycle, a wavefront passes from stage to stage within the texture processing pipeline 400. A wavefront for point-sampled texture operations and texture loads can include data values for 8 threads. A wavefront for filtered texture operations can include data for up to 4 threads. For some texture instructions, the texel data consists of one texel per thread, up to the number of available memory ports. For certain other texture instructions, the texel data includes four texels for each of up to 4 threads.

Using traditional approaches, texture instructions, whether texture loads or texture operations, return the same amount of data per thread. For example, current technology can return up to four 32-bit data components for each of four threads in two clock cycles, for a total of four threads times 64 bits, or 256 bits, per clock cycle. Using the disclosed techniques, texture instructions can return up to a total of eight threads times 64 bits, or 512 bits, per clock cycle.

The filter and return unit 422 receives the data and the associated filter weight values from the data unit 416. The filter and return unit 422 applies one or more filters to the received data, including, but not limited to, isotropic filters and anisotropic filters. The filter and return unit 422 computes the final color values for the various parts of the texture instruction, where each part includes the final color values for a portion of the 32 threads in the warp. For some texture instructions, the filter and return unit 422 may compute four final color data values for four threads per clock cycle over eight clock cycles. For certain other texture instructions, the filter and return unit 422 may compute eight final color data values for eight threads per clock cycle over four clock cycles. The filter and return unit 422 also includes a bypass FIFO (not shown) that bypasses the filters and associated logic for texture loads and point-sampled texture operations. The filter and return unit 422 assembles the final color data for each of the 32 threads in the warp and sends the final color data for all 32 threads to the SM 310.

Typically, the stages of the texture processing pipeline 400 execute at eight threads per cycle unless and until a particular stage lacks sufficient resources to execute the current texture instruction at that rate. That stage then overrules the current configuration of the texture instruction and reconfigures the texture instruction to execute at four threads per cycle.
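Taken together, the staged decisions amount to taking the minimum rate any stage will accept, starting from the optimistic configuration. The checks below are hypothetical stand-ins for the TEXIO and TEXIN determinations, not the actual state inspected by the hardware:

```python
# A sketch of the staged veto logic as a whole: start optimistic, and let each
# stage only lower (never raise) the configured rate.
def final_rate(optimistic_rate, stage_checks, instr):
    rate = optimistic_rate
    for check in stage_checks:
        rate = min(rate, check(instr))
    return rate

texio_check = lambda i: 8 if i["op"] in ("texture_load", "point_sample") else 4
texin_check = lambda i: 4 if i.get("compressed") else 8

print(final_rate(8, [texio_check, texin_check], {"op": "texture_load"}))  # -> 8
print(final_rate(8, [texio_check, texin_check],
                 {"op": "texture_load", "compressed": True}))             # -> 4
```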
As described herein, the TEXIO unit 402 generates vetoes based on the texture instruction opcode and the associated instruction modifiers. The TEXIN unit 404 generates vetoes based on the texture instruction, the header state, and the sampler state, individually or in any combination. Various non-exclusive conditions that lead to a veto are now described.

In one example, the TEXIO unit 402 may receive a texture instruction with an opcode that is not eligible to execute at more than four threads per clock cycle. As a result, the TEXIO unit 402 overrules the configuration of the texture instruction.

In another example, the TEXIO unit 402 may receive a texture load and determine that the texture load may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the header state data and determines that the texture is made up of texels that are each 96 bits wide. Because the texture processing pipeline 400 is not configured to retrieve and process eight 96-bit texels in one clock cycle, the TEXIN unit 404 overrules the configuration of the texture load.

In yet another example, the TEXIO unit 402 may receive a texture operation and determine that the texture operation may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the sampler state data and determines whether the texture operation performs more complex sampling and/or filtering. Because the texture processing pipeline 400 is not configured to process eight texels with complex sampling and/or filtering in one clock cycle, the TEXIN unit 404 overrules the configuration of texture operations that include such complex sampling and/or filtering. On the other hand, if the texture operation performs nearest-texel sampling, the texture operation is eligible to execute at eight threads per clock cycle, and the TEXIN unit 404 does not overrule the configuration of the texture operation.

In yet another example, the TEXIO unit 402 may receive a texture operation and determine that the texture operation may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the header state data and determines that the texture operation targets a texture that includes compressed texture data. Because the texture processing pipeline 400 is not configured to decompress and process eight texels in one clock cycle, the TEXIN unit 404 overrules the configuration of the texture operation.

In yet another example, the TEXIO unit 402 may receive a texture operation and determine that the texture operation may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the header state data and determines that the texture operation requires the LOD unit 406, which can only execute at four threads per clock cycle. The TEXIN unit 404 determines, from the header state data, the number of levels of detail included in the texture. If the texture includes multiple levels of detail and the texture instruction specifies LOD computation, the TEXIN unit 404 overrules the configuration of the texture operation and directs the texture operation to the LOD unit 406. On the other hand, if the texture includes only one level of detail, there is no need to direct the texture operation to the LOD unit 406, because there is no need to determine which level of detail to access.
In that case, the TEXIN unit 404 does not overrule the configuration of the texture operation, but instead directs the texture operation to the APS bypass FIFO 424.

In yet another example, the TEXIO unit 402 may receive a texture operation and determine that the texture operation may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the header state data and determines the addressing mode that the sample control and address unit 408 will apply to the texture operation. The addressing mode determines how texel addresses computed by the sample control and address unit 408 that fall outside the range of the texture are handled. If the addressing mode of the texture operation is a simple addressing mode, such as clamping to the value of the nearest boundary texel, the TEXIN unit 404 does not overrule the configuration of the texture operation. On the other hand, if the addressing mode of the texture operation is a more complex addressing mode that requires additional processing of texels beyond the boundary, the TEXIN unit 404 overrules the configuration of the texture operation.

In yet another example, the TEXIO unit 402 may receive a texture operation and determine that the texture operation may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the header state data and determines that the texture operation targets texels that are converted from one color space to another. Because the texture processing pipeline 400 is not configured to process eight texels and perform color space conversion in one clock cycle, the TEXIN unit 404 overrules the configuration of the texture operation.

In yet another example, the TEXIO unit 402 may receive a texture operation and determine that the texture operation may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the header state data and determines that the texture operation generates a final color value that must be up-converted before the final color value is returned to the SM 310. Because the texture processing pipeline 400 is not configured to process eight texels and perform up-conversion of the final color value in one clock cycle, the TEXIN unit 404 overrules the configuration of the texture operation.

In yet another example, the TEXIO unit 402 may receive a texture gather operation and determine that the texture gather operation may be eligible to execute at eight threads per clock cycle. Subsequently, the TEXIN unit 404 accesses the header state data and determines that this particular texture gather operation cannot be performed at eight threads per clock cycle. For example, the texture components accessed by the texture gather operation may have specific formats and/or alignments that the texture processing pipeline 400 cannot access at the accelerated rate. As a result, the TEXIN unit 404 overrules the configuration of the texture gather operation.

In yet another example, a stage of the texture processing pipeline 400 (e.g., the sample control and address unit 408) may determine that a texture instruction includes floating point texel coordinates that represent exact integer coordinates. That stage may determine that texture instructions with such texel coordinates are eligible to execute at eight threads per clock cycle.

FIG. 5 is a flowchart of method steps for performing memory access operations in the texture processing pipeline 400, according to various embodiments. Although the method steps are described in conjunction with the systems of FIGS.
1-4, those of ordinary skill in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present disclosure.

As shown, the method 500 begins at step 502, where a first stage in the texture processing pipeline 400 generates a first determination that a texture memory query is eligible for acceleration. In some embodiments, the first stage includes the TEXIO unit 402 of the texture processing pipeline 400 of FIG. 4. The TEXIO unit 402 processes texture instructions, which include texture loads and texture operations. The TEXIO unit 402 receives texture instructions from the SM 310 for execution by the 32 threads in a warp. The TEXIO unit 402 divides each texture instruction into multiple parts, where each part includes the texture instruction for a subset of the threads in the warp. The TEXIO unit 402 analyzes the texture instruction opcode, along with certain parameters and modifiers of the texture instruction, to make a first determination as to whether the texture instruction can be executed at four threads per clock cycle or at a higher number of threads per clock cycle.

Initially, the TEXIO unit 402 assumes that the texture instruction can be executed at a rate greater than four threads per clock cycle. If the TEXIO unit 402 determines that the texture instruction cannot execute at a rate higher than four threads per clock cycle, the TEXIO unit 402 "overrules" the determination that the texture instruction can execute at the currently configured rate. The TEXIO unit 402 then reconfigures the instruction to execute at a lower rate. For example, the TEXIO unit 402 may overrule the configuration of a texture instruction set to execute at eight threads per clock cycle and reconfigure the texture instruction to execute at four threads per clock cycle.

If the TEXIO unit 402 determines that the texture instruction can only be executed at four threads per clock cycle, the TEXIO unit 402 splits the texture instruction into eight parts of four threads each, to execute at a rate of four threads per clock cycle. If the TEXIO unit 402 determines that the texture instruction can be executed at a rate greater than four threads per clock cycle, e.g., in the case of texture loads, the TEXIO unit 402 splits the texture instruction into multiple parts based on the value of "N". For example, if "N" = 2, a texture instruction that can execute at the higher rate is split into four parts of eight threads each, and the texture instruction executes at a rate of eight threads per clock cycle. If "N" = 4, the texture instruction is split into two parts of sixteen threads each, and the texture instruction executes at a rate of sixteen threads per clock cycle, and so on. For the following discussion, "N" is assumed to be 2. However, "N" can be any technically feasible number.

In some cases, the TEXIO unit 402 may be unable to determine, based on the opcode, whether a texture instruction can execute at four threads per clock cycle or at eight threads per clock cycle. In such cases, the TEXIO unit 402 splits the texture instruction into four parts of eight threads each, on the assumption that the texture instruction can be executed at a rate of eight threads per clock cycle. Subsequently, any other stage of the texture processing pipeline 400 may determine that the texture instruction cannot be executed in the current configuration.
If a stage of the texture processing pipeline 400 determines that the texture instruction cannot be executed at the currently configured rate, that stage overrules the determination that the texture instruction can be executed at the currently configured rate. The stage reconfigures the instruction to execute at a lower rate. For example, a stage may overrule the configuration of a texture instruction set to execute at eight threads per clock cycle and reconfigure the texture instruction to execute at four threads per clock cycle.

At step 504, the first stage of the texture processing pipeline advances the texture memory query to the next stage in the texture processing pipeline 400. At step 506, a second stage in the texture processing pipeline 400 generates a second determination that the texture memory query is eligible for acceleration. In some embodiments, the second stage includes the TEXIN unit 404 of the texture processing pipeline 400 of FIG. 4.

The TEXIN unit 404 receives the split texture instructions from the TEXIO unit 402. The TEXIN unit 404 retrieves the texture header state and texture sampler state from memory, based on the texture header index and texture sampler index included in the texture instruction. The texture header state and texture sampler state are stored in memory external to the texture processing pipeline 400. The TEXIN unit 404 stores the retrieved texture header state and texture sampler state in a local memory cache. Each stage in the texture processing pipeline 400 retrieves the texture header state and texture sampler state as needed to perform the operations of that stage. Further, if a subsequent texture instruction includes the same texture header index and/or texture sampler index as a previous texture instruction, the TEXIN unit 404 may access the texture header state and texture sampler state via the local memory cache. Accessing this state via the local memory cache avoids retrieving it from external memory when the state is already present in the local memory cache.

The TEXIN unit 404 analyzes the texture header state and texture sampler state associated with the texture instruction to make a second determination as to whether the texture instruction can be executed at eight threads per clock cycle or at four threads per clock cycle. If the TEXIN unit 404 determines that an incoming texture instruction configured to execute at eight threads per clock cycle can only be executed at four threads per clock cycle, the TEXIN unit 404 overrules the configuration. The TEXIN unit 404 reconfigures the texture instruction to execute at four threads per clock cycle.

For example, the TEXIN unit 404 may determine, based on the texture sampler state, whether a texture instruction is a point-sampled texture operation requesting the nearest texel. If the texture instruction is a point-sampled texture operation, the TEXIN unit 404 determines that the texture instruction can be executed at eight threads per clock cycle. On the other hand, the TEXIN unit 404 may determine, based on the texture sampler state, that the texture instruction is associated with more complex sampling or filtering operations. In that case, the TEXIN unit 404 overrules the texture instruction's configuration and reconfigures the instruction to execute at four threads per clock cycle.
Similarly, if the texture header state data indicates that the texture instruction targets a texture that includes compressed data, the TEXIN unit 404 determines that the texture instruction can only execute at four threads per clock cycle.

At step 508, one or more stages in the texture processing pipeline 400 process the texture memory query based on one or both of the first determination and the second determination. The method 500 then terminates.

In sum, various embodiments include a texture processing pipeline in a GPU that determines, in a first stage of the texture processing pipeline, whether texture operations and texture loads can be processed at an accelerated rate. The texture processing pipeline then re-evaluates that determination at one or more additional stages of the texture processing pipeline. At each stage that includes a decision point, the texture processing pipeline assumes that the current texture operation or texture load can be accelerated, unless specific, known information indicates that it cannot. As the texture operation or texture load progresses through the stages, the texture processing pipeline acquires additional information about the texture operation or texture load. The texture processing pipeline thus determines at multiple stages whether texture operations and texture loads can be processed at the accelerated rate. As a result, the texture processing pipeline increases the number of texture operations and texture loads that are accelerated, relative to the number that are not.

At least one technical advantage of the disclosed techniques relative to the prior art is that, with the disclosed techniques, a greater percentage of the texture memory access capacity is used during texture loads and during simple texture operations. As a result, the efficiency and performance of the texture processing hardware during texture loads and texture operations are improved relative to existing approaches. Another technical advantage of the disclosed techniques is that the texture processing hardware includes multiple stages for determining whether the memory access capability of the texture processing hardware can be used more efficiently. As a result, a greater number of texture loads and texture operations can take advantage of the disclosed techniques, relative to approaches that make this determination at only a single stage of the texture processing hardware. These advantages represent one or more technical improvements over prior art approaches.

Any and all combinations, in any fashion, of any of the claim elements recited in any of the claims and/or any elements described in this application fall within the contemplated scope of the present disclosure and protection.

The descriptions of the various embodiments have been presented for purposes of illustration and are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.

Aspects of the present embodiments may be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.),
or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "module" or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.

Any combination of one or more computer-readable media may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general-purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It should also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
A method and system to combine multiple register units within a microprocessor, such as, for example, a digital signal processor, are described. A first register unit and a second register unit are retrieved from a register file structure within a processing unit, the first register unit and the second register unit being non-adjacently located within the register file structure. The first register unit and the second register unit are further combined during execution of a single instruction to form a resulting register unit. Finally, the resulting register unit is stored within the register file structure for further processing. Alternatively, a first half word unit from the first register unit and a second half word unit from the second register unit are retrieved. The first half word unit and the second half word unit are further input into corresponding high and low portions of a resulting register unit to form the resulting register unit during execution of a single instruction. Finally, the resulting register unit is stored within the register file structure for further processing.
CLAIMS What is claimed is: 1. A computer readable medium comprising: an instruction of a plurality of executable instructions contained within said medium, which, when executed in a processing system, causes said processing system to combine selectively a first register unit and a second register unit from a register file structure to form a resulting register unit, said first register unit and said second register unit being non-adjacently located within said register file structure. 2. The computer readable medium according to Claim 1, wherein data residing within said first register unit and data residing within said second register unit are stored into corresponding portions of said resulting register unit. 3. The computer readable medium according to Claim 2, wherein said first register unit and said second register unit are 32-bit wide register units and said resulting register unit is a 64-bit wide register unit. 4. The computer readable medium according to Claim 1, wherein a first half word unit of said first register unit and a second half word unit of said second register unit are stored into corresponding portions of said resulting register unit. 5. The computer readable medium according to Claim 4, wherein said first half word unit and said second half word unit are 16-bit wide units and said resulting register unit is a 32-bit wide register unit. 6. A method comprising: receiving an executable instruction; and executing said instruction to combine selectively a first register unit and a second register unit from a register file structure to form a resulting register unit, said first register unit and said second register unit being non-adjacently located within said register file structure. 7. The method according to Claim 6, wherein said executing further comprises: storing data residing within said first register unit and data residing within said second register unit into corresponding portions of identical width within said resulting register unit. 8. The method according to Claim 7, wherein said first register unit and said second register unit are 32-bit wide register units and said resulting register unit is a 64- bit wide register unit. 9. The method according to Claim 6, wherein said executing further comprises: storing a first half word unit of said first register unit and a second half word unit of said second register unit into corresponding portions of identical width within said resulting register unit. 10. The method according to Claim 9, wherein said first half word unit and said second half word unit are 16-bit wide units and said resulting register unit is a 32-bit wide register unit. 11. The method according to Claim 6, wherein said executing further comprises: retrieving data associated with said first register unit and said second register unit from a memory; storing said data within said respective first and second register units; and selectively combining said data into said resulting register unit. 12. A method comprising : retrieving a first register unit and a second register unit from a register file structure within a processing unit, said first register unit and said second register unit being non-adjacently located within said register file structure; selectively combining said first register unit and said second register unit to form a resulting register unit during execution of a single instruction; and storing said resulting register unit within said register file structure for further processing. 13. 
13. The method according to Claim 12, wherein said combining further comprises: receiving said instruction to combine said first register unit and said second register unit; and executing said instruction within said processing unit.

14. The method according to Claim 12, wherein said combining further comprises: storing data residing within said first register unit and data residing within said second register unit into corresponding portions of identical width within said resulting register unit.

15. The method according to Claim 14, wherein said first register unit and said second register unit are 32-bit wide register units and said resulting register unit is a 64-bit wide register unit.

16. The method according to Claim 12, wherein said combining further comprises: storing a first half word unit of said first register unit and a second half word unit of said second register unit into corresponding portions of identical width within said resulting register unit.

17. The method according to Claim 16, wherein said first half word unit and said second half word unit are 16-bit wide units and said resulting register unit is a 32-bit wide register unit.

18. The method according to Claim 12, further comprising: retrieving data associated with said first register unit and said second register unit from a memory; storing said data within said respective first and second register units; and selectively combining said data into said resulting register unit.

19. A computer readable medium containing executable instructions, which, when executed in a processing system, cause said processing system to perform a method comprising: retrieving a first register unit and a second register unit from a register file structure within a processing unit, said first register unit and said second register unit being non-adjacently located within said register file structure; selectively combining said first register unit and said second register unit to form a resulting register unit during execution of a single instruction; and storing said resulting register unit within said register file structure for further processing.

20. The computer readable medium according to Claim 19, wherein said combining further comprises: receiving said instruction to combine said first register unit and said second register unit; and executing said instruction within said processing unit.

21. The computer readable medium according to Claim 19, wherein said combining further comprises: storing data residing within said first register unit and data residing within said second register unit into corresponding portions of identical width within said resulting register unit.

22. The computer readable medium according to Claim 21, wherein said first register unit and said second register unit are 32-bit wide register units and said resulting register unit is a 64-bit wide register unit.

23. The computer readable medium according to Claim 19, wherein said combining further comprises: storing a first half word unit of said first register unit and a second half word unit of said second register unit into corresponding portions of identical width within said resulting register unit.

24. The computer readable medium according to Claim 23, wherein said first half word unit and said second half word unit are 16-bit wide units and said resulting register unit is a 32-bit wide register unit.
25. The computer readable medium according to Claim 19, wherein said method further comprises: retrieving data associated with said first register unit and said second register unit from a memory; storing said data within said respective first and second register units; and selectively combining said data into said resulting register unit.

26. An integrated circuit comprising: a memory to store packets comprising one or more instructions; and a processor coupled to said memory, said processor further comprising a processing unit and a register file structure coupled to said processing unit; said processing unit to retrieve a first register unit and a second register unit from said register file structure, said first register unit and said second register unit being non-adjacently located within said register file structure, to combine selectively said first register unit and said second register unit to form a resulting register unit during execution of a single instruction, and to store said resulting register unit within said register file structure for further processing.

27. The circuit according to Claim 26, wherein said processing unit further receives said instruction to combine said first register unit and said second register unit from said memory and executes said instruction.

28. The circuit according to Claim 26, wherein said processing unit further stores data residing within said first register unit and data residing within said second register unit into corresponding portions of identical width within said resulting register unit.

29. The circuit according to Claim 28, wherein said first register unit and said second register unit are 32-bit wide register units and said resulting register unit is a 64-bit wide register unit.

30. The circuit according to Claim 26, wherein said processing unit further stores a first half word unit of said first register unit and a second half word unit of said second register unit into corresponding portions of identical width within said resulting register unit.

31. The circuit according to Claim 30, wherein said first half word unit and said second half word unit are 16-bit wide units and said resulting register unit is a 32-bit wide register unit.

32. The circuit according to Claim 26, wherein said memory further stores data associated with said first register unit and said second register unit, said register file structure further retrieves said data and stores said data within said respective first and second register units, and said processing unit further combines selectively said data into said resulting register unit.

33. An apparatus comprising: means for retrieving a first register unit and a second register unit from a register file structure within a processing unit, said first register unit and said second register unit being non-adjacently located within said register file structure; means for selectively combining said first register unit and said second register unit to form a resulting register unit during execution of a single instruction; and means for storing said resulting register unit within said register file structure for further processing.

34. The apparatus according to Claim 33, further comprising: means for receiving said instruction to combine said first register unit and said second register unit; and means for executing said instruction within said processing unit.
35. The apparatus according to Claim 33, further comprising: means for storing data residing within said first register unit and data residing within said second register unit into corresponding portions of identical width within said resulting register unit.

36. The apparatus according to Claim 35, wherein said first register unit and said second register unit are 32-bit wide register units and said resulting register unit is a 64-bit wide register unit.

37. The apparatus according to Claim 33, further comprising: means for storing a first half word unit of said first register unit and a second half word unit of said second register unit into corresponding portions of identical width within said resulting register unit.

38. The apparatus according to Claim 37, wherein said first half word unit and said second half word unit are 16-bit wide units and said resulting register unit is a 32-bit wide register unit.

39. The apparatus according to Claim 33, further comprising: means for retrieving data associated with said first register unit and said second register unit from a memory; means for storing said data within said respective first and second register units; and means for selectively combining said data into said resulting register unit.
METHOD AND SYSTEM TO COMBINE MULTIPLE REGISTER UNITS WITHIN A MICROPROCESSOR

BACKGROUND

Field of the Invention

The present invention relates generally to microprocessors and, more specifically, to a method and system to combine multiple register units within a microprocessor, such as, for example, a digital signal processor.

Background

Typically, computer systems include one or more microprocessor devices, each microprocessor device being configured to perform operations on values stored within a memory of the computer system and to manage the overall operation of the computer system. These computer systems may also include various multimedia devices, such as, for example, sound cards and/or video cards, each multimedia device further including one or more processors, such as, for example, digital signal processors (DSPs), which perform complex mathematical computations within each respective multimedia device.

A digital signal processor (DSP) typically includes hardware execution units specifically configured to perform such mathematical calculations, such as, for example, one or more arithmetic logic units (ALU), one or more multiply-and-accumulate units (MAC), and other functional units configured to perform operations specified by a set of instructions within the DSP. Such operations may include, for example, arithmetic operations, logical operations, and other data processing operations, each being defined by an associated set of instructions. Generally, the execution units within the DSP read data and operands from a register file coupled to the memory and to the execution units, perform the instruction operations, and store the results into the register file. The register file includes multiple register units, each register unit being accessible as a single register or as aligned pairs of two adjacent register units. However, certain specific operations, such as, for example, operations to add or subtract data, require data from separate register units within the register file to be properly aligned for execution of the instructions. Thus, what is needed is a method and system to combine multiple non-adjacent register units within a DSP during execution of a single instruction in order to enable proper alignment of data stored within such register units.

SUMMARY

A method and system to combine multiple register units within a microprocessor, such as, for example, a digital signal processor, are described. In one embodiment, a first register unit and a second register unit are retrieved from a register file structure within a processing unit, the first register unit and the second register unit being non-adjacently located within the register file structure. The first register unit and the second register unit are further combined during execution of a single instruction to form a resulting register unit. Finally, the resulting register unit is stored within the register file structure for further processing.

In an alternate embodiment, subsequent to the retrieval of the first and second register units, a first half word unit from the first register unit and a second half word unit from the second register unit are retrieved. The first half word unit and the second half word unit are further input into corresponding high and low portions of a resulting register unit to form the resulting register unit during execution of a single instruction. Finally, the resulting register unit is stored within the register file structure for further processing.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a digital signal processing system within which a set of instructions may be executed;

FIG. 2 is a block diagram illustrating one embodiment of a general register structure within the digital signal processing system;

FIG. 3 is a block diagram illustrating one embodiment of a Very Long Instruction Word (VLIW) digital signal processing system architecture;

FIG. 4 is a flow diagram illustrating one embodiment of a method to combine register units within the digital signal processing system;

FIG. 5 is a block diagram illustrating the method to combine register units described in connection with FIG. 4;

FIG. 6 is a flow diagram illustrating an alternate embodiment of a method to combine register units within the digital signal processing system;

FIG. 7 is a block diagram illustrating the method to combine register units described in connection with FIG. 6.

DETAILED DESCRIPTION

A method and system to combine multiple register units within a microprocessor, such as, for example, a digital signal processor, are described. Although the system described below enables a digital signal processor (DSP) to combine the register units, it is to be understood that the system may be implemented using a microprocessor device, or any other processing unit capable of combining multiple register units into a resulting larger register unit during execution of a single instruction.

Generally, execution units within the DSP read data and operands from a register file, perform instruction operations, and store the results into the register file. The register file includes multiple register units, each register unit being accessible as a single register or as aligned pairs of two adjacent register units. However, certain specific operations, such as, for example, operations to add or subtract data, require data from separate register units within the register file to be properly aligned for execution of the instructions. The embodiments described in detail below facilitate the combination/concatenation of multiple non-adjacent register units within a DSP during execution of a single instruction in order to enable proper alignment of data stored within such register units in preparation for subsequent vector operations.

In one embodiment, a first register unit and a second register unit are retrieved from a register file structure within a processing unit, the first register unit and the second register unit being non-adjacently located within the register file structure. The first register unit and the second register unit are further combined during execution of a single instruction to form a resulting larger register unit. Finally, the resulting register unit is stored within the register file structure for further processing.

In an alternate embodiment, subsequent to the retrieval of the first and second register units, a first half word unit from the first register unit and a second half word unit from the second register unit are retrieved. The first half word unit and the second half word unit are further input into corresponding high and low portions of a resulting register unit to form the resulting register unit during execution of a single instruction. Finally, the resulting register unit is stored within the register file structure for further processing.

FIG. 1 is a block diagram of a digital signal processing system within which a set of instructions may be executed.
As illustrated in FIG. 1, the digital signal processing system 100 includes a processing unit 110, a memory 150, and one or more buses 160 coupling the processing unit 110 to the memory 150.

The memory 150 stores data and instructions, such as, for example, in the form of Very Long Instruction Word (VLIW) packets produced by a VLIW compiler, each VLIW packet comprising one or more instructions. Each instruction of a packet is typically of a predetermined width and has a particular address in the memory 150, such that a first instruction in a packet typically has a lower memory address than a last instruction of the packet. Addressing schemes for a memory are well known in the art and are not discussed in detail here. Instructions in the memory 150 are loaded into the processing unit 110 via buses 160.

The processing unit 110 further comprises a central processing unit core 130 coupled to one or more register file structures 120 via one or more pipelines 140. The processing unit 110 may further comprise one or more microprocessors, digital signal processors, or the like.

The register file 120 further comprises a set of general register units, which support general purpose computations and which are described in further detail below in connection with FIG. 2, and a set of control register units, which support special-purpose functionality, such as, for example, hardware loops, predicates, and other special operands.

FIG. 2 is a block diagram illustrating one embodiment of a general register structure within the digital signal processing system. As illustrated in FIG. 2, in one embodiment, the general register file structure 200 within the register file 120 includes multiple register units, such as, for example, thirty-two 32-bit wide register units 210, each register unit being accessible as a single register or as aligned pairs 220 of two adjacent register units 210.

The general register units 210 can be referred to by multiple names based on the appropriate instruction. For example, register units 210 may be individually referred to as R0, R1, ..., R30, and R31. In addition, register units R0 and R1 may form a 64-bit register pair 220 referred to as R1:0. Similarly, register units R2 and R3 may form a 64-bit register pair 220 referred to as R3:2, register units R28 and R29 may form a 64-bit register pair 220 referred to as R29:28, and register units R30 and R31 may form a 64-bit register pair 220 referred to as R31:30.

In one embodiment, general register units 210 are used for general computational purposes, such as, for example, address generation, scalar arithmetic, and vector arithmetic, and provide all operands for instructions, including addresses for load/store instructions, data operands for numeric instructions, and vector operands for vector instructions.
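The register/pair aliasing just described can be modeled in a few lines of C. The following sketch is illustrative only and is not part of the described embodiments; the names regfile_t and read_pair are hypothetical, and a real register file is hardware rather than a C structure.

    #include <stdint.h>

    /* Illustrative model of the general register file structure 200 of
       FIG. 2: thirty-two 32-bit register units R0..R31, where an aligned
       pair of adjacent units (R1:0, R3:2, ..., R31:30) is also readable
       as one 64-bit value. */
    typedef struct {
        uint32_t r[32]; /* individual register units R0..R31 */
    } regfile_t;

    /* Read the aligned pair R(n+1):n as a 64-bit value; n must be even.
       The odd-numbered unit supplies the high 32 bits. */
    static uint64_t read_pair(const regfile_t *rf, unsigned n)
    {
        return ((uint64_t)rf->r[n + 1] << 32) | (uint64_t)rf->r[n];
    }

For example, read_pair(&rf, 0) returns the pair R1:0, with the contents of R1 in the upper half and the contents of R0 in the lower half.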
FIG. 3 is a block diagram illustrating one embodiment of a Very Long Instruction Word (VLIW) digital signal processing system architecture. The VLIW system architecture 300 includes a memory 310 coupled to a digital signal processor (DSP) 330 via an instruction load bus 320, a data load bus 322, and a data load/store bus 324.

In one embodiment, the memory 310 stores data and instructions, for example in the form of VLIW packets having one to four instructions. Instructions stored within the memory 310 are loaded to the DSP 330 via the instruction load bus 320. In one embodiment, each instruction has a 32-bit word width and is loaded to the DSP 330 via a 128-bit instruction load bus 320 having a four word width. In one embodiment, the memory 310 is a unified byte-addressable memory, has a 32-bit address space storing both instructions and data, and operates in little-endian mode.

In one embodiment, the DSP 330 comprises a sequencer 335, four pipelines 340 for four processing or execution units 345, a general register file structure 350 (comprising a plurality of general register units), such as, for example, the general register file structure 200 described in detail in connection with FIG. 2, and a control register file structure 360. The sequencer 335 receives packets of instructions from the memory 310 and determines the appropriate pipeline 340 and respective execution unit 345 for each instruction of each received packet using the information contained within the instruction. After making this determination for each instruction of a packet, the sequencer 335 inputs the instructions into the appropriate pipeline 340 for processing by the appropriate execution unit 345.

In one embodiment, the execution units 345 further comprise a vector shift unit, a vector MAC unit, a load unit, and a load/store unit. The vector shift unit 345 executes, for example, S-type (Shift Unit) instructions, such as Shift & Add/Sub operations, Shift & Logical operations, Permute operations, Predicate operations, Bit Manipulation, and Vector Halfword/Word shifts; A64-type (64-bit Arithmetic) instructions, such as 64-bit Arithmetic & Logical operations, 32-bit Logical operations, and Permute operations; A32-type (32-bit Arithmetic) instructions, such as 32-bit Arithmetic operations; J-type (Jump) instructions, such as Jump/Call PC-relative operations; and CR-type (Control Register) instructions, such as Control Register transfers and Hardware Loop setup. The vector MAC unit 345 executes, for example, M-type (Multiply Unit) instructions, such as Single Precision, Double Precision, Complex, and Vector Byte/Halfword instructions; A64-type instructions; A32-type instructions; J-type instructions; and JR-type (Jump Register) instructions, such as Jump/Call Register operations. The load unit 345 loads data from the memory 310 to the general register file structure 350 and executes, for example, load-type and A32-type instructions. The load/store unit 345 loads and stores data from the general register file structure 350 back to the memory 310 and executes, for example, load-type, store-type, and A32-type instructions.

Each execution unit 345 that receives an instruction performs the instruction using the general register file structure 350 that is shared by the four execution units 345. Data needed by an instruction is loaded to the general register file structure 350 via the 64-bit data load bus 322. After the instructions of a packet are performed by the execution units 345, the resulting data is stored to the general register file structure 350 and then loaded and stored to the memory 310 via the 64-bit data load/store bus 324. Typically, the one to four instructions of a packet are performed in parallel by the four execution units 345 in one clock cycle, where a maximum of one instruction is received and processed by a pipeline 340 for each clock cycle.

In one embodiment, an execution unit 345 may also use the control register file structure 360 to execute a corresponding instruction. The control register file structure 360 comprises a set of special register units, such as, for example, modifier, status, and predicate register units.
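As a rough illustration of how the sequencer 335 routes the instructions of a packet to the pipelines 340, consider the following C sketch. It is a simplification under stated assumptions: the insn_t descriptor, the unit enumeration, and issue_to_pipeline() are hypothetical names introduced here, and in the actual architecture the target unit is decoded from fields within each 32-bit instruction word rather than carried in a separate field.

    #include <stdint.h>

    /* Hypothetical decoded form of one instruction of a VLIW packet:
       'unit' names the execution unit encoded within the instruction. */
    enum unit { VECTOR_SHIFT = 0, VECTOR_MAC = 1, LOAD = 2, LOAD_STORE = 3 };

    typedef struct {
        uint32_t  word;  /* the 32-bit instruction word */
        enum unit unit;  /* target execution unit       */
    } insn_t;

    /* Placeholder for handing an instruction to one of the four pipelines. */
    extern void issue_to_pipeline(enum unit u, uint32_t word);

    /* One clock cycle of dispatch: route each instruction of a packet of
       one to four instructions to its pipeline, at most one per pipeline. */
    void dispatch_packet(const insn_t *packet, int count)
    {
        int claimed[4] = {0, 0, 0, 0};
        for (int i = 0; i < count && i < 4; i++) {
            enum unit u = packet[i].unit;
            if (!claimed[u]) {
                issue_to_pipeline(u, packet[i].word);
                claimed[u] = 1;
            }
        }
    }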
FIG. 4 is a flow diagram illustrating one embodiment of a method to combine register units within the digital signal processing system 100. As illustrated in the embodiment of FIG. 4, at processing block 410, an instruction to combine/concatenate register units within the digital signal processing system 300 is received. In one embodiment, an execution unit 345 within the DSP 330 receives the instruction and executes the instruction, as described below, to combine predetermined register units stored in the general register file structure 350. In one embodiment, the predetermined register units are non-adjacently located within the general register file structure.

At processing block 420, the predetermined register units, such as, for example, a first 32-bit wide register unit and a second 32-bit wide register unit, are identified. In one embodiment, the execution unit 345 communicates with the general register file structure 350 and identifies the register units requested to be combined. In one embodiment, the memory 310 then loads data needed by the instruction to the general register file structure 350 via the 64-bit data load bus 322. Alternatively, data may already be stored within the identified first and second register units.

At processing block 430, the identified register units and associated data are retrieved. In one embodiment, the execution unit 345 retrieves the identified register units and associated data from the general register file structure 350.

At processing block 440, the retrieved register units are combined/concatenated within a resulting larger register pair. In one embodiment, the execution unit 345 combines the retrieved register units, such as the first and second 32-bit wide register units, and their associated data into a resulting 64-bit wide register pair unit, such that the first register unit and its associated data are input into a high portion of the resulting register unit and the second register unit and its associated data are input into a low portion of the resulting register unit.

Finally, at processing block 450, the resulting register pair is stored for further processing. In one embodiment, the execution unit 345 outputs the resulting register unit to the general register file structure 350 and stores the resulting register unit for further processing of additional instructions.

FIG. 5 is a block diagram illustrating the method to combine register units described in connection with FIG. 4. As illustrated in FIG. 5, source register units Rs 510 and RT 520 are identified and further retrieved from the general register file structure 350.

In one embodiment, the instruction to combine/concatenate source register units Rs 510 and RT 520 into a resulting larger destination register unit RD 530 is:

RD = combine(Rs, RT)

Upon execution of the instruction, register units Rs 510 and RT 520 are combined/concatenated into the resulting larger destination register unit RD 530, such that data residing in the register unit Rs 510 is input into the high portion of the register unit RD 530 and data residing in the register unit RT 520 is input into the low portion of the register unit RD 530. If, for example, Rs 510 and RT 520 are both 32-bit wide register units, then the resulting destination register unit RD 530 is a 64-bit wide register.
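In C-like terms, the semantics of this word combine can be sketched as follows. This is a behavioral model only, assuming the high/low placement described above; combine_words is a hypothetical helper name, not an instruction mnemonic defined by the embodiments.

    #include <stdint.h>

    /* Behavioral sketch of RD = combine(Rs, RT): the 32-bit source Rs
       supplies the high portion, and the 32-bit source RT supplies the
       low portion, of the 64-bit destination register pair RD. */
    static uint64_t combine_words(uint32_t rs, uint32_t rt)
    {
        return ((uint64_t)rs << 32) | (uint64_t)rt;
    }

For example, combine_words(0x11111111, 0x22222222) yields 0x1111111122222222, mirroring the data movement of FIG. 5.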
FIG. 6 is a flow diagram illustrating an alternate embodiment of a method to combine register units within the digital signal processing system 300. As illustrated in the embodiment of FIG. 6, at processing block 610, an instruction to combine/concatenate register units within the digital signal processing system 300 is received. In one embodiment, an execution unit 345 within the DSP 330 receives the instruction and executes the instruction to combine predetermined register units stored in the general register file structure 350. In one embodiment, the predetermined register units are non-adjacently located within the general register file structure.

At processing block 620, the predetermined register units, such as, for example, a first 32-bit wide register unit and a second 32-bit wide register unit, are identified. In one embodiment, the execution unit 345 communicates with the general register file structure 350 and identifies the register units requested to be combined. In one embodiment, the memory 310 then loads data needed by the instruction to the general register file structure 350 via the 64-bit data load bus 322. Alternatively, data may already be stored within the identified first and second register units.

At processing block 630, the identified register units and associated data are retrieved. In one embodiment, the execution unit 345 retrieves the identified register units and associated data from the general register file structure 350.

At processing block 640, a first half word unit is retrieved from the first register unit and is input into a resulting register unit. In one embodiment, the execution unit 345 further retrieves a first 16-bit wide half word unit from the first register unit, which may, in one embodiment, be the high half word unit of the first register unit, or, in the alternative, may be the low half word unit of the first register unit, and inputs the first half word unit into a high portion of a resulting register unit.

At processing block 650, a second half word unit is retrieved from the second register unit and is input into the resulting register unit. In one embodiment, the execution unit 345 further retrieves a second 16-bit wide half word unit from the second register unit, which may, in one embodiment, be the high half word unit of the second register unit, or, in the alternative, may be the low half word unit of the second register unit, and inputs the second half word unit into a low portion of the resulting register unit, thus obtaining a 32-bit wide resulting register unit.

Finally, at processing block 660, the resulting register unit is stored for further processing. In one embodiment, the execution unit 345 outputs the resulting register unit to the general register file structure 350 and stores the resulting register unit for further processing of additional instructions.

FIG. 7 is a block diagram illustrating the method to combine register units described in connection with FIG. 6. As illustrated in FIG. 7, source register units Rs 540 and RT 550 are identified and retrieved from the general register file structure 200.

In one embodiment, the instruction to combine/concatenate source register units Rs 540 and RT 550 into a resulting destination register unit RD 560 is:

RD = combine(RT.[HL], Rs.[HL])

where RT.[HL] denotes the source register unit RT having a high half word H and a low half word L, and where Rs.[HL] denotes the source register unit Rs having a high half word H and a low half word L.
As shown in FIG. 7, upon execution of the instruction, the high half word RT1 of the source register unit RT 550, or, in the alternative, the low half word RT2 of the source register unit RT 550, is input into the high portion of the register unit RD 560 via a multiplexer 555, and the high half word Rs1 of the source register unit Rs 540, or, in the alternative, the low half word Rs2 of the source register unit Rs 540, is input into the low portion of the register unit RD 560 via a multiplexer 545. If, for example, Rs 540 and RT 550 are both 32-bit wide register units, then, in one embodiment, the high half word RT1 of the source register unit RT 550 is 16 bits wide, the low half word Rs2 of the source register unit Rs 540 is also 16 bits wide, and, thus, the resulting destination register unit RD 560 is a 32-bit wide register.
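The half word selection of FIG. 7 can likewise be sketched in C. This is a behavioral model only; combine_halfwords and its boolean selector arguments are hypothetical names standing in for the controls of multiplexers 555 and 545.

    #include <stdbool.h>
    #include <stdint.h>

    /* Behavioral sketch of RD = combine(RT.[HL], Rs.[HL]): one 16-bit half
       word is selected from each 32-bit source (modeling multiplexers 555
       and 545) and placed into the high and low portions of the 32-bit
       destination register unit RD. */
    static uint32_t combine_halfwords(uint32_t rt, bool take_rt_high,
                                      uint32_t rs, bool take_rs_high)
    {
        uint16_t hi = take_rt_high ? (uint16_t)(rt >> 16) : (uint16_t)rt;
        uint16_t lo = take_rs_high ? (uint16_t)(rs >> 16) : (uint16_t)rs;
        return ((uint32_t)hi << 16) | (uint32_t)lo;
    }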
Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in software executed by a processor, or in a combination of the two. It is to be understood that these embodiments may be used as or to support software programs, which are executed upon some form of processor or processing core (such as the CPU of a computer), or otherwise implemented or realized upon or within a machine or computer readable medium. A machine readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine readable medium includes RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor may read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A semiconductor device is described that comprises a gate dielectric and a metal gate electrode that comprises an aluminide.
1. A semiconductor device comprising:
a gate dielectric; and
a metal gate electrode that is formed on the gate dielectric and includes an aluminide.
2. The semiconductor device of claim 1, wherein the gate dielectric comprises a high-k gate dielectric and the metal gate electrode comprises an aluminide having a composition MxAly, where M is a transition metal.
3. The semiconductor device of claim 2, wherein the high-k gate dielectric comprises a material selected from the group consisting of hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, titanium oxide, tantalum oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate.
4. The semiconductor device of claim 2, wherein M comprises an element selected from the group consisting of zirconium, tungsten, tantalum, hafnium, and titanium.
5. The semiconductor device of claim 1, wherein the metal gate electrode has a work function lower than about 4.3 eV.
6. The semiconductor device of claim 1, wherein the metal gate electrode is thermally stable at 400°C.
7. A semiconductor device comprising:
a high-k gate dielectric; and
an NMOS metal gate electrode comprising an aluminide having a composition MxAly, where M is a transition metal.
8. The semiconductor device of claim 7, wherein:
the high-k gate dielectric comprises a material selected from the group consisting of hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, titanium oxide, tantalum oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate; and
M comprises an element selected from the group consisting of zirconium, tungsten, tantalum, hafnium, and titanium.
9. The semiconductor device of claim 7, wherein the NMOS metal gate electrode has a work function of between about 3.9 eV and about 4.3 eV, and is thermally stable at 400°C.
10. The semiconductor device of claim 7, wherein the NMOS metal gate electrode further comprises a fill metal formed on the aluminide.
11. The semiconductor device of claim 10, wherein the fill metal is selected from the group consisting of titanium nitride, tungsten, titanium, aluminum, tantalum, tantalum nitride, cobalt, copper, and nickel.
12. A CMOS semiconductor device comprising:
a high-k gate dielectric;
an NMOS metal gate electrode comprising an aluminide having a composition MxAly, where M is a transition metal; and
a PMOS metal gate electrode that does not comprise an aluminide.
13. The CMOS semiconductor device of claim 12, wherein:
the high-k gate dielectric comprises a material selected from the group consisting of hafnium oxide, zirconium oxide, and aluminum oxide;
M comprises an element selected from the group consisting of zirconium, tungsten, tantalum, hafnium, and titanium; and
the PMOS metal gate electrode comprises a material selected from the group consisting of ruthenium, palladium, platinum, cobalt, nickel, and a conductive metal oxide.
14. The CMOS semiconductor device of claim 12, wherein the NMOS metal gate electrode has a work function of between about 3.9 eV and about 4.3 eV, and the PMOS metal gate electrode has a work function of between about 4.9 eV and about 5.2 eV.
15. The CMOS semiconductor device of claim 12, wherein the aluminide has a composition MxAly, wherein M is a transition metal, x is between 1 and 4, and y is between 1 and 4.
16. The CMOS semiconductor device of claim 15, wherein the aluminide is selected from the group consisting of ZrAl, ZrAl2, ZrAl3, WAl4, TaAl, HfAl, TiAl, TiAl2, TiAl3, and Ti3Al.
17. The CMOS semiconductor device of claim 12, wherein the NMOS metal gate electrode further comprises a fill metal formed on the aluminide.
18. The CMOS semiconductor device of claim 17, wherein the fill metal is selected from the group consisting of titanium nitride, tungsten, titanium, aluminum, tantalum, tantalum nitride, cobalt, copper, and nickel.
19. The CMOS semiconductor device of claim 12, wherein:
the high-k gate dielectric is formed using an atomic layer chemical vapor deposition process and is between about 5 angstroms and about 40 angstroms thick; and
the aluminide is between about 100 angstroms and about 300 angstroms thick.
20. The CMOS semiconductor device of claim 12, wherein the NMOS metal gate electrode and the PMOS metal gate electrode are both thermally stable at 400°C.
Semiconductor device with high-k gate dielectric and metal gate electrode

Technical Field

The present invention relates to semiconductor devices, and more particularly to semiconductor devices that include high-k gate dielectrics and metal gate electrodes.

Background

MOS field-effect transistors with very thin gate dielectrics made of silicon dioxide may experience unacceptable gate leakage currents. Forming the gate dielectric from certain high-k dielectric materials instead of silicon dioxide can reduce gate leakage. Because such a dielectric may be incompatible with polysilicon, however, it may be desirable to use a metal gate electrode in a device that includes a high-k gate dielectric. Certain metals with a work function below 4.3 eV can be used to make metal gate electrodes for NMOS transistors. Those metals, however, may be thermally unstable at temperatures above 400°C, causing them to react adversely with high-k gate dielectrics.

Accordingly, there is a need for a semiconductor device having a high-k gate dielectric and an NMOS metal gate electrode that has a work function below 4.3 eV and is thermally stable at 400°C. The present invention provides such a semiconductor device.

Brief Description of the Drawings

FIGS. 1a-1i show cross sections of structures that may be formed when carrying out an embodiment of a replacement gate method that may be used to make the semiconductor device of the present invention. The features shown in these figures are not meant to be drawn to scale.

Detailed Description

Semiconductor devices are described. The semiconductor device includes a gate dielectric and a metal gate electrode that comprises an aluminide. In the following description, numerous details are set forth to provide a thorough understanding of the present invention. It will be apparent to those skilled in the art, however, that the present invention may be practiced in many ways other than those expressly described herein. The invention is therefore not limited by the specific details disclosed below.

One embodiment of the present invention includes a high-k gate dielectric upon which is formed an NMOS metal gate electrode that includes an aluminide. The high-k gate dielectric may comprise hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, titanium oxide, tantalum oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, or lead zinc niobate. Hafnium oxide, zirconium oxide, and aluminum oxide are particularly preferred. Although several examples of materials that may be used to form such a high-k gate dielectric are described here, the dielectric may also be made from other materials that serve to reduce gate leakage.

The aluminide used to make the NMOS metal gate electrode is an ordered intermetallic alloy. The atomic arrangement of such an alloy differs from that of conventional metal alloys. Unlike conventional aluminum alloys, the alloying atoms in an aluminide are arranged periodically when kept below a critical ordering temperature, forming a superlattice crystal structure. When compared with conventional aluminum alloys, aluminides can exhibit enhanced structural stability and resistance to high temperature deformation.

In a preferred embodiment of the semiconductor device of the present invention, the aluminide has a composition MxAly, where M is a transition metal, and the ratio of x to y represents the relative atomic percentages of the transition metal and aluminum contained in the aluminide.
An aluminide having this composition may, for example, include zirconium, tungsten, tantalum, hafnium, titanium, or another transition metal that, when combined with aluminum, yields a composition having the desired work function and thermal stability. The aluminide included in the semiconductor device of the present invention may also combine one or more transition metals, in a superlattice crystal structure, with an aluminum alloy, such as an alloy of aluminum doped with a relatively small amount of boron or magnesium.

When used to form NMOS metal gate electrodes, these aluminides preferably have a composition MxAly, where x is between 1 and 4 and y is between 1 and 4. Particularly preferred aluminides for making NMOS metal gate electrodes include ZrAl, ZrAl2, ZrAl3, WAl4, TaAl, HfAl, TiAl, TiAl2, TiAl3, and Ti3Al. The resulting NMOS metal gate electrode may have a work function of less than 4.3 eV, preferably between about 3.9 eV and about 4.3 eV, and more preferably between about 4.0 eV and about 4.2 eV.

The aluminide used to form the NMOS metal gate electrode should be thick enough to ensure that any material formed on it will not significantly affect its work function. Preferably, the aluminide is between about 20 angstroms and about 2,000 angstroms thick, and more preferably between about 100 angstroms and about 300 angstroms thick. The NMOS metal gate electrode is preferably thermally stable at 400°C.

When the semiconductor device of the present invention is a CMOS device, it may include, in addition to the NMOS metal gate electrode that includes an aluminide, a PMOS metal gate electrode that does not include an aluminide. Such a PMOS metal gate electrode may be formed on a high-k gate dielectric and may comprise a p-type metal, such as ruthenium, palladium, platinum, cobalt, or nickel, or a conductive metal oxide, such as ruthenium oxide. Although several examples of metals that may be used to form p-type metal layers are described here, such layers may be made from many other materials.

When used to form a PMOS metal gate electrode, the p-type metal preferably has a work function of between about 4.9 eV and about 5.2 eV. Its thickness is preferably between about 20 angstroms and about 2,000 angstroms, and more preferably between about 100 angstroms and about 300 angstroms. Like the aluminide used to make the NMOS metal gate electrode, the p-type metal used to make the PMOS metal gate electrode should be thermally stable at 400°C.

FIGS. 1a-1i illustrate structures that may be formed when carrying out an embodiment of a replacement gate method that may be used to fabricate the semiconductor device of the present invention. FIG. 1a shows an intermediate structure that may be formed when making a CMOS device. That structure includes a first portion 101 and a second portion 102 of a substrate 100. An isolation region 103 separates the first portion 101 from the second portion 102. A first polysilicon layer 104 is formed on a dielectric layer 105, and a second polysilicon layer 106 is formed on a dielectric layer 107. The first polysilicon layer 104 is bracketed by sidewall spacers 108 and 109, and the second polysilicon layer 106 is bracketed by sidewall spacers 110 and 111. A dielectric layer 112 separates the layers 104 and 106.

The substrate 100 may comprise any material that may serve as a foundation upon which a semiconductor device may be built. The isolation region 103 may comprise silicon dioxide, or another material that may separate the active regions of transistors.
The dielectric layers 105 and 107 may each comprise silicon dioxide, or another material that may insulate the substrate from other substances. In this embodiment, the first polysilicon layer 104 is doped n-type, and the second polysilicon layer 106 is doped p-type. The first and second polysilicon layers 104 and 106 may each be between about 100 angstroms and about 2,000 angstroms thick, and preferably between about 500 angstroms and about 1,600 angstroms thick. The spacers 108, 109, 110, and 111 preferably comprise silicon nitride, and the dielectric layer 112 may comprise silicon dioxide or a low-k material.

It will be apparent to those skilled in the art that the structure of FIG. 1a may be formed using conventional process steps, materials, and equipment. As shown, the dielectric layer 112 may be polished back, for example by a conventional chemical mechanical polishing ("CMP") step, to expose the first and second polysilicon layers 104 and 106. Although not shown, the structure of FIG. 1a may include many other features (e.g., a silicon nitride etch stop, source and drain regions, and one or more buffer layers) that may be formed using conventional processes.

When the source and drain regions are formed using conventional ion implantation and annealing processes, it may be desirable to form a hard mask on the polysilicon layers 104 and 106, and an etch stop layer on the hard mask, to protect layers 104 and 106 when the source and drain regions are covered with a silicide. Such a hard mask may comprise silicon nitride. Such an etch stop layer may comprise silicon, an oxide (e.g., silicon dioxide or hafnium dioxide), or a carbide (e.g., silicon carbide).

Such an etch stop layer and silicon nitride hard mask may be polished from the surfaces of the layers 104 and 106 when the dielectric layer 112 is polished, as those layers will have served their purpose by that stage in the process. FIG. 1a shows a structure in which any hard mask or etch stop layer that may previously have been formed on the layers 104 and 106 has already been removed from the surfaces of those layers. When the source and drain regions are formed using an ion implantation process, the layers 104 and 106 may be doped at the same time the source and drain regions are implanted.

After the structure of FIG. 1a is formed, the first polysilicon layer 104 is removed. In a preferred embodiment, that layer is removed by exposing it to an aqueous solution comprising between about 2 percent and about 30 percent ammonium hydroxide by volume, at a sufficient temperature and for a sufficient time, to remove substantially all of layer 104 without removing a significant amount of the second polysilicon layer 106. During that exposure step, it may be desirable to apply acoustic energy at a frequency of between about 10 KHz and about 2,000 KHz, while dissipating between about 1 and about 10 watts/cm2. As an example, if the n-type polysilicon layer 104 is approximately 1,350 angstroms thick, it may be removed by exposing it for approximately one minute, at a temperature of between approximately 25°C and approximately 30°C, to a solution that includes approximately 15 percent ammonium hydroxide by volume, while simultaneously applying acoustic energy at approximately 1,000 KHz, dissipating approximately 5 watts/cm2.

After the first polysilicon layer 104 is removed, the dielectric layer 105 is removed. When the dielectric layer 105 comprises silicon dioxide, it may be removed using an etch process that is selective for silicon dioxide.
That etch process may comprise exposing the layer 105 to a solution that includes about 1 percent HF in deionized water. The time that the layer 105 is exposed should be limited, as the etch process used to remove that layer may also remove part of the dielectric layer 112. With that in mind, if a 1 percent HF based solution is used to remove the layer 105, the device preferably should be exposed to the solution for less than about 60 seconds, and more preferably for about 30 seconds or less. As shown in FIG. 1b, removal of the dielectric layer 105 forms a trench 113, positioned between the sidewall spacers 108 and 109, within the dielectric layer 112.

After the dielectric layer 105 is removed, a high-k gate dielectric 115 may be formed within the trench 113 and on the substrate 100; that dielectric may comprise one of the materials identified above. The high-k gate dielectric 115 may be formed on the substrate 100 using a conventional atomic layer chemical vapor deposition ("CVD") process. In such a process, a metal oxide precursor (e.g., a metal chloride) and steam may be fed at selected flow rates into a CVD reactor, which is then operated at a selected temperature and pressure to generate an atomically smooth interface between the substrate 100 and the high-k gate dielectric 115. The CVD reactor should be operated long enough to form a dielectric with the desired thickness. In most applications, the high-k gate dielectric 115 should be less than about 60 angstroms thick, and more preferably between about 5 angstroms and about 40 angstroms thick.

As shown in FIG. 1c, when the high-k gate dielectric 115 is formed using an atomic layer CVD process, the dielectric will form on the sides of the trench 113 in addition to the bottom of that trench, and will also form on the dielectric layer 112. If the high-k gate dielectric 115 comprises an oxide, it may manifest oxygen vacancies and an unacceptable level of impurities at random surface sites, depending upon the process used to make it. After the dielectric 115 is deposited, it may be desirable to remove impurities from the dielectric and to oxidize it, to generate a dielectric with a nearly idealized metal:oxygen stoichiometry.

To remove impurities from the high-k gate dielectric 115 and to increase that dielectric's oxygen content, the high-k gate dielectric 115 may be exposed to an aqueous solution comprising between about 2 percent and about 30 percent hydrogen peroxide by volume. In a particularly preferred embodiment, the high-k gate dielectric 115 is exposed, at a temperature of about 25°C and for about ten minutes, to an aqueous solution that includes about 6.7 percent H2O2 by volume. During that exposure step, it may be desirable to apply acoustic energy at a frequency of about 1,000 KHz, while dissipating about 5 watts/cm2.

In the illustrated embodiment, a first metal layer 116 is formed directly on the high-k gate dielectric 115 to generate the structure of FIG. 1d. Like the high-k gate dielectric 115, part of the first metal layer 116 lines the trench 113, while part of that layer spills over onto the dielectric layer 112. As noted above, the first metal layer 116 comprises an aluminide, preferably one having a composition MxAly, where M is a transition metal. The aluminide may be formed on the high-k gate dielectric 115 using a conventional physical vapor deposition ("PVD") process, in which an alloy target (or multiple pure targets) is sputtered onto the high-k gate dielectric 115.
Alternatively, the aluminide may be formed using a CVD process that employs multiple precursors. In addition, ultra-thin aluminum and transition metal layers may be deposited alternately using nanolaminate technology (which relies on PVD, CVD, or atomic layer CVD processes), and will crystallize in the desired manner to form the aluminide 116.

In this embodiment, after the first metal layer 116 is formed on the high-k gate dielectric 115, a second metal layer 121 is formed on the first metal layer 116. As shown in FIG. 1e, the second metal layer 121 fills the remainder of the trench 113 and covers the dielectric layer 112. The second metal layer 121 preferably comprises a material that may be easily polished, and is preferably deposited over the entire device using a conventional metal deposition process. Such a fill metal may comprise titanium nitride, tungsten, titanium, aluminum, tantalum, tantalum nitride, cobalt, copper, nickel, or any other metal that may be polished and that may fill the trench 113 satisfactorily. When the fill metal covers the first metal layer 116, the first metal layer 116 is preferably between about 20 angstroms and about 300 angstroms thick, and more preferably between about 25 angstroms and about 200 angstroms thick. When a fill metal does not cover the aluminide 116, e.g., when the aluminide completely fills the trench 113, the first metal layer 116 may be up to 2,000 angstroms thick. As mentioned above, the first metal layer 116 preferably has a work function of between about 3.9 eV and about 4.3 eV.

After the structure of FIG. 1e is formed, the second metal layer 121, the first metal layer 116, and the high-k gate dielectric 115 are removed from above the dielectric layer 112 to generate the structure of FIG. 1f. A CMP step may be applied to remove those materials from above the dielectric layer 112. Alternatively, the second metal layer 121 may be removed using a CMP step, with a subsequent dry etch step (and, optionally, an additional wet etch step) applied to remove the first metal layer 116 and the high-k gate dielectric 115 from above the dielectric layer 112.

After the second metal layer 121, the first metal layer 116, and the high-k gate dielectric 115 are removed from above the dielectric layer 112, the p-type polysilicon layer 106 is removed. The layer 106 may be removed selectively with respect to the second metal layer 121 by exposing it to a solution comprising between about 20 percent and about 30 percent TMAH by volume in deionized water, at a sufficient temperature (e.g., between about 60°C and about 90°C) and for a sufficient time, while applying acoustic energy.

After the second polysilicon layer 106 is removed, the dielectric layer 107 is removed, for example using the same process that was used to remove the dielectric layer 105. As shown in FIG. 1g, removal of the dielectric layer 107 forms a trench 114. After that dielectric layer is removed, a high-k gate dielectric 117 is formed within the trench 114 and on the dielectric layer 112. The same process steps and materials used to form the high-k gate dielectric 115 may be used to form the high-k gate dielectric 117.

In this embodiment, a third metal layer 120 is then deposited on the high-k gate dielectric 117. The third metal layer 120 may comprise one of the p-type metals identified above, and may be formed on the high-k gate dielectric 117 using a conventional PVD or CVD process.
In this embodiment, the third metal layer 120 is preferably between about 20 angstroms and about 300 angstroms thick, and more preferably between about 25 angstroms and about 200 angstroms thick. The third metal layer 120 may have a work function of between about 4.9 eV and about 5.2 eV.

After the third metal layer 120 is formed on the high-k gate dielectric 117, a fourth metal layer 118, e.g., a second fill metal, may be formed on the third metal layer 120 to generate the structure of FIG. 1h. The same process steps and materials used to form the second metal layer 121 may be used to form the fourth metal layer 118. The portions of the fourth metal layer 118, the third metal layer 120, and the high-k gate dielectric 117 that cover the dielectric layer 112 may then be removed to generate the structure of FIG. 1i. The same CMP and/or etch steps used to remove the first fill metal 121, the aluminide 116, and the high-k gate dielectric 115 from above the dielectric layer 112 may be used to remove the second fill metal 118, the third metal layer 120, and the high-k gate dielectric 117 from above the dielectric layer 112.

After the fourth metal layer 118, the third metal layer 120, and the high-k gate dielectric 117 are removed from above the dielectric layer 112, a capping dielectric layer (not shown) may be deposited onto the resulting structure using a conventional deposition process. The process steps for completing the device that follow deposition of that capping dielectric layer, e.g., forming the device's contacts, metal interconnects, and passivation layer, are well known to those skilled in the art and are not described here.

The semiconductor device of the present invention includes an NMOS metal gate electrode that has a work function below 4.3 eV and is thermally stable at 400°C. Such a metal gate electrode may provide the structural and temperature stability characteristics that make an NMOS transistor suitable for high volume manufacturing of semiconductor devices.

Although the foregoing description has specified certain materials that may be used to form the semiconductor device of the present invention, those skilled in the art will appreciate that many modifications and substitutions may be made. Accordingly, all such modifications, variations, substitutions, and additions are intended to fall within the spirit and scope of the present invention as defined by the appended claims.
Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is disclosed. In this regard, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches comprising a plurality of buffers. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory. In some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying and refilling the free memory list cache may be minimized.
What is claimed is:

1. A memory system, comprising: a compression circuit configured to store compressed data in a memory block of a plurality of memory blocks of a compressed data region of a system memory; and a free memory list storing a plurality of pointers to a corresponding plurality of free memory blocks of the plurality of memory blocks; the compression circuit comprising: a free memory list cache comprising a plurality of buffers and configured to cache one or more pointers of the plurality of pointers; and a low threshold value indicating a minimum number of pointers for the free memory list cache; the compression circuit configured to, upon allocation of a free memory block corresponding to a pointer cached in the free memory list cache: remove the pointer from the free memory list cache; determine whether a number of pointers of the free memory list cache is below the low threshold value; and responsive to determining that a number of pointers of the free memory list cache is below the low threshold value: read a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list; and replenish an empty buffer of the plurality of buffers with the plurality of pointers.

2. The memory system of claim 1, wherein: the compression circuit further comprises a high threshold value indicating a maximum number of pointers for the free memory list cache; and the compression circuit is further configured to, upon deallocation of a memory block: determine whether a number of pointers of the free memory list cache exceeds the high threshold value; and responsive to determining that a number of pointers of the free memory list cache exceeds the high threshold value: write a plurality of pointers from a full buffer of the plurality of buffers to the free memory list; and empty the full buffer of the plurality of buffers.

3. The memory system of claim 1, wherein the free memory list comprises one of a plurality of free memory lists of the system memory, each free memory list of the plurality of free memory lists corresponding to a different size of the plurality of memory blocks of the compressed data region of the system memory.

4. The memory system of claim 3, wherein the plurality of free memory lists comprises: a free memory list corresponding to a plurality of available 64 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory; a free memory list corresponding to a plurality of available 48 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory; a free memory list corresponding to a plurality of available 32 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory; and a free memory list corresponding to a plurality of available 16 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory.

5. The memory system of claim 1, wherein a size of each buffer of the plurality of buffers corresponds to a size of a memory granule of the system memory.

6. The memory system of claim 1, wherein each buffer of the plurality of buffers is sized to store 24 pointers each 21 bits in size.

7. The memory system of claim 1 integrated into a processor-based system.

8. The memory system of claim 1 integrated into a system-on-a-chip (SoC) comprising a processor.

9.
The memory system of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.); a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter.

10. A memory system for reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems, comprising: a means for allocating a free memory block of a plurality of memory blocks of a compressed data region of a system memory, wherein: the free memory block corresponds to a pointer cached in a free memory list cache; and the free memory list cache comprises a plurality of buffers, and is configured to cache one or more pointers of a plurality of pointers of a free memory list; a means for removing the pointer from the free memory list cache, responsive to allocating the free memory block corresponding to the pointer cached in the free memory list cache; a means for determining whether a number of pointers of the free memory list cache is below a low threshold value indicating a minimum number of pointers for the free memory list cache; a means for reading a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list, responsive to determining that a number of pointers of the free memory list cache is below the low threshold value; and a means for replenishing an empty buffer of the plurality of buffers with the plurality of pointers.

11. The memory system of claim 10, further comprising: a means for deallocating a memory block of the plurality of memory blocks of the compressed data region of the system memory; a means for determining whether a number of pointers of the free memory list cache exceeds a high threshold value indicating a maximum number of pointers for the free memory list cache, responsive to deallocating the memory block of the plurality of memory blocks of the compressed data region of the system memory; a means for writing a plurality of pointers from a full buffer of the plurality of buffers to the free memory list, responsive to determining that a number of pointers of the free memory list cache exceeds the high threshold value; and a means for emptying the full buffer of the plurality of buffers.

12.
A method for reducing bandwidth consumption in a compressed memory scheme employing free memory lists, comprising: allocating, by a compression circuit of a memory system, a free memory block of a plurality of memory blocks of a compressed data region of a system memory, wherein: the free memory block corresponds to a pointer cached in a free memory list cache; and the free memory list cache comprises a plurality of buffers, and is configured to cache one or more pointers of a plurality of pointers of a free memory list; and responsive to allocating the free memory block corresponding to the pointer cached in the free memory list cache: removing the pointer from the free memory list cache; determining whether a number of pointers of the free memory list cache is below a low threshold value indicating a minimum number of pointers for the free memory list cache; and responsive to determining that a number of pointers of the free memory list cache is below the low threshold value: reading a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list; and replenishing an empty buffer of the plurality of buffers with the plurality of pointers.

13. The method of claim 12, further comprising: deallocating a memory block of the plurality of memory blocks of the compressed data region of the system memory; and responsive to deallocating the memory block of the plurality of memory blocks of the compressed data region of the system memory: determining whether a number of pointers of the free memory list cache exceeds a high threshold value indicating a maximum number of pointers for the free memory list cache; and responsive to determining that a number of pointers of the free memory list cache exceeds the high threshold value: writing a plurality of pointers from a full buffer of the plurality of buffers to the free memory list; and emptying the full buffer of the plurality of buffers.

14. The method of claim 12, wherein the free memory list comprises one of a plurality of free memory lists of the system memory, each free memory list of the plurality of free memory lists corresponding to a different size of the plurality of memory blocks of the compressed data region of the system memory.

15. The method of claim 14, wherein the plurality of free memory lists comprises: a free memory list corresponding to a plurality of available 64 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory; a free memory list corresponding to a plurality of available 48 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory; a free memory list corresponding to a plurality of available 32 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory; and a free memory list corresponding to a plurality of available 16 byte memory blocks of the plurality of memory blocks of the compressed data region of the system memory.

16. The method of claim 12, wherein a size of each buffer of the plurality of buffers corresponds to a size of a memory access granule of the system memory.

17. The method of claim 12, wherein each buffer of the plurality of buffers is sized to store 24 pointers each 21 bits in size.

18.
A non-transitory computer-readable medium having stored thereon computer-executable instructions which, when executed by a processor, cause the processor to: allocate a free memory block of a plurality of memory blocks of a compressed data region of a system memory, wherein: the free memory block corresponds to a pointer cached in a free memory list cache; and the free memory list cache comprises a plurality of buffers, and is configured to cache one or more pointers of a plurality of pointers of a free memory list; and responsive to allocating the free memory block corresponding to the pointer cached in the free memory list cache: remove the pointer from the free memory list cache; determine whether a number of pointers of the free memory list cache is below a low threshold value indicating a minimum number of pointers for the free memory list cache; and responsive to determining that a number of pointers of the free memory list cache is below the low threshold value: read a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list; and replenish an empty buffer of the plurality of buffers with the plurality of pointers.

19. The non-transitory computer-readable medium of claim 18 having stored thereon computer-executable instructions which, when executed by a processor, further cause the processor to: deallocate a memory block of the plurality of memory blocks of the compressed data region of the system memory; and responsive to deallocating the memory block of the plurality of memory blocks of the compressed data region of the system memory: determine whether a number of pointers of the free memory list cache exceeds a high threshold value indicating a maximum number of pointers for the free memory list cache; and responsive to determining that a number of pointers of the free memory list cache exceeds the high threshold value: write a plurality of pointers from a full buffer of the plurality of buffers to the free memory list; and empty the full buffer of the plurality of buffers.
REDUCING BANDWIDTH CONSUMPTION WHEN PERFORMING FREE MEMORY LIST CACHE MAINTENANCE IN COMPRESSED MEMORY SCHEMES OF PROCESSOR-BASED SYSTEMS

PRIORITY APPLICATION

[0001] The present application claims priority to U.S. Patent Application Serial No. 15/426,473 filed on February 7, 2017 and entitled "REDUCING BANDWIDTH CONSUMPTION WHEN PERFORMING FREE MEMORY LIST CACHE MAINTENANCE IN COMPRESSED MEMORY SCHEMES OF PROCESSOR-BASED SYSTEMS," the contents of which is incorporated herein by reference in its entirety.

BACKGROUND

I. Field of the Disclosure

[0002] The technology of the disclosure relates generally to computer memory systems, and more particularly to compressed memory systems configured to compress and decompress data stored in and read from compressed system memory.

II. Background

[0003] As applications executed by conventional processor-based systems increase in size and complexity, memory bandwidth may become a constraint on system performance. While available memory bandwidth may be increased through the use of wider memory communications channels, this approach may incur penalties in terms of increased cost and/or additional area required for the memory on an integrated circuit (IC). Thus, one approach to increasing memory bandwidth in a processor-based system without increasing the width of memory communication channels is through the use of data compression. A data compression system can be employed in a processor-based system to store data in a compressed format, thus increasing effective memory capacity without increasing physical memory capacity.

[0004] In this regard, some conventional data compression schemes provide a compression engine to compress data to be written to a main system memory. After performing compression, the compression engine writes the compressed data to the system memory, along with metadata that maps a virtual address of the compressed data to a physical address in the system memory where the compressed data is actually stored. The data compression scheme may also maintain lists of free memory blocks (i.e., free memory lists) in the system memory to track areas of memory in which compressed data can be stored. Each free memory list holds pointers to available memory blocks within a compressed data region of the system memory. The contents of the free memory lists may be cached in a free memory list cache of the compression engine.

[0005] However, some implementations of free memory list caches may give rise to conditions in which excessive bandwidth is consumed during maintenance of the cached free memory lists. Accordingly, it is desirable to reduce the memory bandwidth required to maintain the free memory list cache.

SUMMARY OF THE DISCLOSURE

[0006] Aspects of the present disclosure involve reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems. In this regard, in exemplary aspects disclosed herein, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using free memory list caches made up of a plurality of buffers (e.g., two buffers, as a non-limiting example). When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of the plurality of buffers is refilled from a system memory.
Additionally, in some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory. In this manner, memory access operations for emptying the free memory list cache to the system memory and refilling the free memory list cache from the system memory may be minimized, thus conserving memory bandwidth.

[0007] In another aspect, a memory system is provided. The memory system comprises a compression circuit configured to store compressed data in a memory block of a plurality of memory blocks of a compressed data region of a system memory. The memory system also comprises a free memory list storing a plurality of pointers to a corresponding plurality of free memory blocks of the plurality of memory blocks. The compression circuit comprises a free memory list cache comprising a plurality of buffers, and is configured to cache one or more pointers of the plurality of pointers. The compression circuit further comprises a low threshold value indicating a minimum number of pointers for the free memory list cache. The compression circuit is configured to, upon allocation of a free memory block corresponding to a pointer cached in the free memory list cache, remove the pointer from the free memory list cache, and determine whether a number of pointers of the free memory list cache is below the low threshold value. The compression circuit is further configured to, responsive to determining that a number of pointers of the free memory list cache is below the low threshold value, read a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list. The compression circuit is also configured to replenish an empty buffer of the plurality of buffers with the plurality of pointers.

[0008] In another aspect, a memory system for reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems is provided. The memory system comprises a means for allocating a free memory block of a plurality of memory blocks of a compressed data region of a system memory, wherein the free memory block corresponds to a pointer cached in a free memory list cache, and the free memory list cache comprises a plurality of buffers, and is configured to cache one or more pointers of a plurality of pointers of a free memory list. The memory system further comprises a means for removing the pointer from the free memory list cache, responsive to allocating the free memory block corresponding to the pointer cached in the free memory list cache. The memory system also comprises a means for determining whether a number of pointers of the free memory list cache is below a low threshold value indicating a minimum number of pointers for the free memory list cache. The memory system additionally comprises a means for reading a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list, responsive to determining that a number of pointers of the free memory list cache is below the low threshold value. The memory system further comprises a means for replenishing an empty buffer of the plurality of buffers with the plurality of pointers.

[0009] In another aspect, a method for reducing bandwidth consumption in a compressed memory scheme employing free memory lists is provided.
The method comprises allocating, by a compression circuit of a memory system, a free memory block of a plurality of memory blocks of a compressed data region of a system memory, wherein the free memory block corresponds to a pointer cached in a free memory list cache, and the free memory list cache comprises a plurality of buffers, and is configured to cache one or more pointers of a plurality of pointers of a free memory list. The method further comprises, responsive to allocating the free memory block corresponding to the pointer cached in the free memory list cache, removing the pointer from the free memory list cache. The method also comprises determining whether a number of pointers of the free memory list cache is below a low threshold value indicating a minimum number of pointers for the free memory list cache. The method additionally comprises, responsive to determining that a number of pointers of the free memory list cache is below the low threshold value, reading a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list. The method also comprises replenishing an empty buffer of the plurality of buffers with the plurality of pointers.

[0010] In another aspect, a non-transitory computer-readable medium is provided, having stored thereon computer-executable instructions. When executed by a processor, the computer-executable instructions cause the processor to allocate a free memory block of a plurality of memory blocks of a compressed data region of a system memory, wherein the free memory block corresponds to a pointer cached in a free memory list cache, and the free memory list cache comprises a plurality of buffers, and is configured to cache one or more pointers of a plurality of pointers of a free memory list. The computer-executable instructions further cause the processor to, responsive to allocating the free memory block corresponding to the pointer cached in the free memory list cache, remove the pointer from the free memory list cache. The computer-executable instructions also cause the processor to determine whether a number of pointers of the free memory list cache is below a low threshold value indicating a minimum number of pointers for the free memory list cache. The computer-executable instructions additionally cause the processor to, responsive to determining that a number of pointers of the free memory list cache is below the low threshold value, read a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list.
The computer-executable instructions further cause the processor to replenish an empty buffer of the plurality of buffers with the plurality of pointers.

BRIEF DESCRIPTION OF THE FIGURES

[0011] Figure 1 is a schematic diagram of an exemplary processor-based system that includes a compressed memory system configured to compress cache data from an evicted cache entry in a cache memory, and read metadata used to access a physical address in a compressed system memory to write the compressed cache data;

[0012] Figures 2A-2B are block diagrams illustrating how conventional free memory list caches may incur additional bandwidth when caching pointers for free memory blocks within a compressed region of a system memory;

[0013] Figure 3 is a block diagram of an exemplary compression circuit employing free memory list caches that provide a plurality of buffers (in this example, two buffers) for caching pointers for free memory blocks;

[0014] Figures 4A-4B are block diagrams illustrating how the free memory list caches of Figure 3 may operate to reduce bandwidth consumption when caching pointers for free memory blocks within a compressed region of a system memory;

[0015] Figure 5 is a flowchart illustrating exemplary operations of the compression circuit of Figure 3 for reducing bandwidth consumption when allocating free memory blocks of a compressed memory region;

[0016] Figure 6 is a flowchart illustrating exemplary operations of the compression circuit of Figure 3 for reducing bandwidth consumption during deallocation of memory blocks of a compressed memory region; and

[0017] Figure 7 is a block diagram of an exemplary processor-based system, such as the processor-based system in Figure 1, that includes a memory system, such as the memory system in Figure 1, configured to use multiple-buffer free memory list caches to reduce bandwidth consumption in managing the free memory list cache.

DETAILED DESCRIPTION

[0018] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

[0019] Aspects of the present disclosure involve reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems. In this regard, in exemplary aspects disclosed herein, a memory system including a compression circuit is provided. The compression circuit includes a compress circuit that is configured to cache free memory lists using a multiple-buffer free memory list cache. When a number of pointers cached within the free memory list cache falls below a low threshold value, an empty buffer of a plurality of buffers is refilled from a system memory. Additionally, in some aspects, when a number of pointers of the free memory list cache exceeds a high threshold value, a full buffer of the free memory list cache is emptied to the system memory.
In this manner, memory access operations for emptying the free memory list cache to the system memory and refilling the free memory list cache from the system memory may be minimized, thus conserving memory bandwidth.

[0020] Before discussing examples of processor-based systems that reduce bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes, a processor-based system that is configured to provide data compression is first described. In this regard, Figure 1 is a schematic diagram of an exemplary processor-based system 100 that includes a compressed memory system 102. The processor-based system 100 is configured to store cache data 104(0)-104(N) in uncompressed form in cache entries 106(0)-106(N) in a cache memory 108. The cache entries 106(0)-106(N) may be cache lines. For example, as shown in Figure 1, the cache memory 108 may be a level 2 (L2) cache memory included in a processor 110. The cache memory 108 may be private to a processor core 112 in the processor 110 or shared between multiple processor cores, including the processor core 112 in the processor 110. The compressed memory system 102 includes a system memory 114 that includes a compressed data region 116 configured to store data in compressed form in memory entries 118(0)-118(E) (which may be memory lines), as shown in Figure 1. For example, the system memory 114 may include a double data rate (DDR) synchronous dynamic random access memory (SDRAM). The processor 110 is configured to access the system memory 114 during read and write operations to execute software instructions and perform other processor operations.

[0021] Providing the ability to store compressed data in the compressed data region 116 increases the memory capacity of the processor-based system 100 over the physical memory size of the system memory 114. In some aspects, the processor 110 uses virtual addressing wherein a virtual-to-physical address translation is performed to effectively address the compressed data region 116 without being aware of the compression scheme and compression size of the compressed data region 116. In this regard, a compression circuit 122 is provided in the compressed memory system 102 to compress uncompressed data from the processor 110 to be written into the compressed data region 116, and to decompress compressed data received from the compressed data region 116 to provide such data in uncompressed form to the processor 110. The compression circuit 122 includes a compress circuit 124 configured to compress data from the processor 110 to be written into the compressed data region 116. As non-limiting examples, as shown in Figure 1, the compress circuit 124 may be configured to compress 64-byte (64B) data words down to 48-byte (48B) compressed data words, 32-byte (32B) compressed data words, or 16-byte (16B) compressed data words, which can be stored in respective memory blocks 125(64B), 125(48B), 125(32B), 125(16B), each having a smaller size than an entire memory entry 118(0)-118(E). If uncompressed data from the processor 110 cannot be compressed down to the next lower memory block size configured for the compressed memory system 102, such uncompressed data is stored uncompressed over the entire width of a memory entry 118(0)-118(E). For example, the width of the memory entries 118(0)-118(E) may be 64B, and thus each can store a 64B memory block 125(64B). The compression circuit 122 also includes a decompress circuit 127 configured to decompress compressed data from the compressed data region 116 to be provided to the processor 110.
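To make the block-size selection above concrete, the following is a minimal sketch (not taken from the patent; all names are hypothetical illustrations) of how a compress circuit might pick the smallest configured memory block size that holds a compressed result, falling back to the full 64B entry width for incompressible data.

```python
# Hypothetical sketch of the 64B/48B/32B/16B bucket selection described above.
# BLOCK_SIZES mirrors the memory block sizes 125(16B)-125(64B); it is an
# illustrative assumption, not a structure defined by the patent.

BLOCK_SIZES = (16, 32, 48, 64)  # configured compressed block sizes, in bytes

def pick_block_size(compressed_len: int) -> int:
    """Return the smallest configured block size that fits the compressed data."""
    for size in BLOCK_SIZES:
        if compressed_len <= size:
            return size
    return 64  # incompressible: stored over the entire memory entry width

assert pick_block_size(13) == 16
assert pick_block_size(33) == 48
assert pick_block_size(70) == 64  # does not compress below 64B
```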
[0022] However, to provide for faster memory access without the need to compress and decompress, the cache memory 108 is provided. The cache entries 106(0)-106(N) in the cache memory 108 are configured to store the cache data 104(0)-104(N) in uncompressed form. Each of the cache entries 106(0)-106(N) may be the same width as each of the memory entries 118(0)-118(E) for performing efficient memory read and write operations. The cache entries 106(0)-106(N) are accessed by a respective virtual address (VA) tag 126(0)-126(N), because, as discussed above, the compressed memory system 102 provides more addressable memory space to the processor 110 than the physical address space provided in the compressed data region 116. When the processor 110 issues a memory read request for a memory read operation, a VA of the memory read request is used to search the cache memory 108 to determine if the VA matches a VA tag 126(0)-126(N) of a cache entry 106(0)-106(N). If so, a cache hit occurs, and the cache data 104(0)-104(N) in the hit cache entry 106(0)-106(N) is returned to the processor 110 without the need to decompress the cache data 104(0)-104(N). However, because the number of cache entries 106(0)-106(N) is less than the number of memory entries 118(0)-118(E), a cache miss can occur where the cache data 104(0)-104(N) for the memory read request is not contained in the cache memory 108.

[0023] Thus, with continuing reference to Figure 1, in response to a cache miss, the cache memory 108 is configured to provide the VA of the memory read request to the compression circuit 122 to retrieve data from the compressed data region 116. In this regard, the compression circuit 122 may first consult a metadata cache 128 that contains metadata cache entries 130(0)-130(C), each containing metadata 132(0)-132(C) indexed by a VA. The metadata cache 128 is faster to access than the compressed data region 116. The metadata 132(0)-132(C) is data, such as a pointer, used to access a physical address (PA) in the compressed data region 116 to access the memory entry 118(0)-118(E) containing the compressed data for the VA. If the metadata cache 128 contains metadata 132(0)-132(C) for the memory read operation, the compression circuit 122 uses the metadata 132(0)-132(C) to access the correct memory entry 118(0)-118(E) in the compressed data region 116 and provide the corresponding compressed data to the decompress circuit 127. If the metadata cache 128 does not contain metadata 132(0)-132(C) for the memory read request, the compression circuit 122 provides the VA for the memory read request to a metadata circuit 134 that contains metadata 136(0)-136(V) in corresponding metadata entries 138(0)-138(V) for all of the VA space in the processor-based system 100. Thus, the metadata circuit 134 can be linearly addressed by the VA of the memory read request. The metadata 136(0)-136(V) is used to access the correct memory entry 118(0)-118(E) in the compressed data region 116 for the memory read request and provide the corresponding compressed data to the decompress circuit 127.
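The two-level metadata lookup in [0023] can be sketched in software as follows. This is a hedged model, assuming a dict-backed metadata cache and a plain list standing in for the linearly VA-addressed metadata circuit 134; the actual structures are hardware, and all container types here are stand-ins.

```python
# Illustrative model of the read-path metadata lookup: probe the metadata
# cache first, and on a miss fall back to the metadata circuit, which holds
# one entry per VA, then fill the cache for subsequent accesses.

def lookup_metadata(va: int, metadata_cache: dict, metadata_circuit: list):
    meta = metadata_cache.get(va)      # fast on-chip metadata cache probe
    if meta is None:                   # metadata cache miss
        meta = metadata_circuit[va]    # linearly addressed by VA in memory
        metadata_cache[va] = meta      # install into the metadata cache
    return meta                        # e.g., (physical address, block size)

# Example: VA 3 maps to PA 0x4000 with a 32-byte block (made-up values).
circuit = [(0x1000 + 0x40 * i, 64) for i in range(8)]
circuit[3] = (0x4000, 32)
cache: dict = {}
print(lookup_metadata(3, cache, circuit))  # miss, then cache fill
print(lookup_metadata(3, cache, circuit))  # hit, no system memory access
```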
[0024] With continuing reference to Figure 1, the decompress circuit 127 receives the compressed data in response to the memory read request. The decompress circuit 127 decompresses the compressed data into uncompressed data 140, which can then be provided to the processor 110. The uncompressed data 140 is also stored in the cache memory 108. However, if the cache memory 108 does not have an available cache entry 106(0)-106(N), the cache memory 108 may evict an existing cache entry 106(0)-106(N) to the compressed data region 116 to make room for storing the uncompressed data 140.

[0025] To do so, the cache memory 108 first sends the VA and the uncompressed cache data 104 of the evicted cache entry 106(0)-106(N) to the compress circuit 124. The compress circuit 124 receives the VA and the uncompressed cache data 104 for the evicted cache entry 106(0)-106(N). The compress circuit 124 initiates a metadata read operation to the metadata cache 128 to obtain the metadata 132 associated with the VA. During, before, or after the metadata read operation, the compress circuit 124 compresses the uncompressed cache data 104 into compressed data to be stored in the compressed data region 116. If the metadata read operation to the metadata cache 128 results in a miss, the metadata cache 128 issues a metadata read operation to the metadata circuit 134 in the system memory 114 to obtain the metadata 136 associated with the VA, and the metadata cache 128 is stalled. Because accesses to the compressed data region 116 can take much longer than the processor 110 can issue memory access operations, uncompressed data 140 received from the processor 110 for subsequent memory write requests may be buffered in a memory request buffer 142.

[0026] After the metadata 136 comes back from the system memory 114 to update the metadata cache 128, the metadata cache 128 provides the metadata 136 as metadata 132 to the compress circuit 124. The compress circuit 124 determines whether the new compression size of the compressed data fits into the same memory block size in the compressed data region 116 as was previously used to store data for the VA of the evicted cache entry 106(0)-106(N). For example, the processor 110 may have updated the cache data 104(0)-104(N) in the evicted cache entry 106(0)-106(N) since it was last stored in the compressed data region 116. If a new memory block 125 is needed to store the compressed data for the evicted cache entry 106(0)-106(N), the compress circuit 124 recycles a pointer 144 to the current memory block 125 in the compressed memory system 102 associated with the VA of the evicted cache entry 106(0)-106(N) to one of free memory lists 148(0)-148(L) of pointers 144 to available memory blocks 125 in the compressed data region 116. The compress circuit 124 then obtains a pointer 144 from the free memory list 148(0)-148(L) to a new, available memory block 125 of the desired memory block size in the compressed data region 116 to store the compressed data for the evicted cache entry 106(0)-106(N). The compress circuit 124 then stores the compressed data for the evicted cache entry 106(0)-106(N) in the memory block 125 in the compressed data region 116 associated with the VA for the evicted cache entry 106(0)-106(N), as determined from the metadata 132.
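The eviction write path of [0025]-[0026] (keep the current block if the compression size class is unchanged, otherwise recycle the old pointer and draw a pointer to a block of the new size) might be modeled as below. This is a sketch under simplifying assumptions, with dict-backed metadata and per-size Python lists as the free memory lists; none of these names come from the patent, and pick_block_size is the hypothetical helper sketched earlier.

```python
# Hedged model of the write-back flow: reuse, or recycle and reallocate,
# depending on whether the recompressed data still fits its old block size.

def write_back(va, compressed_len, metadata, free_lists, pick_block_size):
    new_size = pick_block_size(compressed_len)
    old = metadata.get(va)                 # (pointer, block_size) or None
    if old is not None and old[1] == new_size:
        return old[0]                      # same size class: reuse the block
    if old is not None:
        free_lists[old[1]].append(old[0])  # recycle pointer to old-size list
    ptr = free_lists[new_size].pop()       # pointer to a new free block
    metadata[va] = (ptr, new_size)         # update metadata for this VA
    return ptr
```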
[0027] If a new memory block 125 was assigned to the VA for the evicted cache entry 106(0)-106(N), the metadata 132(0)-132(C) in the metadata cache entry 130(0)-130(C) corresponding to the VA tag 126(0)-126(N) of the evicted cache entry 106(0)-106(N) is updated based on the pointer 144 to the new memory block 125. The metadata cache 128 then updates the metadata 136(0)-136(V) in the metadata entry 138(0)-138(V) corresponding to the VA in the metadata circuit 134 based on the pointer 144 to the new memory block 125.

[0028] In some aspects, memory bandwidth consumption by the compression circuit 122 may be reduced through the use of free memory list caches 150(0)-150(L), corresponding to the free memory lists 148(0)-148(L). The free memory list caches 150(0)-150(L) may be used by the compression circuit 122 to cache pointers read from the corresponding free memory lists 148(0)-148(L). When the compress circuit 124 allocates a free memory block 125 and needs to obtain a pointer to a new, available free memory block 125 of the desired memory block size in the compressed data region 116, the compress circuit 124 may retrieve a cached pointer from the free memory list cache 150(0)-150(L) corresponding to the desired memory block size, rather than accessing the free memory lists 148(0)-148(L) directly. This may enable the compress circuit 124 to avoid accessing the system memory 114, thus conserving memory bandwidth. Similarly, when the compress circuit 124 deallocates a memory block 125, the pointer to the memory block 125 may be "recycled" and stored in the free memory list cache 150(0)-150(L) corresponding to the size of the memory block 125.

[0029] In some aspects, the size of each of the free memory list caches 150(0)-150(L) corresponds to a memory granule size of the system memory 114 (i.e., a smallest unit of memory that can be read from or written to in the system memory 114). As a non-limiting example, where the memory granule size of the system memory 114 is 64 bytes, each of the free memory list caches 150(0)-150(L) may also be 64 bytes in size. In some aspects, each 64-byte free memory list cache 150(0)-150(L) may store a maximum of 24 pointers of 21 bits each.
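The sizing in [0029] can be verified with a line of arithmetic: 24 pointers of 21 bits each occupy 504 bits, which fits (with 8 bits to spare) in one 64-byte, i.e., 512-bit, memory granule, so an entire cache's worth of pointers can be spilled or refilled in a single granule-sized access.

```python
# Worked check of the 24-pointer / 21-bit sizing against a 64-byte granule.
POINTER_BITS = 21
POINTERS_PER_CACHE = 24
GRANULE_BITS = 64 * 8                       # 64-byte memory granule

used_bits = POINTER_BITS * POINTERS_PER_CACHE
assert used_bits == 504 and used_bits <= GRANULE_BITS
print(GRANULE_BITS - used_bits, "bits unused per granule")  # prints 8
```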
[0030] However, when using the free memory list caches 150(0)-150(L) as described above, conditions may arise in which unnecessary memory bandwidth is consumed during maintenance of the free memory list caches 150(0)-150(L). To better illustrate one such scenario, Figures 2A and 2B are provided. In Figures 2A and 2B, the contents of the free memory list caches 150(0) and 150(2), corresponding to the free memory list 148(0) for 64-byte memory blocks 125 and the free memory list 148(2) for 32-byte memory blocks 125, are shown. Each of the free memory list caches 150(0), 150(2) has 24 available slots in which pointers may be stored. As seen in Figure 2A, the free memory list cache 150(0) for 64-byte memory blocks 125 is fully occupied by pointers 200(0)-200(23), while the free memory list cache 150(2) for 32-byte memory blocks 125 currently stores only a pointer 202(0).

[0031] Now, consider a scenario in which the compression circuit 122 of Figure 1 performs an operation that results in allocation of a new memory block 125(32B) and deallocation of a memory block 125(64B), followed by an operation that results in deallocation of a memory block 125(32B) and allocation of a new memory block 125(64B). For example, consider a scenario in which a first previously compressed memory block 125(64B) is compressed to a smaller size (i.e., the stored compressed data was 64 bytes, but has been recompressed to 32 bytes), followed by a second previously compressed memory block 125(32B) being expanded to a larger size (i.e., the stored compressed data was 32 bytes, but has been expanded to 64 bytes).

[0032] When the first previously compressed memory block 125(64B) is deallocated, the currently used 64-byte memory block 125(64B) is freed, so the compression circuit 122 needs to add a pointer to the free memory list cache 150(0). However, as seen in Figure 2A, the free memory list cache 150(0) is already full, so the 24 pointers 200(0)-200(23) stored therein must be written to the free memory list 148(0) before the new pointer 200(0) is stored in the free memory list cache 150(0). To allocate the 32-byte memory block 125(32B), the last pointer 202(0) of the free memory list cache 150(2) is consumed, so 24 new pointers 202(0)-202(23) must be read from the free memory list 148(2) to replenish the free memory list cache 150(2). The contents of the free memory list caches 150(0), 150(2) after completion of these operations are illustrated in Figure 2B.

[0033] Referring now to Figure 2B, when the second previously compressed memory block 125(32B) is deallocated, a similar sequence of pointer reads and writes occurs. The compression circuit 122 needs to add a pointer to the free memory list cache 150(2), but, as seen in Figure 2B, the free memory list cache 150(2) is now full. Thus, the 24 pointers 202(0)-202(23) stored therein are written back to the free memory list 148(2) before the new pointer 202(0) is stored in the free memory list cache 150(2). To allocate the 64-byte memory block 125(64B), the pointer 200(0) of the free memory list cache 150(0) is consumed, requiring 24 new pointers to be read from the free memory list 148(0) to replenish the free memory list cache 150(0). After the free memory list caches 150(0), 150(2) have been updated, the contents of the free memory list caches 150(0), 150(2) revert back to those illustrated in Figure 2A.

[0034] The operations described above for writing pointers to and reading pointers from the system memory 114 consume memory bandwidth. As a result, they may cause other operations of the compression circuit 122 to stall while the full free memory list cache 150(0) is sending data to the system memory 114 and/or while the empty free memory list cache 150(2) is being refilled with data from the system memory 114. Moreover, if a series of similar, sustained operations takes place, operations of the compression circuit 122 may be stalled on every memory access attempt.
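To see why [0030]-[0034] describe a worst case, the toy simulation below (a hypothetical software model, not the hardware) counts granule-sized burst transfers for a single-buffer cache under the alternating recompress/expand pattern: every allocation that drains the cache forces a refill, and every deallocation into a full cache forces a spill, so in steady state each cache operation costs a transfer.

```python
# Hypothetical single-buffer free memory list cache, as in Figures 2A-2B:
# refill 24 pointers when the last pointer is consumed, and spill 24 pointers
# when a recycled pointer arrives while the cache is already full.

class SingleBufferCache:
    CAPACITY = 24

    def __init__(self, pointers):
        self.slots = list(pointers)
        self.transfers = 0                     # burst transfers to/from memory

    def allocate(self):
        ptr = self.slots.pop()
        if not self.slots:                     # last pointer consumed:
            self.slots = [0] * self.CAPACITY   # refill from the free list
            self.transfers += 1
        return ptr

    def deallocate(self, ptr):
        if len(self.slots) == self.CAPACITY:   # cache already full:
            self.slots.clear()                 # spill to the free list
            self.transfers += 1
        self.slots.append(ptr)

cache64 = SingleBufferCache([0] * 24)  # like 150(0): full
cache32 = SingleBufferCache([0])       # like 150(2): one pointer left
for _ in range(8):                     # alternating recompress/expand pattern
    cache64.deallocate(cache32.allocate())  # 64B block freed, 32B allocated
    cache32.deallocate(cache64.allocate())  # 32B block freed, 64B allocated
# One transfer per cache operation in steady state: prints 32.
print(cache64.transfers + cache32.transfers, "burst transfers for 32 operations")
```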
[0035] In this regard, Figure 3 illustrates a compression circuit 300 including a compress circuit 302 for reducing bandwidth consumption when performing free memory list cache maintenance. It is to be understood that the compression circuit 300 and the compress circuit 302 of Figure 3 correspond in functionality to the compression circuit 122 and the compress circuit 124, respectively, of Figure 1, and that some elements of the compress circuit 302 and the compression circuit 300 are omitted from Figure 3 for the sake of clarity. The compress circuit 302 includes free memory list caches 304(0)-304(3), which, like the free memory list caches 150(0)-150(L) of Figure 1, correspond to the free memory lists 148(0)-148(L) of Figure 1.

[0036] However, unlike the free memory list caches 150(0)-150(L) of Figure 1, the free memory list caches 304(0)-304(3) include a plurality of buffers (in this example, first buffers 306(0)-306(3) and second buffers 308(0)-308(3)). In the example of Figure 3, the size of each of the first buffers 306(0)-306(3) and the second buffers 308(0)-308(3) corresponds to a memory granule size of the system memory 114. Thus, when the memory granule size of the system memory 114 is 64 bytes, each of the first buffers 306(0)-306(3) and the second buffers 308(0)-308(3) is also 64 bytes in size. Note that some aspects may provide that the plurality of buffers 306(0)-306(3), 308(0)-308(3) provided by the free memory list caches 304(0)-304(3) have sizes that do not correspond to the memory granule size of the system memory 114.

[0037] The compress circuit 302 provides a low threshold value 310, which indicates a minimum number of pointers that may be stored in each of the free memory list caches 304(0)-304(3) before a refilling operation is triggered. Similarly, in some aspects, the compress circuit 302 may also provide a high threshold value 312 that indicates a maximum number of pointers that may be stored in each of the free memory list caches 304(0)-304(3) before an emptying operation is triggered. In exemplary operation, the compress circuit 302 is configured to perform a refill operation on the free memory list cache 304(0), for example, by refilling whichever of the first buffer 306(0) or the second buffer 308(0) is empty. Likewise, the compress circuit 302 according to some aspects may also perform an emptying operation on the free memory list cache 304(0) by emptying whichever of the first buffer 306(0) or the second buffer 308(0) is full.
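As one way to picture the mechanism of [0036]-[0037], here is a minimal software sketch of a two-buffer free memory list cache with low and high threshold values. It is an interpretation under stated assumptions (Python lists as buffers, the Figure 4 threshold values baked in), not the circuit itself.

```python
# Hypothetical two-buffer free memory list cache modeled on Figure 3: a refill
# fills whichever buffer is empty once the pointer count drops below the low
# threshold; a spill empties the full(er) buffer when a recycled pointer would
# push the count above the high threshold.

class DualBufferCache:
    BUF_SLOTS = 24        # one 64-byte granule of 21-bit pointers per buffer
    LOW, HIGH = 6, 46     # low threshold value 310, high threshold value 312

    def __init__(self, free_list):
        self.free_list = free_list   # backing free memory list in system memory
        self.bufs = [[], []]         # first buffer 306(x), second buffer 308(x)

    def count(self):
        return len(self.bufs[0]) + len(self.bufs[1])

    def allocate(self):
        if self.count() == 0:                   # cold start: demand refill
            self.bufs[0].extend(self.free_list.pop()
                                for _ in range(self.BUF_SLOTS))
        buf = self.bufs[0] if self.bufs[0] else self.bufs[1]
        ptr = buf.pop()                         # consume a cached pointer
        if self.count() < self.LOW:             # below low threshold:
            for b in self.bufs:
                if not b:                       # refill the empty buffer with
                    b.extend(self.free_list.pop()       # one burst read
                             for _ in range(self.BUF_SLOTS))
                    break
        return ptr

    def deallocate(self, ptr):
        if self.count() + 1 > self.HIGH:        # would exceed high threshold:
            full = max(self.bufs, key=len)      # empty the full(er) buffer
            self.free_list.extend(full)         # with one burst write
            full.clear()
        min(self.bufs, key=len).append(ptr)     # store the recycled pointer
```

Because each buffer is one granule, every spill or refill in this sketch is a single granule-sized burst, while the other buffer keeps absorbing allocations and deallocations; that is the bandwidth-saving behavior the next paragraphs walk through.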
[0038] To illustrate how the plurality of buffers 306(0)-306(3), 308(0)-308(3) of the free memory list caches 304(0)-304(3) reduce memory bandwidth consumption, Figures 4A and 4B are provided. Figures 4A and 4B illustrate the contents of the free memory list caches 304(0), 304(2), corresponding to the free memory list 148(0) for 64-byte memory blocks 125 and the free memory list 148(2) for 32-byte memory blocks 125 of Figure 1, in a scenario analogous to that described above with respect to Figures 2A and 2B. At the start, the second buffer 308(0) of the free memory list cache 304(0) contains 22 pointers 400(0)-400(21), while the first buffer 306(0) of the free memory list cache 304(0) is completely full with 24 pointers 402(0)-402(23). In contrast, the free memory list cache 304(2) stores only six (6) pointers 404(0)-404(5). In this example, the low threshold value 310 of Figure 3, indicating a minimum number of pointers stored in the free memory list caches 304(0), 304(2), has a value of six (6). It is also assumed that the high threshold value 312 of Figure 3, indicating a maximum number of pointers 400, 402 stored in the free memory list caches 304(0), 304(2), has a value of 46.

[0039] In Figure 4A, it is assumed that a first previously compressed memory block 125(64B) is compressed to a smaller size (i.e., the stored compressed data was 64 bytes, but has been recompressed to 32 bytes), followed by a second previously compressed memory block 125(32B) being expanded to a larger size (i.e., the stored compressed data was 32 bytes, but has been expanded to 64 bytes). Thus, when the first previously compressed memory block 125(64B) is deallocated, the currently used 64-byte memory block 125(64B) is freed, so the compression circuit 300 needs to add a pointer to the free memory list cache 304(0). Because the free memory list cache 304(0) already contains 46 pointers 400(0)-400(21), 402(0)-402(23), adding another pointer to the free memory list cache 304(0) will exceed the high threshold value 312. Accordingly, the 24 pointers 402(0)-402(23) stored in the full first buffer 306(0) are written to the free memory list 148(0) before the new pointer is stored in the second buffer 308(0). The contents of the second buffer 308(0) of the free memory list cache 304(0) otherwise remain unchanged. To allocate the 32-byte memory block 125(32B), the pointer 404(5) of the first buffer 306(2) of the free memory list cache 304(2) is consumed. After the pointer 404(5) is consumed, the compress circuit 302 determines that the number of remaining pointers 404(0)-404(4) in the free memory list cache 304(2) is below the low threshold value 310, so 24 new pointers are read from the free memory list 148(2) and used to replenish the empty second buffer 308(2) of the free memory list cache 304(2). The contents of the free memory list caches 304(0), 304(2) after completion of these operations are illustrated in Figure 4B.

[0040] Referring now to Figure 4B, when the second previously compressed memory block 125(32B) is deallocated, the compress circuit 302 needs to add a new pointer to the free memory list cache 304(2). As seen in Figure 4B, the free memory list cache 304(2) has plenty of room to store the new pointer alongside the pointers 404(0)-404(4) and 406(0)-406(23) without requiring a memory access to the system memory 114. Similarly, to allocate the 64-byte memory block 125(64B), the pointer 400(22) of the free memory list cache 304(0) is consumed. However, because the free memory list cache 304(0) still stores 22 pointers 400(0)-400(21), there is no need to access the system memory 114 to replenish the free memory list cache 304(0).
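Continuing the hypothetical DualBufferCache sketch above, the Figure 4A/4B sequence can be replayed directly: with the caches seeded as in Figure 4A (22 plus 24 pointers in the 64-byte cache, six in the 32-byte cache), the four operations cost one spill and one refill in total, rather than one transfer per operation. The pointer values below are made up for illustration.

```python
# Replaying the Figure 4A/4B walk-through with the DualBufferCache sketch.
free64 = [64000 + i for i in range(100)]   # stand-in free memory list 148(0)
free32 = [32000 + i for i in range(100)]   # stand-in free memory list 148(2)

cache64 = DualBufferCache(free64)
cache64.bufs[0] = [402000 + i for i in range(24)]  # first buffer 306(0): full
cache64.bufs[1] = [400000 + i for i in range(22)]  # second buffer 308(0): 22 pointers
cache32 = DualBufferCache(free32)
cache32.bufs[0] = [404000 + i for i in range(6)]   # six pointers, at the low threshold

cache64.deallocate(999)  # 64B block freed: spills the full buffer (Figure 4A)
cache32.allocate()       # 32B block allocated: drops below low threshold, refills
cache32.deallocate(888)  # 32B block freed: room available, no memory access
cache64.allocate()       # 64B block allocated: 23 pointers cached, no memory access
assert cache64.count() == 22 and cache32.count() == 30  # matches Figure 4B counts
```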
[0041] To illustrate exemplary operations of the compression circuit 300 for reducing bandwidth consumption during allocation of free memory blocks 125, Figure 5 is provided. For the sake of clarity, elements of Figures 1, 3, and 4A-4B are referenced in describing Figure 5. In Figure 5, operations begin with the compression circuit 300 allocating a free memory block 125 of a plurality of memory blocks 125 of a compressed data region 116 of a system memory 114 (block 500). Accordingly, the compression circuit 300 may be referred to herein as "a means for allocating a free memory block of a plurality of memory blocks of a compressed data region of a system memory." The compression circuit 300 then removes a pointer 404(5) from the free memory list cache 304(2) (block 502). In this regard, the compression circuit 300 may be referred to herein as "a means for removing the pointer from the free memory list cache, responsive to allocating the free memory block corresponding to the pointer cached in the free memory list cache."

[0042] The compression circuit 300 next determines whether a number of pointers 404(0)-404(4) of the free memory list cache 304(2) is below a low threshold value 310 indicating a minimum number of pointers 404(0)-404(4) for the free memory list cache 304(2) (block 504). The compression circuit 300 thus may be referred to herein as "a means for determining whether a number of pointers of the free memory list cache is below a low threshold value indicating a minimum number of pointers for the free memory list cache." If the compression circuit 300 determines at decision block 504 that the number of pointers 404(0)-404(4) of the free memory list cache 304(2) is below the low threshold value 310, the compression circuit 300 reads a plurality of pointers 406(0)-406(23), corresponding in size to the second buffer 308(2), from the free memory list 148(2) (block 506). Accordingly, the compression circuit 300 may be referred to herein as "a means for reading a plurality of pointers, corresponding in size to a buffer of the plurality of buffers, from the free memory list, responsive to determining that a number of pointers of the free memory list cache is below the low threshold value." The compression circuit 300 then replenishes an empty buffer (i.e., the second buffer 308(2)) with the plurality of pointers 406(0)-406(23) (block 508). In this regard, the compression circuit 300 may be referred to herein as "a means for replenishing an empty buffer of the plurality of buffers with the plurality of pointers." Processing then continues at block 510. If the compression circuit 300 determines at decision block 504 that the number of pointers 404(0)-404(4) of the free memory list cache 304(2) is not below the low threshold value 310, processing continues at block 510.
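For readers who prefer the flowchart as code, below is a compact rendition of the Figure 5 allocation flow, keyed to the block numbers in the text. The flat `free_cache` and `free_list` lists are illustrative simplifications of the two-buffer structure, not names from the patent.

```python
# Figure 5 allocation flow, with the block numbers from the text as comments.
def figure5_allocate(free_cache, free_list, low_threshold, buf_slots=24):
    ptr = free_cache.pop()                    # blocks 500/502: allocate the free
                                              # block and remove its pointer
    if len(free_cache) < low_threshold:       # decision block 504
        refill = [free_list.pop()
                  for _ in range(buf_slots)]  # block 506: read one buffer's worth
        free_cache.extend(refill)             # block 508: replenish empty buffer
    return ptr                                # block 510: continue processing
```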
[0043] Figure 6 is provided to illustrate exemplary operations of the compression circuit 300 for reducing bandwidth consumption during deallocation of memory blocks 125. Elements of Figures 1, 3, and 4A-4B are referenced in describing Figure 6 for the sake of clarity. Operations in Figure 6 begin with the compression circuit 300 deallocating a memory block 125 of the plurality of memory blocks 125 of the compressed data region 116 of the system memory 114 (block 600). The compression circuit 300 thus may be referred to herein as "a means for deallocating a memory block of the plurality of memory blocks of the compressed data region of the system memory."

[0044] The compression circuit 300 then determines whether a number of pointers 400, 402 of the free memory list cache 304(0) exceeds the high threshold value 312 (block 602). Accordingly, the compression circuit 300 may be referred to herein as "a means for determining whether a number of pointers of the free memory list cache exceeds a high threshold value indicating a maximum number of pointers for the free memory list cache, responsive to deallocating the memory block of the plurality of memory blocks of the compressed data region of the system memory." If the compression circuit 300 determines at decision block 602 that a number of pointers 400(0)-400(21), 402(0)-402(23) of the free memory list cache 304(0) exceeds the high threshold value 312, the compression circuit 300 writes a plurality of pointers 402(0)-402(23) of a full buffer (i.e., the first buffer 306(0)) to the free memory list 148(0) (block 604). In this regard, the compression circuit 300 may be referred to herein as "a means for writing a plurality of pointers from a full buffer of the plurality of buffers to the free memory list, responsive to determining that a number of pointers of the free memory list cache exceeds the high threshold value." The compression circuit 300 next empties the first buffer 306(0) of the free memory list cache 304(0) (block 606). The compression circuit 300 thus may be referred to herein as "a means for emptying the full buffer of the plurality of buffers." Processing then continues at block 608. If the compression circuit 300 determines at decision block 602 that a number of pointers 400, 402 of the free memory list cache 304(0) does not exceed the high threshold value 312, processing resumes at block 608.
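A matching rendition of the Figure 6 deallocation flow, again keyed to the block numbers above and using the same illustrative flat-list simplification:

```python
# Figure 6 deallocation flow, with the block numbers from the text as comments.
def figure6_deallocate(ptr, free_cache, free_list, high_threshold, buf_slots=24):
    free_cache.append(ptr)                        # block 600: recycle the pointer
                                                  # on deallocation
    if len(free_cache) > high_threshold:          # decision block 602
        free_list.extend(free_cache[:buf_slots])  # block 604: write one full
                                                  # buffer's pointers out
        del free_cache[:buf_slots]                # block 606: empty that buffer
    # block 608: continue processing
```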
[0045] Reducing bandwidth consumption when performing free memory list cache maintenance in compressed memory schemes of processor-based systems may be provided in or integrated into any processor-based device. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a global positioning system (GPS) device, a mobile phone, a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a tablet, a phablet, a server, a computer, a portable computer, a mobile computing device, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, an automobile, a vehicle component, avionics systems, a drone, and a multicopter.

[0046] In this regard, Figure 7 illustrates an example of a processor-based system 700 that includes a processor 702, including one or more processor cores 704. The processor-based system 700 is provided in an integrated circuit (IC) 706. The IC 706 may be included in or provided as a system-on-a-chip (SoC) 708 as an example. The processor 702 includes a cache memory 710 that includes metadata 712 for its uncompressed cache entries for use in mapping evicted cache entries to physical addresses in a compressed system memory 714 as part of a compression memory 716 in a compressed memory system 718. For example, the processor 702 may be the processor 110 in Figure 1, the cache memory 710 may be the cache memory 108 in Figure 1, and the compressed memory system 102 in Figure 1 may be the compressed memory system 718, as non-limiting examples. A compression circuit 720 is provided for compressing and decompressing data to and from the compressed memory system 718. The compression circuit 720 may be provided in the processor 702 or outside of the processor 702 and communicatively coupled to the processor 702 through a shared or private bus. The compression circuit 720 may be the compression circuit 300 in Figure 3 as a non-limiting example.

[0047] The processor 702 is coupled to a system bus 722 to intercouple master and slave devices included in the processor-based system 700. The processor 702 can also communicate with other devices by exchanging address, control, and data information over the system bus 722. Although not illustrated in Figure 7, multiple system buses 722 could be provided, wherein each system bus 722 constitutes a different fabric. For example, the processor 702 can communicate bus transaction requests to the compressed memory system 718 as an example of a slave device. Other master and slave devices can be connected to the system bus 722. As illustrated in Figure 7, these devices can include one or more input devices 724. The input device(s) 724 can include any type of input device, including but not limited to input keys, switches, voice processors, etc. The input device(s) 724 may be included in the IC 706 or external to the IC 706, or a combination of both. Other devices that can be connected to the system bus 722 can also include one or more output devices 726 and one or more network interface devices 728. The output device(s) 726 can include any type of output device, including but not limited to audio, video, other visual indicators, etc. The output device(s) 726 may be included in the IC 706 or external to the IC 706, or a combination of both. The network interface device(s) 728 can be any devices configured to allow exchange of data to and from a network 730. The network 730 can be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 728 can be configured to support any type of communications protocol desired.

[0048] Other devices that can be connected to the system bus 722 can also include one or more display controllers 732 as examples. The processor 702 may be configured to access the display controller(s) 732 over the system bus 722 to control information sent to one or more displays 734. The display controller(s) 732 can send information to the display(s) 734 to be displayed via one or more video processors 736, which process the information to be displayed into a format suitable for the display(s) 734. The display controller(s) 732 and/or the video processor(s) 736 may be included in the IC 706 or external to the IC 706, or a combination of both.

[0049] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. The master devices and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality.
How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

[0050] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0051] The aspects disclosed herein may be embodied in hardware and in computer-executable instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of non-transitory computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

[0052] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flow chart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0053] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure.
Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Embodiments of the disclosure are drawn to apparatuses and methods for determining extremum numerical values. Numerical values may be stored in files of a stack, with each bit of a numerical value stored in a content addressable memory (CAM) cell of the file. Each file may be associated with an accumulator circuit, which provides an accumulator signal. An extremum search operation may be performed in which a sequence of comparison bits is compared in a bit-by-bit fashion to each bit of the numerical values. The accumulator circuits each provide an accumulator signal which indicates whether the numerical value in the associated file is an extremum value or not. Examples of extremum search operations include finding a maximum of the numerical values and a minimum of the numerical values.
1. A device comprising: a stack including a plurality of files, wherein each of the plurality of files is configured to store a value comprising a plurality of stored bits; a control logic circuit configured to provide a comparison signal to one of the stored bits in each of the plurality of files, wherein each of the plurality of files is configured to provide one of a plurality of matching signals, each of the plurality of matching signals indicating whether the comparison signal matches said one of the plurality of stored bits of the value in the associated one of the plurality of files; and a plurality of accumulator circuits, each associated with a respective one of the plurality of files, each of the plurality of accumulator circuits including a latch circuit configured to store an accumulator signal, wherein each accumulator circuit is configured to keep the accumulator signal at a first logic level if the matching signal from the associated one of the plurality of files indicates a match, and to change the accumulator signal to a second logic level when the matching signal does not indicate a match, unless none of the matching signals of the files having the accumulator signal at the first logic level indicates a match, in which case the accumulator signal is kept at the first logic level.

2. The device of claim 1, wherein the plurality of accumulator circuits are commonly coupled to a signal line carrying an any-match signal, and wherein each of the plurality of accumulator circuits is configured to change a voltage of the signal line from a first voltage to a second voltage when the matching signal associated with that accumulator circuit indicates a match.

3. The device of claim 2, wherein the control logic circuit provides a sampling signal to the plurality of accumulator circuits in common, wherein, in response to the sampling signal, each of the plurality of accumulator circuits changes the state of its accumulator signal from the first level to the second level if the associated one of the matching signals does not indicate a match, and wherein the control logic circuit does not provide the sampling signal when the signal line is at the second voltage.

4. The device of claim 1, further comprising a memory array including a plurality of word lines, wherein each of the plurality of files is configured to store a row address associated with one of the plurality of word lines, and wherein the value is associated with a number of accesses to the one of the plurality of word lines.
5. The device of claim 4, wherein victims associated with the row addresses stored in the plurality of files are refreshed based on the logic level of the accumulator signal associated with each file.

6. The device of claim 1, wherein the control logic circuit is configured to provide a signal indicating which of the plurality of files is associated with an accumulator signal at the first logic level.

7. A method comprising: providing a sequence of comparison bits to a content addressable memory (CAM) stack including a plurality of files, wherein each comparison bit is provided to one CAM cell in each of the plurality of files; performing a comparison operation for each bit in the sequence of comparison bits, wherein the comparison operation includes: determining, for each bit in the sequence of comparison bits, whether the comparison bit matches the stored bit in the CAM cell; determining, for each bit in the sequence of comparison bits, whether the comparison bit matches any of the stored bits in any of the files of the CAM stack; and setting values of a plurality of accumulator signals each associated with one of the files, wherein the value of a respective one of the plurality of accumulator signals is changed from a first level to a second level when the comparison bit does not match the stored bit and when the comparison bit matches at least one file of the CAM stack whose accumulator signal is at the first level; and determining, based on the values of the plurality of accumulator signals, which of the plurality of files contains an extremum value.

8. The method of claim 7, wherein the sequence of comparison bits is provided bit by bit from a most significant bit to a least significant bit.

9. The method of claim 7, wherein each comparison bit is provided at a high logic level and the extremum value is a maximum value.

10. The method of claim 7, wherein each comparison bit is provided at a low logic level and the extremum value is a minimum value.

11. The method of claim 7, further comprising keeping one of the accumulator signals that is at the first level at the first level and setting the rest of the accumulator signals that are at the first level to the second level to resolve a tie.

12. The method of claim 7, further comprising, after each comparison operation, determining whether the extremum value has been identified and, responsive to the determination, not performing the rest of the comparison operations.

13. The method of claim 7, further comprising writing a new value to a selected file of the plurality of files that is associated with an accumulator signal at the first level.
14. The method of claim 7, further comprising setting the plurality of accumulator signals to a first state before providing the sequence of comparison bits.

15. A device comprising: a first stack including a plurality of content addressable memory (CAM) cells configured to store a first value, wherein each bit of the first value is stored in one of the plurality of CAM cells; a second stack including a plurality of CAM cells configured to store a second value, wherein each bit of the second value is stored in one of the plurality of CAM cells; a first accumulator circuit associated with the first stack, wherein the first accumulator circuit is configured to provide a first accumulator bit; a second accumulator circuit associated with the second stack, wherein the second accumulator circuit is configured to provide a second accumulator bit; and a control logic circuit configured to provide a sequence of comparison bits to the first stack and the second stack, wherein the first accumulator circuit and the second accumulator circuit are each configured to change a state of the respective first accumulator bit or second accumulator bit based on a comparison of the first value and the second value with the comparison bits, and wherein the control logic circuit is further configured to determine an extremum value between the first value and the second value based on the first accumulator bit and the second accumulator bit.

16. The device of claim 15, wherein each of the plurality of CAM cells in the first stack and the second stack includes a latch portion configured to store a bit of the first value or the second value, and a comparator portion configured to change a state of a corresponding first or second match signal based on a comparison of the comparison bit with the stored bit.

17. The device of claim 15, wherein the control logic circuit is configured to provide the sequence of comparison bits such that the comparison bits are provided, from a most significant bit to a least significant bit, to selected bits of the first plurality of CAM cells and to corresponding selected bits of the second plurality of CAM cells.

18. The device of claim 15, wherein the first accumulator circuit and the second accumulator circuit each include a latch circuit configured to store the respective first accumulator bit or second accumulator bit.

19. A device comprising: a memory array including a plurality of word lines; and an attacker detector circuit configured to receive a row address associated with one of the plurality of word lines, the attacker detector circuit comprising: a stack including a plurality of files, each of the plurality of files configured to store a row address and a count value associated with each of the stored row addresses; a plurality of accumulator circuits, each associated with one of the plurality of files, wherein the plurality of accumulator circuits are configured to store accumulator bits associated with the count values; and a control logic circuit configured to provide a sequence of comparison bits to the plurality of files, wherein each of the plurality of accumulator circuits is configured to change a state of the respective accumulator bit based on a comparison of the associated count value with the comparison bits, and wherein the control logic circuit is further configured to determine an extremum of the count values in the plurality of files based on the associated accumulator bits.
20. The device of claim 19, wherein the control logic circuit is configured to determine the extremum value as a minimum value, and wherein, when the attacker detector circuit receives the row address, the row address associated with the minimum value is replaced responsive to the plurality of files being full.

21. The device of claim 19, further comprising a refresh address generator configured to determine a victim address based on a received matching address, wherein the control logic circuit is configured to determine the extremum value as a maximum value, and wherein the row address associated with the maximum value serves as the matching address.

22. The device of claim 19, wherein each of the plurality of files includes a plurality of content addressable memory (CAM) cells each configured to store a bit of the row address or the count value.

23. The device of claim 22, wherein each of the plurality of CAM cells includes a latch portion configured to store the bit, and a comparator portion configured to compare a provided comparison bit with the stored bit and to change a state of a matching signal based on the comparison.
Apparatuses, Systems, and Methods for Determining an Extremum Value

Technical Field

The present invention relates generally to semiconductor devices, and more specifically to semiconductor components for storing bits.

Background

Semiconductor logic devices generally operate in binary logic, where signals and information are stored as one or more bits, each of which can be at a high logic level or a low logic level. There may be several applications in which it is useful to store values coded as binary numbers (where each digit of the binary number is stored as a bit). For example, a memory device may store a numerical value as a count of access operations on a word line of the memory. In many applications, it may further be desirable to determine the maximum (and/or minimum) of the stored values, for example, to identify the word line that has been accessed the most times.

Summary of the Invention

One aspect relates to a device. The device includes: a stack including a plurality of files, wherein each of the plurality of files is configured to store a value including a plurality of stored bits; a control logic circuit configured to provide a comparison signal to one of the stored bits in each of the plurality of files, wherein each of the plurality of files is configured to provide one of a plurality of matching signals, each of the plurality of matching signals indicating whether the comparison signal matches the one of the plurality of stored bits of the value in the associated one of the plurality of files; and a plurality of accumulator circuits, each of which is associated with a respective one of the plurality of files, each of the plurality of accumulator circuits including a latch circuit configured to store an accumulator signal, wherein each accumulator circuit is configured to keep the accumulator signal at a first logic level if the matching signal from the associated one of the plurality of files indicates a match, and to change the accumulator signal to a second logic level when the matching signal does not indicate a match, unless none of the matching signals of the files having the accumulator signal at the first logic level indicates a match, in which case the accumulator signal is kept at the first logic level.

Another aspect involves a method. The method includes: providing a sequence of comparison bits to a content addressable memory (CAM) stack including a plurality of files, wherein each comparison bit is provided to one CAM cell in each of the plurality of files; performing a comparison operation for each bit in the comparison bit sequence, where the comparison operation includes: for each bit in the comparison bit sequence, determining whether the comparison bit matches the stored bit in the CAM cell; for each bit in the comparison bit sequence, determining whether the comparison bit matches any of the stored bits in any of the files of the CAM stack; and setting the values of a plurality of accumulator signals each associated with one of the files, where the value of the corresponding one of the plurality of accumulator signals is changed from a first level to a second level when the comparison bit does not match the stored bit and when the comparison bit matches at least one file of the CAM stack whose accumulator signal is at the first level; and determining, based on the values of the plurality of accumulator signals, which of the plurality of files contains the extremum value.

Another aspect relates to a device.
The device includes: a first stack, which includes a plurality of content addressable memory (CAM) cells configured to store a first value, wherein each bit of the first value is stored in one of the plurality of CAM cells; a second stack, which includes a plurality of CAM cells configured to store a second value, wherein each bit of the second value is stored in one of the plurality of CAM cells; a first accumulator circuit, which is associated with the first stack, wherein the first accumulator circuit is configured to provide a first accumulator bit; a second accumulator circuit, which is associated with the second stack, wherein the second accumulator circuit is configured to provide a second accumulator bit; and a control logic circuit configured to provide a sequence of comparison bits to the first stack and the second stack, wherein the first accumulator circuit and the second accumulator circuit are each configured to change the state of the corresponding first accumulator bit or second accumulator bit based on the comparison of the first value and the second value with the comparison bits, and wherein the control logic circuit is further configured to determine the extremum value between the first value and the second value based on the first accumulator bit and the second accumulator bit.

Another aspect relates to a device. The device includes: a memory array including a plurality of word lines; and an attacker detector circuit configured to receive a row address associated with one of the plurality of word lines. The attacker detector circuit includes: a stack including a plurality of files, each of the plurality of files being configured to store a row address and a count value associated with each of the stored row addresses; a plurality of accumulator circuits, each associated with one of the plurality of files, wherein the plurality of accumulator circuits are configured to store accumulator bits associated with the count values; and a control logic circuit configured to provide a sequence of comparison bits to the plurality of files, wherein each of the plurality of accumulator circuits is configured to change the state of the respective accumulator bit based on a comparison of the associated count value with the comparison bits, and wherein the control logic circuit is further configured to determine the extremum of the count values in the plurality of files based on the associated accumulator bits.

Description of the Drawings

Figure 1 is a block diagram of a stack according to an embodiment of the invention.
Figure 2 is a flowchart of a method for performing an extremum search operation according to an embodiment of the present invention.
Figure 3 is a schematic diagram of a CAM cell according to an embodiment of the present invention.
Figure 4 is a schematic diagram of an accumulator circuit according to an embodiment of the present invention.
Figure 5 is a block diagram showing the overall configuration of a semiconductor device according to at least one embodiment of the present invention.
Figure 6 is a block diagram of a refresh address control circuit according to an embodiment of the present invention.
Figure 7 is a block diagram of an attacker detector circuit according to the present invention.

Detailed Description

The following description of certain embodiments is merely exemplary in nature and is in no way intended to limit the present invention or the scope of its application or use.
In the following detailed description of embodiments of the systems and methods of the present invention, reference is made to the accompanying drawings that form a part hereof and that illustrate, by way of illustration, specific embodiments in which the described systems and methods can be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it should be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the present invention. In addition, for purposes of clarity, detailed descriptions of certain features that are apparent to those skilled in the art are not discussed, so as not to obscure the description of the embodiments of the present invention. Therefore, the following detailed description should not be regarded as limiting, and the scope of the present invention is defined only by the appended claims.

Information in a semiconductor device may generally be represented by one or more binary bits, where each bit is at a high logic level (for example, 1) or a low logic level (for example, 0). Each bit can be stored in a memory cell (such as a latch circuit). The memory cell can store a specific bit of information, which can later be retrieved and/or overwritten by a new bit of information to be stored. Groups of memory cells can be organized together to form a file (or register) that stores information (e.g., data) containing several bits. Several files (e.g., registers) can be organized into stacks (e.g., data storage units) to store multiple pieces of information (e.g., each file can have N latch circuits to store N bits of information, and the stack can have M files). The number of files in the stack can generally be referred to as the depth of the stack. The number of latch circuits in a register can generally be referred to as the width of the stack.

The stack can contain values stored in different registers. Each value can be expressed as a binary number, where each bit of the binary number is stored in a different memory cell. It may be desirable to search the stack to determine which of the stored values is an extremum (e.g., a maximum and/or a minimum). However, reading out all of the values, interpreting them, and searching for the maximum/minimum across the register stack can be a time-consuming and/or power-consuming process. It may therefore be desirable to search the stack for an extremum by examining the values on a bit-by-bit basis.

The present invention relates to apparatuses, systems, and methods for determining maximum and minimum values stored in content addressable memory (CAM) cells. The stack may contain several files, each of which consists of several CAM cells. A value (expressed as a binary number) can be stored in one or more of the files, where each bit of the value is in one of the CAM cells of the file. Each CAM cell may have a latch portion to store a bit and a comparator portion to return a matching signal based on a comparison between the stored bit and an external bit. Each file can be coupled to an accumulator circuit that stores an accumulator signal. During an extremum search operation, a comparison bit can be compared with each bit in each of the files. The comparison bits can be provided starting with the most significant bit and working down to the least significant bit.
For example, the comparison bit can be compared with each of the most significant bits in each of the files, followed by the next most significant bit in each of the files, and so on. In some embodiments, the comparison bit can be compared with each of the most significant bits in all files at the same time. The accumulator circuit may change the state of the accumulator signal based on the matching signal from the file in the stack when the comparison bit is provided. The extremum (for example, the maximum or minimum value) stored in the stack can be identified based on the states of the accumulator signals.

Figure 1 is a block diagram of a stack according to an embodiment of the invention. The stack 100 includes a number of files 102, and each of the files 102 includes a number of content addressable memory (CAM) cells 104. The CAM cells 104 in a file 102 are commonly coupled to a signal line carrying the corresponding matching signal Match, based on the comparison of the stored bit Q in the CAM cell 104 with the external signal Compare. Each file 102 is coupled to an accumulator circuit 106, which stores the accumulator signal MaxMin and determines the state of the accumulator signal MaxMin. The stack 100 also includes a control logic circuit 110 that can provide signals to perform comparison operations as part of the extremum search operation to determine the file 102 containing the extremum value. Various actions can be performed based on the extremum search operation. For example, the control logic may provide a signal YMaxMin indicating an index (or indexes) of one or more of the files 102 containing the extremum (e.g., the maximum or minimum).

The stack 100 includes a number of files 102, and each file 102 can store data in one or more fields of the file 102. Each field may include a number of CAM cells 104, and each of the CAM cells 104 stores a bit of the data in the field. For the sake of brevity and clarity, the files 102 of FIG. 1 are shown as containing only a single field containing a value. Other example embodiments may contain multiple fields per file 102; some of the fields may hold numeric values (for example, count values) and some of the fields may hold non-numeric data (for example, row addresses, flags). Generally, the CAM cells 104 of each field are commonly coupled to different signal lines carrying the matching signal of that field. Since the example of FIG. 1 only includes a single field, only a single matching signal line is shown and discussed.

The stack 100 may have a certain number of files 102, which is generally referred to as the depth of the stack 100. The example stack 100 of FIG. 1 includes n files 102, which can generally be designated by an index Y. Therefore, a given file 102 can be called File(Y), where Y is any value between 0 and n (including 0 and n). Each file 102 of the stack 100 may include a certain number of CAM cells 104, which is generally referred to as the width of the stack 100. The example stack 100 of FIG. 1 includes m different CAM cells 104, which can generally be designated by an index X. Therefore, a given CAM cell 104 in a specified file 102 can be referred to as Cell(X), where X is any value between 0 and m (including 0 and m). Each CAM cell 104 can store a bit of information, which can generally be referred to as a stored bit Q(X), where X is any value between 0 and m (including 0 and m).
Therefore, the stack 100 may have n x m CAM cells 104 in total.

All CAM cells 104 in a given file 102 may be commonly coupled to a signal line that provides a matching signal Match. Since there are as many Match signals as there are files 102, the matching signals can generally be referred to as Match(Y), where Y is any number between 0 and n (including 0 and n). Each file 102 can receive external information, which can be compared with one or more designated bits Q stored in the CAM cells 104 of the file 102. Each of the CAM cells 104 within a given File(Y) 102 may be able to change the state of the corresponding signal Match(Y) based on the comparison between the stored bit Q and the provided external information.

In some embodiments, the CAM cells 104 may use dynamic logic to determine the state of the associated signal Match(Y). For example, all signals Match(Y) may be precharged to a first logic level indicating a match between the stored bit Q and the provided external bit. The signals Match(Y) can be precharged by one or more driver circuits (not shown). In some embodiments, the accumulator circuit 106 coupled to the signal line Match(Y) may include a driver circuit and may precharge the signal line in response to a control signal from the control logic 110. If a CAM cell 104 determines that there is no match, it may change the state of the associated signal Match(Y) to a second logic level indicating that there is no match. The operation of an example CAM cell 104 is described in more detail in FIG. 3.

Each of the signals Match(Y) is coupled to an accumulator circuit 106. There may be n accumulator circuits, matching the depth of the stack 100. Each accumulator circuit 106 has an accumulator latch 108 that stores the accumulator signal MaxMin(Y). At the beginning of the extremum search operation, all accumulator signals MaxMin can be set to a first level (for example, a high logic level). As the comparison signals Compare(X) are provided, the state of an accumulator signal MaxMin can be changed to a second level to indicate that the associated file 102 has been disqualified and does not contain the extremum value. After the control logic circuit 110 performs the extremum search operation, the state of the accumulator signal MaxMin(Y) indicates whether the associated file 102 holds the extremum. The structure and operation of an example accumulator circuit are described in more detail in FIG. 4.

The control logic circuit 110 may perform an extremum search operation by performing a sequence of comparison operations. During one of these comparison operations, the control logic 110 may provide a comparison signal Compare(X). The signal Compare(X) may be a signal provided to all CAM cells 104 with a specific index X in all files 102. The state of the match signals Match can be used to determine whether the state of the stored bit Q(X) in any one of the files 102 matches the state of the compare signal Compare(X). For example, Compare(X) can be provided at a high logic level. In that case, if the bit Q(X) in a given File(Y) is at a high logic level, then the matching signal Match(Y) can be at a high logic level, and otherwise the matching signal Match(Y) can be at a low logic level. The state of the signal Compare(X) can determine the type of extremum search operation. If the signal Compare(X) is provided at a high logic level, the extremum search operation may be a maximum value search operation.
If the signal Compare(X) is provided at a low logic level, the extremum search operation may be a minimum value search operation.

In some embodiments, during the extremum search operation, the control logic circuit 110 can perform a comparison operation (for example, provide Compare(X)) for each bit, from the most significant bit Q(m) to the least significant bit Q(0) in the files 102. The accumulator circuits 106 can be set so that, during the extremum search operation, all accumulator signals MaxMin(Y) start the extremum search operation in a first state and change to a second state if Match(Y) indicates that there is no match during the current comparison operation, unless all signals Match indicate that there is no match in a given comparison operation, in which case the accumulator signal MaxMin(Y) can remain in the same state regardless of the state of the associated match signal Match(Y).

The accumulator circuits 106 may be commonly coupled to a signal AnyMatchF, which indicates whether any of the matching signals Match(Y) indicates a match after each of the comparison signals Compare(X) is provided. The signal AnyMatchF may indicate whether there is a match for any of the accumulator circuits in which the accumulator signal MaxMin(Y) is still in the initial state (e.g., a high logic level). An accumulator circuit 106 in which the accumulator signal MaxMin(Y) has changed to the second state (for example, a low logic level) may not be used to determine the state of the signal AnyMatchF.

The accumulator circuits 106 can use the signal AnyMatchF to determine, in part, whether to change the state of the accumulator signal MaxMin(Y). In some example embodiments, the control logic circuit 110 may receive all of the matching signals Match, and may provide the signal AnyMatchF based on the states of the signals Match(Y) and their associated accumulator signals MaxMin(Y). In some example embodiments, the accumulator circuits 106 may be commonly coupled to the signal line carrying AnyMatchF, and each of the accumulator circuits 106 may change the state of the signal AnyMatchF on the signal line based on the state of the matching signal Match(Y) coupled to that accumulator circuit 106 and the state of the accumulator signal MaxMin(Y) stored in that accumulator circuit 106. During a given extremum search operation, once the accumulator signal MaxMin(Y) has changed to the second state, it may not change back to the first logic level until after the extremum search operation ends. The process of performing the extremum search operation is described in more detail in Figure 2.

In some embodiments, after all comparison operations performed as part of the extremum search operation, the control logic circuit 110 and/or the accumulator circuits 106 can resolve any tie. For example, if the extremum search operation is a search for the maximum value, it may happen that more than one of the files 102 contains the same value as the maximum value.
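Before turning to tie resolution, the search described above can be summarized in a short behavioral sketch. This is a software model of the dataflow only, under the assumption that the stored values are available as ordinary integers; the hardware instead uses precharged Match(Y) lines and a shared AnyMatchF signal, and the function and variable names below are illustrative rather than taken from the disclosure.

```python
def extremum_search(values, width, find_max=True):
    """Behavioral model of the bit-serial extremum search.

    values: list of non-negative integers (one per file), each 'width' bits.
    Returns the per-file MaxMin flags; surviving files hold the extremum.
    """
    maxmin = [True] * len(values)          # all accumulator signals start high
    compare = 1 if find_max else 0         # high compare bits -> max search
    for x in range(width - 1, -1, -1):     # MSB Q(m) down to LSB Q(0)
        # Match(Y) is high when the stored bit Q(X) equals the compare bit.
        match = [((v >> x) & 1) == compare for v in values]
        # AnyMatchF: is there a match in any file still in the running?
        any_match = any(m and q for m, q in zip(match, maxmin))
        if any_match:
            # Disqualify still-qualified files whose bit did not match.
            maxmin = [q and m for q, m in zip(maxmin, match)]
        # Otherwise all accumulator signals keep their current state.
    return maxmin

flags = extremum_search([5, 7, 7, 2], width=3, find_max=True)
print(flags)  # [False, True, True, False]: files 1 and 2 tie at the maximum
```

The retention rule modeled by the any_match check is what allows, for example, a minimum search to skip a bit position at which every still-qualified value stores a 1; without it, every file would be disqualified at that position.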
The stack 100 can, for example, keep the accumulator signal MaxMin(Y) associated with the file 102 having the lowest index (e.g., the File(Y) with the lowest value of Y among the files 102 containing the maximum value that caused the tie) at a high level, and set the accumulator signals MaxMin of the other accumulator circuits 106 to a low level, to resolve the tie.

In some embodiments, the accumulator circuits 106 may be coupled together in a 'daisy chain' manner, so that a given accumulator (Y) receives the accumulator signal MaxMin(Y-1) from the previous accumulator (Y-1) and provides its accumulator signal MaxMin(Y) to the next accumulator (Y+1). Additional control signals (not shown) can also be coupled between the accumulator circuits 106 in a 'daisy chain' manner. These daisy-chained signals may allow the accumulator circuits 106 to resolve a tie, so that only a single one of the accumulator signals MaxMin(Y) remains at a high level after the extremum search operation.

In some embodiments, the control logic 110 may determine that no extremum was found, and may provide one or more signals (not shown) indicating that the extremum search operation did not return a value. For example, if all files 102 contain a value of 0, the extremum search operation may not return a result. In some embodiments, in addition to (or instead of) the signal indicating that no extremum was found, the control logic 110 may still indicate a specific file (for example, with the signal YMaxMin).

In addition to providing the comparison signals Compare(X) and the signal AnyMatchF, the control logic 110 may also provide other control signals, denoted as the signal Control in FIG. 1. The control signals Control can be used to operate the accumulator circuits 106 during the extremum search operation. The control logic circuit 110 may be a state machine that provides different control signal sequences to operate the accumulator circuits 106. For example, one of the control signals can be used to indicate that an extremum search operation is about to start and that the states of all accumulator signals MaxMin(Y) should be set to the first state. The control signals Control may be provided to the accumulator circuits 106 in common. The different example control signals and their operation are discussed in more detail in Figure 4.

For clarity, FIG. 1 shows only the signals of the stack 100 that are coupled to the control logic circuit 110 and used in the extremum search operation. The stack 100 can also be coupled to input data and a write signal, which can be used to rewrite one or more of the bits Q in a file 102 with the associated bits of the input data. The stack 100 may also provide one or more of the stored bits Q from one or more of the files 102. For example, a given file 102 may provide its stored value (e.g., bits Q(0 to m)) to a counter circuit, which may update the value (e.g., increment it) and then provide a write signal so that the updated value is written back to the CAM cells 104 in the file 102. In some embodiments, the state of the write signal that determines whether new data can be written to a file 102 may be determined in part by the accumulator circuit 106 based on the state of the accumulator signal MaxMin.

Figure 2 is a flowchart of a method for performing an extremum search operation according to an embodiment of the present invention. In some embodiments, the method 200 may be implemented by the stack 100 of FIG. 1.
The method 200 may generally begin with block 205, which describes setting the accumulator signals to a first state. The accumulator signal (for example, MaxMin(Y) of FIG. 1) may be stored in an accumulator latch in the accumulator circuit (for example, the accumulator latch 108 in the accumulator circuit 106 of FIG. 1). In some embodiments, all accumulator signals can be set to the first state. In some embodiments, the accumulator signal may be a one-bit signal (e.g., an accumulator bit) and the first state may be a high logic level. In some embodiments, the control logic circuit (for example, 110 of FIG. 1) may send an initialization signal. In response to receiving the initialization signal, the accumulator circuit may store the high logic level as the accumulator signal in its accumulator latch. In some embodiments, block 205 may also include setting various other signals to an initial state. For example, the previous match signal (described in more detail in FIG. 4) can also be set to an initial inactive level.

Block 205 may generally be followed by block 210, which describes providing a first comparison bit from a sequence of comparison bits. Each comparison bit (e.g., Compare(X) of FIG. 1) may be provided to all bits in a given position in each of the files of the stack (e.g., files 102 of FIG. 1). For example, if the given comparison bit in the sequence is the Xth bit, then the comparison bit can be provided collectively to the bits Q(X) in all files. In some embodiments, the sequence may start with the most significant bit (e.g., Q(m)) and then count down, bit by bit, to the least significant bit (e.g., Q(0)). Therefore, at block 210 the first comparison bit may be provided to bit Q(i), and then the next comparison bit may be provided to bit Q(i-1), etc.

The state of the comparison bits determines the type of extremum search operation being performed. For example, if the extremum search operation is to search for the largest value in the stack, then the comparison bits may be provided at a high logic level. If the extremum search operation is to search for the smallest value in the stack, the comparison bits can be provided at a low logic level.

Block 210 may generally be followed by block 215, which describes precharging the matching signals to a first state. The signal lines carrying the matching signals (for example, Match(Y) of FIG. 1) can each be charged to a voltage level representing a high logic level. The control logic may send a precharge signal to a driver circuit (which may be located in the accumulator circuit). The precharge signal can activate the driver circuit to precharge the signal line.

Block 215 can generally be followed by block 220, which describes comparing the comparison bit with the corresponding stored bit in each file. As previously discussed, the compare bit Compare(X) can be provided collectively to all bits Q(X) with a given index X in all files. Each CAM cell storing a bit Q(X) can compare the state of the stored bit Q(X) with the compare bit Compare(X). If there is a match (for example, the bits have the same state), the match signal Match(Y) of the file may remain in the first state (for example, a high logic level). If there is no match, the CAM cell may change the state of the match signal Match(Y) to the second state (for example, a low logic level).

Block 220 may generally be followed by block 225, which describes determining whether there is a match for any of the files in which the accumulator signal is at a high level.
The determination may be made based on the states of the matching signals after the comparison operation described in block 220. There may be a signal indicating whether any of the matching signals indicates a match in a file whose accumulator signal is at a high logic level (for example, AnyMatchF of FIG. 1). Based on this determination, files associated with accumulator signals at a low logic level can be disregarded. For example, each of the accumulator circuits may be commonly coupled to a signal line carrying AnyMatchF that is precharged to the first level as part of block 225. Any one of the accumulator circuits can change the state of the signal line (and therefore AnyMatchF) based on its associated matching signal, as long as the accumulator signal stored in that accumulator circuit is at a high logic level. If the accumulator signal is low, it may not affect the state of the signal AnyMatchF, regardless of whether there is a match. The state of the signal line may therefore indicate whether there is at least one match among the files associated with accumulator signals at a high logic level. If there is not at least one match (e.g., the signal AnyMatchF is high), then block 225 may generally be followed by block 240, as described in more detail herein.

If there is at least one match (for example, the signal AnyMatchF is low), then block 225 may generally be followed by block 230, which describes setting the previous match signal to an active level. The control logic may maintain a previous match signal having an active level indicating that there has been at least one match between a stored bit and a comparison bit at least once during the current extremum search operation. In some embodiments, the previous match signal may be a one-bit signal (e.g., a flag), where the active level is a high logic level.

Block 230 may generally be followed by block 235, which describes changing the state of each accumulator signal based on a match between the comparison bit and the stored bit associated with that accumulator signal. For each file, the accumulator signal can be changed based on whether the bit Q(X) in the file matches the compare bit Compare(X). If there is no match, the accumulator signal can be changed from the first state to the second state. If there is a match, the accumulator signal can be kept in its current state. Note that once the accumulator signal is in the second state, it generally does not reset to the first state until a new extremum search operation is performed. In some embodiments, the state of the Match signal can be written to the accumulator latch to change the state of the accumulator signal, and logic (e.g., feedback) can be used to prevent the accumulator signal from returning to the first state when it is currently in its second state. Block 235 may generally be followed by block 250, as described in more detail herein.

Returning to block 225, if there is no match between the comparison bit and the stored bit Q(X) in any one of the files, then block 225 can generally be followed by block 240, which describes keeping all accumulator signals at their current levels. When there is no match in any of the files (for example, as indicated by the state of the any-match signal), all accumulator signals can be kept in their current state, even for files where the match signal Match(Y) indicates no match while the associated accumulator signal MaxMin(Y) is high.

Block 240 may generally be followed by block 250. Block 235 may also generally be followed by block 250.
Block 250 describes determining whether the last comparison bit from the comparison bit sequence has been provided. For example, in block 250 it may be determined whether the most recently provided comparison bit is the least significant bit Compare(0). If the last comparison bit has been provided, it may indicate that the method 200 has completed providing comparison bits, and block 250 may generally be followed by block 270.

If the last comparison bit has not been provided, then block 250 may generally be followed by block 255, which describes providing the next comparison bit in the sequence. For example, if the sequence counts down from the most significant bit Q(m) to the least significant bit Q(0) and the previous comparison bit was Compare(X), then at block 255 the comparison bit Compare(X-1) may be provided. Block 255 may generally be followed by block 215. The loop from block 215 to block 255 may generally continue until the method 200 finishes providing comparison bits.

Once the method 200 has completed providing the comparison bits, the method 200 may proceed to block 270, which describes determining whether the previous match signal is at the active level. If the previous match signal is not at the active level, it may indicate that there was no comparison operation in which at least one file matched the comparison bit. For example, if all files contain the value 0 (and therefore all of their bits are low) and the extremum search operation is to find the maximum value, then no bit will ever match the comparison bit.

If the previous match signal is at the active level (e.g., there was at least one match), then block 270 may generally be followed by optional block 260, or by block 265 if optional block 260 is not performed. Block 260 describes resolving any tie if more than one accumulator signal is in the first state. An accumulator signal in the first state may indicate that the associated file contains the extremum value. In some applications, it may be desirable to identify only a single file as containing the extremum. Therefore, if multiple accumulator signals are in the first state, during block 260 the control logic circuit and/or the accumulator circuits can select one of them and can change the other accumulator signals to the second state. In one example criterion for selecting a single accumulator signal (e.g., for tie-breaking), the file with the highest index can be selected (e.g., the File(Y) where Y is closest to the maximum value Y=n). Other criteria can be used in other examples.

Block 260 may generally be followed by block 265, which describes determining the file with the extremum value based on the states of the accumulator signals. The control logic circuit may identify the file containing the extremum based on which of the accumulator signals (or which ones, if block 260 is not performed) is in the first state. In some embodiments, the control logic circuit may provide a signal indicating the index of the file containing the extremum (for example, YMaxMin in FIG. 1). In some embodiments, various actions can be performed on the file containing the extremum (or on the files not containing it). For example, after finding the smallest value in the stack, the stack can receive new data and a write signal. Only the accumulator circuit in which the accumulator signal is still high (e.g., the file holding the minimum) can pass the write signal to the CAM cells of its file, and therefore only the minimum value can be rewritten.

Returning to block 270, if the previous match signal is not at the active level (e.g., there was no match), then block 270 may generally be followed by block 275, which describes indicating that an extremum has not been determined. For example, block 275 may involve determining that there is no extremum because all count values are equal. In some embodiments, block 275 may involve providing a signal indicating that the extremum search operation was not successfully completed (e.g., because all files store equal count values). In some embodiments, a file may still be indicated as a placeholder, for example, by following a procedure similar to that described in block 260.
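As a complement to the flow just described, the sketch below folds blocks 205 through 275 into a single routine, again as an illustrative software model rather than the disclosed circuit. It adds the previous-match flag of blocks 230 and 270 and, purely as one example, block 260's highest-index tie-break; the function name method_200 is a label of convenience.

```python
def method_200(values, width, find_max=True):
    """Sketch of blocks 205-275: adds the previous-match flag and a tie-break.

    Returns (index, found): a single surviving file index per block 260's
    example criterion (highest index), or found=False per block 275.
    """
    maxmin = [True] * len(values)             # block 205
    prev_match = False
    compare = 1 if find_max else 0
    for x in range(width - 1, -1, -1):        # blocks 210/255: MSB to LSB
        match = [((v >> x) & 1) == compare for v in values]   # block 220
        if any(m and q for m, q in zip(match, maxmin)):       # block 225
            prev_match = True                                 # block 230
            maxmin = [q and m for q, m in zip(maxmin, match)] # block 235
        # block 240: otherwise keep all accumulator signals as they are
    if not prev_match:                        # block 270 -> block 275
        return None, False
    survivors = [y for y, q in enumerate(maxmin) if q]
    return max(survivors), True               # block 260: highest index wins
```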
Figure 3 is a schematic diagram of a CAM cell according to an embodiment of the present invention. In some embodiments, the CAM cell 300 may implement the CAM cell 104 of FIG. 1. The CAM cell 300 includes a latch portion 312 and a comparator portion 314. The CAM cell 300 may generally use voltages to represent the values of various bits. The CAM cell 300 may include conductive elements (for example, nodes, conductive lines) that carry voltages representing the logical values of bits. For example, a high logic level may be represented by a first voltage (for example, a system voltage, such as VPERI), and a low logic level may be represented by a second voltage (for example, a ground voltage, such as VSS).

The latch portion of the CAM cell 300 can store signals Q and QF representing the state of the stored bit (for example, Q(X) in FIG. 1). When the CAM cell 300 receives a bit of input data represented by the signals D and DF along with the write signal Write, the value of the input data can overwrite the stored bit and become the new stored bit. The CAM cell 300 can receive an external bit represented by the signals X_Compare and XF_Compare (for example, Compare(X) in FIG. 1) and can compare the external bit with the stored bit. Based on the comparison, the CAM cell can change the state of the matching signal BitMatch (for example, Match(Y) of FIG. 1), which can be shared with one or more other CAM cells in the same field of the file.

The latch portion 312 includes a first transistor 316 having a source coupled to a node that provides a voltage VPERI, which can represent a high logic level. The first transistor 316 has a drain coupled to a node 327 having a voltage representing the value of the signal Q, and a gate coupled to a node 329 having a voltage representing the value of the complementary signal QF. The signal Q represents the logic level of the bit stored in the latch portion 312. The first transistor 316 may be a p-type transistor. The latch portion 312 also includes a second transistor 317 having a source coupled to the node providing VPERI, a gate coupled to the node 327, and a drain coupled to the node 329. The second transistor 317 may be a p-type transistor.

The latch portion 312 includes a third transistor 318 having a drain coupled to the node 327, a gate coupled to the node 329, and a source coupled to a node that provides a ground voltage VSS, which can represent a low logic level. The third transistor 318 may be an n-type transistor. The latch portion 312 includes a fourth transistor 319 having a drain coupled to the node 329, a gate coupled to the node 327, and a source coupled to a node that provides the ground voltage VSS.
The fourth transistor 319 may be an n-type transistor. The transistors 316 and 318 may form an inverter circuit, and the transistors 317 and 319 may form another inverter circuit, with the two inverter circuits cross-coupled with each other.

In operation, the first, second, third, and fourth transistors 316 to 319 can operate to store the values of the stored signals Q and QF. The transistors 316 to 319 may work together to couple the Q-carrying node 327 and the QF-carrying node 329 to the nodes that provide the system voltages (e.g., VPERI or VSS) associated with the values of the signals Q and QF. For example, if the stored signal Q is at a high logic level, the inverted signal QF is at a low logic level. The first transistor 316 may be active, and VPERI may be coupled to node 327. The second transistor 317 and the third transistor 318 may be inactive. The fourth transistor 319 may be active and may couple VSS to the node 329. This allows the node 327 to remain at the voltage VPERI representing a high logic level and the node 329 to remain at the voltage VSS representing a low logic level. In another example, if the stored signal Q is at a low logic level, the inverted signal QF may be at a high logic level. Both the first transistor 316 and the fourth transistor 319 may be inactive. The second transistor 317 can be active and can couple VPERI to node 329. The third transistor 318 may also be active and may couple VSS to the node 327. In this way, the stored signals Q and QF can be coupled to respective system voltages corresponding to their current logic levels, which can maintain the current logic value of the stored bit.

The latch portion 312 also includes a fifth transistor 320 and a sixth transistor 321. The transistors 320 and 321 can act as switches that can couple the signal line carrying the input data D and the signal line carrying the inverted input data DF to the nodes 327 and 329 carrying Q and QF, respectively, when the write signal Write is active. The fifth transistor 320 has a gate coupled to the line carrying the Write signal, a drain coupled to the signal D, and a source coupled to the node 327. The sixth transistor 321 has a gate coupled to the Write signal, a drain coupled to the signal DF, and a source coupled to the node 329. Therefore, when the Write signal is at a high level (for example, at a voltage such as VPERI), the transistors 320 and 321 can be active, and the voltages of the signals D and DF can be coupled to the nodes 327 and 329 carrying Q and QF, respectively.

In some embodiments, the first transistor 316 through the sixth transistor 321 may generally all have the same size as each other. For example, the transistors 316 to 321 may have a gate width of about 300 nm. Other sizes of the transistors 316 to 321 can be used in other examples.

The CAM cell 300 also includes a comparator portion 314. The comparator portion 314 can compare the signals Q and QF with the signals X_Compare and XF_Compare. The signal X_Compare may indicate the logic level of the external bit provided to the comparator portion 314. If there is no match between the signals Q and X_Compare (and therefore between QF and XF_Compare), the comparator portion 314 may change the state of the BitMatch signal from a first logic level (e.g., a high logic level) to a second logic level (for example, a low logic level). For example, if the stored bit does not match the external bit, the comparator portion 314 may couple the ground voltage VSS to the signal line carrying the signal BitMatch.
In some embodiments, if there is a match between the stored bit and the external bit, the comparator portion 314 may take no action, leaving the signal BitMatch at its current level. In some embodiments, the signal BitMatch may be precharged to the voltage associated with the high logic level (e.g., VPERI) before the comparison operation. During the precharge operation (for example, block 215 of FIG. 2), both X_Compare and XF_Compare can be kept at a low logic level.

The comparator portion includes a seventh transistor 322, an eighth transistor 323, a ninth transistor 324, and a tenth transistor 325. The seventh transistor 322 and the ninth transistor 324 can implement the first portion 101 of FIG. 1. The eighth transistor 323 and the tenth transistor 325 can implement the second portion 103 of FIG. 1. The seventh transistor 322 includes a drain coupled to the signal BitMatch, a gate coupled to the node 327 (e.g., the signal Q), and a source coupled to the drain of the ninth transistor 324. The ninth transistor 324 also has a gate coupled to the signal XF_Compare and a source coupled to the signal line providing the ground voltage VSS.

The eighth transistor 323 has a drain coupled to the signal BitMatch, a gate coupled to the node 329 (e.g., the signal QF), and a source coupled to the drain of the tenth transistor 325. The tenth transistor 325 has a gate coupled to the signal X_Compare and a source coupled to the ground voltage VSS.

Since the signal Q is complementary to the signal QF, the comparator portion 314 can operate by checking whether the external signal X_Compare matches the signal QF and whether the inverted external signal XF_Compare matches the stored signal Q. If they do match, it indicates that the signal X_Compare does not match the signal Q and the signal XF_Compare does not match the signal QF, and therefore that the external bit does not match the associated stored bit.

The comparator portion 314 can use relatively few components because it only ever changes the signal BitMatch from a known state (e.g., a precharged high logic level) to a low logic level. Therefore, it need not include additional components (for example, additional transistors) to change the logic level of the signal BitMatch from low to high, or from an unknown level to low or high. The comparator portion 314 can use this to provide dynamic logic. For example, the comparator portion 314 has two portions (for example, transistors 322/324 and transistors 323/325), either of which can couple the signal BitMatch to the voltage VSS when there is no match between the stored bit and the external bit. Since only one of the portions is active at a time, the active portion only needs to check the state of the signal Q or QF. Either of the portions is equally capable of changing the signal BitMatch to a low logic level.

In an example operation, if the stored signal Q is at a high logic level (and therefore the signal QF is low) and the external signal X_Compare is also high (and the signal XF_Compare is low), then the external signal matches the stored signal, and the transistors 322 and 325 may be active while the transistors 324 and 323 are inactive. This prevents the ground voltage VSS from coupling to the signal BitMatch. If instead the signal X_Compare is low (e.g., there is no match), then the external signal does not match the stored signal, and the transistors 322 and 324 may be active while the transistors 323 and 325 are inactive.
The transistors 322 and 324, which are both active, can couple the ground voltage VSS to the signal BitMatch.

In another example operation, if the stored signal Q is low (and therefore the signal QF is high), then the transistor 322 may be inactive while the transistor 323 is active. If the external signal X_Compare is low (and XF_Compare is high), then the external signal matches the stored bit, and the transistor 324 is active and the transistor 325 is inactive. If the signal X_Compare is high (and the signal XF_Compare is low), then the external signal does not match the stored signal, and the transistor 324 may be inactive while the transistor 325 is active. Therefore, the signal BitMatch can be coupled to the ground voltage VSS through the active transistors 323 and 325.

In some embodiments, the transistors 322 to 325 of the comparator part 314 may generally all have the same size as each other. In some embodiments, the transistors 322 to 325 of the comparator part 314 may have different sizes from the transistors 316 to 321 of the latch part 312. For example, the transistors 322 to 325 may have a gate width of about 400 nm and a gate length of about 45 nm. Other sizes of the transistors 322 to 325 can be used in other examples.

FIG. 4 is a schematic diagram of an accumulator circuit according to an embodiment of the present invention. In some embodiments, the accumulator circuit 400 may implement the accumulator circuit 106 of FIG. 1. The accumulator circuit 400 includes a latch circuit 408 (for example, the accumulator latch 108 of FIG. 1) that stores the accumulator signal MaxMinY (which, in the case of the example accumulator circuit 400 of FIG. 4, is an accumulator bit). The latch circuit 408 may provide the signal MaxMinY (e.g., the accumulator signal MaxMin(Y) of FIG. 1) based on the stored accumulator bit. The accumulator circuit 400 may receive various input and control signals for determining the state of the accumulator signal stored in the latch circuit 408 during the extreme value search operation.

The accumulator circuit 400 receives the control signal BitxCompPre from the control logic circuit (for example, the control logic circuit 110 of FIG. 1) in common with all other accumulator circuits. The signal BitxCompPre can be used to precharge the node carrying the matching signal BitxMatch_Y to a high voltage level before each comparison as part of a precharge operation (e.g., block 215 of FIG. 2). In some embodiments, the matching signal BitxMatch_Y can implement the matching signal Match(Y) of FIG. 1 and/or the signal BitMatch of FIG. 3. Therefore, whenever a comparison operation is performed (for example, whenever a comparison bit is provided to a file of the stack), the control logic circuit may provide the signal BitxCompPre at a high logic level (for example, a high voltage). The signal BitxCompPre can be 'pulsed' by the control logic (for example, temporarily provided at a high level and then returned to a low level) in order to precharge the node carrying the matching signal BitxMatch_Y.

It may also be desirable to prevent the node carrying the signal BitxMatch_Y from floating between operations. The control signal Standby can be used to indicate that a comparison operation is not currently being performed. The signal Standby can be provided to all accumulator circuits of the stack together.
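As a rough summary of this node control, the following is a sketch under the simplifying assumption that the node is ideal and holds its level when undriven (the per-file gating of the precharge through MaxMinY is described below):

```python
def bitxmatch_node(standby, bitxcomppre, mismatch_pulldown, previous_level):
    """Behavioral sketch of the node carrying BitxMatch_Y."""
    if standby:
        return False           # transistor 433 holds the node at VSS between operations
    if bitxcomppre:
        return True            # precharged toward VPERI before a comparison
    if mismatch_pulldown:
        return False           # evaluation: a mismatching CAM cell grounds the node
    return previous_level      # otherwise the dynamic node keeps its charge
```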
Therefore, the control logic circuit can provide a pulse of the signal BitxCompPre when a comparison operation is about to be performed and provide the signal Standby when no comparison operation is being performed. During the comparison operation, the state of the node carrying the signal BitxMatch_Y may be allowed to change (for example, neither BitxCompPre nor Standby is active).

The node carrying the signal Standby is coupled to the gate of the transistor 433, which has a source coupled to a ground voltage (for example, VSS) and a drain coupled to the node carrying the matching signal BitxMatch_Y. The transistor 433 may be an n-type transistor. Therefore, when the signal Standby is provided, the transistor 433 is active, and the node carrying the signal BitxMatch_Y is coupled to ground to prevent it from floating.

In one embodiment, not shown in FIG. 4, the signal BitxCompPre may be coupled to the gate of the transistor 432 through an inverter circuit. The source of the transistor 432 is coupled to a system voltage (for example, VPERI) higher than the ground voltage VSS, and the drain of the transistor 432 is coupled to the node carrying the matching signal BitxMatch_Y. The transistor 432 may be a p-type transistor. Therefore, in this embodiment, when the signal BitxCompPre is provided at a high level, the transistor 432 may be active and the node carrying BitxMatch_Y may be coupled to the system voltage in order to precharge it.

In some embodiments, such as the embodiment shown in FIG. 4, it may be desirable to allow only the files containing the extreme value to be precharged for comparison operations. For example, after finding the extreme value, it can be useful to compare an external value only with the file containing the extreme value. To achieve this functionality, the signal BitxCompPre is coupled to one of the input terminals of the NAND gate 431. The other input terminal of the NAND gate 431 may be coupled to the output terminal of the OR gate 430, and the OR gate 430 has input terminals coupled to the control signal FindMaxMinOp and the accumulator signal MaxMinY. The control signal FindMaxMinOp can be provided to all accumulator circuits in the stack together. The accumulator signal MaxMinY is the value stored in the latch circuit 408 of a particular accumulator circuit 400, and therefore the value of the accumulator signal MaxMinY may be different in different accumulator circuits.

When it is desired to precharge all the matching signals BitxMatch_Y across the depth of the stack, the signal FindMaxMinOp can be pulsed while the signal BitxCompPre is pulsed. Therefore, the OR gate 430 may provide a high logic output, and therefore both inputs of the NAND gate 431 may be at a high level, causing it to provide a low logic level output to activate the transistor 432.

If it is desired to precharge only the node carrying the matching signal in the file holding the extreme value, the signal FindMaxMinOp can be kept low (for example, not provided) while the signal BitxCompPre is pulsed. The state of the accumulator signal MaxMinY then determines whether the accumulator circuit 400 of a specific file will charge the signal line carrying the matching signal BitxMatch_Y. If the accumulator signal MaxMinY is at a high level (for example, indicating that the accumulator circuit is associated with a file containing the extreme value), then the matching signal BitxMatch_Y can be precharged when the signal BitxCompPre is pulsed.
If the accumulator signal is at a low level (for example, indicating that the accumulator circuit is not associated with the extreme value), then the matching signal BitxMatch_Y is not precharged. In some embodiments, the signal FindMaxMinOp can be omitted, and since the signal MaxMinY is initially set to a high level, the signal MaxMinY can be used (together with BitxCompPre) to activate the transistor 432.

As discussed in FIGS. 1 to 3, after the signal line carrying the match signal BitxMatch_Y is precharged, a comparison operation can be performed in which a comparison bit is provided and is compared with one of the bits in the file associated with the accumulator circuit 400. After the comparison operation, the node carrying the matching signal BitxMatch_Y may have a voltage indicating the result of the comparison, which is a high voltage (for example, VPERI) if the comparison bit matches the stored bit or a low voltage (for example, VSS) if there is no match. The node carrying the matching signal BitxMatch_Y is coupled to the input terminal D of the latch circuit 408. When the latch terminals LAT and LATf of the latch circuit 408 are triggered, the value of the matching signal BitxMatch_Y can be stored as the value of the accumulator signal in the latch circuit 408.

The accumulator circuit 400 and the other accumulator circuits receive the control signal BitxMatchAccumSample in common. After each of the comparison operations (e.g., a delay time after pulsing the signal BitxCompPre), the control logic circuit may pulse the signal BitxMatchAccumSample. The control signal BitxMatchAccumSample may partially determine whether and when the latch circuit 408 captures the value of the matching signal BitxMatch_Y and saves it as the value of the accumulator signal MaxMinY.

The signal BitxMatchAccumSample is coupled to an input terminal of the NAND gate 436. The other input terminal of the NAND gate 436 is coupled to the node carrying the accumulator signal MaxMinY stored in the latch. When the signal BitxMatchAccumSample is pulsed, the current value of the matching signal BitxMatch_Y can be captured in the latch circuit 408 only if the current value of the accumulator signal MaxMinY is still at a high level. If the accumulator signal MaxMinY has changed to a low level (e.g., due to a mismatch in a previous comparison operation), then the latch circuit 408 is prevented from capturing future values of the match signal BitxMatch_Y. The output terminal of the NAND gate 436 is coupled to the latch input LAT of the latch circuit 408 and is also coupled, through the inverter circuit 437, to the inverting latch input LATf.

The latch circuit 408 has a set input Sf coupled to the control signal FindMaxMinOp_InitF. The signal FindMaxMinOp_InitF can be commonly coupled to all accumulator circuits of the stack. The signal FindMaxMinOp_InitF can be used to set all the latch circuits 408 in the different accumulator circuits to store a high level as the accumulator signal MaxMinY before an extreme value search operation (e.g., as part of block 205 of FIG. 2). The signal FindMaxMinOp_InitF can be pulsed from a high level to a low level and then back to a high level. This may cause all the latch circuits 408 to be set to store a high level before starting the extreme value search operation.
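A boolean sketch of the two gating conditions just described (the function names are illustrative; the gates compute exactly the conditions in the text):

```python
def precharge_enable(bitxcomppre, find_max_min_op, max_min_y):
    # OR gate 430 feeding NAND gate 431; the NAND output is active-low, so the
    # p-type precharge transistor 432 turns on only when BitxCompPre is pulsed
    # and either the global FindMaxMinOp or this file's MaxMinY is high.
    return bitxcomppre and (find_max_min_op or max_min_y)

def capture_enable(bitx_match_accum_sample, max_min_y):
    # NAND gate 436: the latch circuit 408 samples BitxMatch_Y only while the
    # stored accumulator signal MaxMinY is still high.
    return bitx_match_accum_sample and max_min_y
```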
Since all the latch circuits 408 are initialized to store a high level as the accumulator signal, all the latch circuits 408 can initially respond to the signal BitxMatchAccumSample, until a latch is disqualified by its match signal BitxMatch_Y being at a low level after a comparison operation.

All accumulator circuits may be commonly coupled to the signal line carrying the signal AnyYbitMatchF, which may be the signal AnyMatchF of FIG. 1 in some embodiments. The signal AnyYbitMatchF may indicate whether any of the matching signals BitxMatch_Y is at a high logic level after a comparison operation. In some embodiments, the state of the signal AnyYbitMatchF may be determined using dynamic logic (for example, similar to the matching signal BitMatch of FIG. 3). For example, after each comparison operation, the signal AnyYbitMatchF may be precharged to a high level (e.g., a system voltage such as VPERI), and each of the accumulator circuits 400 may change the state of the signal AnyYbitMatchF to a low level when the matching signal BitxMatch_Y of its file is at a high level (for example, indicating a match).

The signal line carrying the signal AnyYbitMatchF can be coupled to a driver circuit (not shown) that can precharge the signal line before the comparison operation. The driver circuit may precharge the signal line in response to the control signal CrossRegCompPreF provided by the control logic. The signal CrossRegCompPreF can be pulsed to a low level to precharge the signal line carrying AnyYbitMatchF. In some embodiments, the driver circuit may include a transistor having a gate coupled to CrossRegCompPreF, a source coupled to a system voltage (such as VPERI), and a drain coupled to the signal line carrying AnyYbitMatchF. The transistor may be a p-type transistor, so that when the signal CrossRegCompPreF is pulsed low, the transistor is active and the signal line is coupled to VPERI to precharge it.

Each of the accumulator circuits 400 has a transistor 434 with a source coupled to the signal line carrying AnyYbitMatchF and a drain coupled to the source of the transistor 443. The drain of the transistor 443 is coupled to the source of the transistor 435. The drain of the transistor 435 is coupled to a ground voltage (e.g., VSS). The transistors 434, 435, and 443 may be n-type transistors. The gate of the transistor 434 is coupled to the control signal CrossRegComp, which can be commonly provided to all accumulator circuits. The signal CrossRegComp may be pulsed to a high level by the control logic to determine whether any of the matching signals BitxMatch_Y is at a high level after the comparison operation (e.g., as part of block 225 of FIG. 2). The gate of the transistor 443 is coupled to the signal MaxMinY. The gate of the transistor 435 is coupled to the node carrying the matching signal BitxMatch_Y. Therefore, when the signal CrossRegComp is pulsed, the transistor 434 is activated. If the matching signal BitxMatch_Y is high, then the transistor 435 is activated. If the accumulator signal MaxMinY is at a high level, then the transistor 443 is active. If all of the transistors 434, 435, and 443 are activated, the signal line carrying AnyYbitMatchF is coupled to the ground voltage VSS.
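The shared line can be modeled as an active-low wired-OR. A sketch, assuming `files` is a list of (MaxMinY, BitxMatch_Y) pairs:

```python
def any_ybit_match_f(cross_reg_comp, files):
    # The line is precharged high via CrossRegCompPreF; any file whose series
    # stack of transistors 434/443/435 is fully on pulls it to VSS.
    line = True
    if cross_reg_comp:
        for max_min_y, bitx_match_y in files:
            if max_min_y and bitx_match_y:
                line = False    # at least one still-qualified file matched
    return line
```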
Therefore, the state of the signal AnyYbitMatchF can be changed only when the accumulator signal MaxMinY is at a high level, the matching signal BitxMatch_Y is at a high level, and the command signal CrossRegComp is provided.

For each comparison operation during the extreme value search operation, the signal line AnyYbitMatchF can be precharged to a high level, and can be pulled to a low level when any of the matching signals BitxMatch_Y is at a high level (for example, indicating a match). The control logic circuit may use the state of the signal AnyYbitMatchF to determine whether the state of the accumulator signals should be changed (e.g., as described in blocks 225 to 245 of FIG. 2). For example, in response to the signal AnyYbitMatchF being at a low level (e.g., indicating at least one match), the control logic may provide a pulse of the signal BitxMatchAccumSample. In response to the signal AnyYbitMatchF being at a high level (e.g., indicating no match), the control logic may skip providing the signal BitxMatchAccumSample for a given comparison operation.

In some embodiments, different accumulator circuits 400 may be connected together in a 'daisy chain' manner. This may allow the accumulator circuits 400 and the control logic circuit to work together to resolve any tie, so that only one accumulator latch 408 in one of the accumulator circuits 400 maintains a high value. For example, the accumulator circuits 400 may jointly receive the control signal ClrLessSigAccums indicating that a tie should be resolved. Each accumulator circuit 400 can also receive the signals AccumYp1_Clr and MaxMinYp1 from the previous accumulator circuit 400. The signal MaxMinYp1 can be the accumulator signal MaxMinY of the previous accumulator circuit. The signal AccumYp1_Clr may be the signal AccumsLessThanY_Clr from the previous accumulator circuit (described herein).

The accumulator circuit includes an OR gate 438 having an input terminal coupled to the signal AccumYp1_Clr and an input terminal coupled to the signal MaxMinYp1. The output terminal of the OR gate 438 is coupled to one of the input terminals of the NAND gate 439. The other input terminal of the NAND gate 439 is coupled to the control signal ClrLessSigAccums. The first accumulator circuit in the daisy chain may have its inputs AccumYp1_Clr and MaxMinYp1 coupled to the ground voltage, so that those signals are initialized to a low level. When the control signal ClrLessSigAccums is pulsed, if AccumYp1_Clr or MaxMinYp1 is at a high level, the inverted reset terminal Rf of the latch circuit 408 receives a low signal (for example, the ground voltage) from the NAND gate 439, and the value stored in the latch circuit 408 is reset (for example, the signal MaxMinY is reset to a low level). The output terminal of the NAND gate 439 is passed through the inverter circuit 440 to become the signal AccumsLessThanY_Clr, which is provided to the next accumulator circuit in the daisy chain (where it becomes the signal AccumYp1_Clr).

The direction in which the accumulator circuits 400 are coupled together may determine the criterion for breaking a tie. For example, the accumulator circuit associated with File(Y) may receive the signals AccumYp1_Clr and MaxMinYp1 from the accumulator circuit associated with File(Y+1), and so on. This results in a bias towards the accumulator circuit with the highest index, breaking any tie.
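The daisy chain reduces to a simple scan. The following sketch (function name illustrative) clears every high accumulator bit after the first one encountered in chain order, i.e., from the highest-indexed file downward; it mirrors the worked example given below:

```python
def clr_less_sig_accums(max_min):
    """Sketch of the ClrLessSigAccums daisy chain; `max_min` is ordered from
    the highest-indexed file to the lowest, mirroring the chain direction."""
    accum_yp1_clr = False       # the first file sees both chain inputs tied low
    max_min_yp1 = False
    for i in range(len(max_min)):
        reset = accum_yp1_clr or max_min_yp1   # OR gate 438, gated by NAND gate 439
        if reset:
            max_min[i] = False                 # latch reset through terminal Rf
        accum_yp1_clr = reset                  # AccumsLessThanY_Clr to the next file
        max_min_yp1 = max_min[i]               # the latch output is forwarded on
    return max_min

# [MaxMin2, MaxMin1, MaxMin0]: only the highest-indexed high bit survives.
assert clr_less_sig_accums([True, False, True]) == [True, False, False]
```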
Note that in some embodiments, the accumulator circuits can be daisy-chained in the opposite direction (e.g., File(0) can provide a signal to File(1)); this only changes the direction along which the tie is broken (e.g., from the lowest-numbered file to the highest-numbered file).

In an example operation, after all comparison bits have been provided (for example, at block 260 of FIG. 2), there may be three accumulator circuits each storing a corresponding accumulator signal: MaxMin2 is high, MaxMin1 is low, and MaxMin0 is high. In this example, the accumulator circuits are daisy-chained from the highest index to the lowest index. When the signal ClrLessSigAccums is pulsed, the first accumulator circuit receives low logic inputs on both AccumYp1_Clr and MaxMinYp1 (because that is how those signals are initialized). Therefore, the accumulator signal MaxMin2 is not reset and remains at a high value. The second accumulator circuit receives the signal AccumYp1_Clr at a low logic level (because the previous accumulator circuit was not reset) and receives MaxMinYp1 at a high level, because MaxMinYp1 = MaxMin2. Therefore, the second accumulator circuit receives the reset signal (for example, the output of the NAND gate 439 is at a low level), but the accumulator signal MaxMin1 is already at a low level and remains at a low level. The third accumulator circuit receives the signal AccumYp1_Clr at a high level (because the previous circuit did receive the reset signal, even though it caused no change) and MaxMinYp1 at a low level (for example, because MaxMin1 is low). Therefore, the third accumulator circuit is reset (because at least one of AccumYp1_Clr and MaxMinYp1 is high) and the third accumulator signal MaxMin0 is changed to a low level. Therefore, after the signal ClrLessSigAccums is pulsed, MaxMin2 is high, MaxMin1 is low, and MaxMin0 is low.

In some embodiments, the accumulator circuit 400 may control whether data can be written to the associated file based on the state of the accumulator signal. Each of the accumulator circuits 400 can collectively receive the signal CountWriteEn, which can be coupled to an input terminal of the NAND gate 441. The other input terminal of the NAND gate 441 may be coupled to the output terminal Q of the latch circuit 408 that provides the accumulator signal MaxMinY. The output of the NAND gate 441 can be provided through the inverter circuit 442 as the signal CountWriteY. The signal CountWriteY may be a write signal indicating that the value in the file may be rewritten (for example, the signal Write of FIG. 3). Due to the NAND gate 441 and the inverter 442, when the signal CountWriteEn is provided, the signal CountWriteY may be high only for an accumulator circuit storing the accumulator signal at a high logic level (for example, indicating an extreme value).

An example environment where storing numerical values and identifying extreme values can be useful is a semiconductor memory device. A memory device can be used to store information in an array of memory cells, the array containing a plurality of memory cells, each of which stores one or more bits of information. Memory cells can be organized at the intersections of rows (word lines) and columns (bit lines).
During various operations, the memory device can access one or more memory cells along a specified word line or bit line by providing the row and/or column addresses of the specified word line and bit line.

An example application of the stack, accumulator circuits, and control logic circuit of the present invention is the refresh operation in a memory device. The information in a memory cell may decay over time and may need to be refreshed periodically (for example, by rewriting the original value of the information to the memory cell). Repeated accesses to a particular row of memory (e.g., an attacker row) can cause the rate of decay in adjacent rows (e.g., victim rows) to increase, for example due to electromagnetic coupling between the rows. This is commonly referred to as 'hammering' a row, or a row hammer event. In order to prevent information from being lost due to row hammering, it may be necessary to identify attacker rows so that the corresponding victim rows can be refreshed ('row hammer refresh' or RHR). The row addresses of accessed rows can be stored and compared with new row addresses to determine whether one or more rows require RHR operations.

The access counts for different rows of the memory can be stored in a stack (such as the stack 100 described in FIG. 1). The row address may be stored in one field of each file, and the count value associated with the row address may be stored in another field of the file. Whenever the row address is accessed, its count value can be updated (for example, incremented). Based on the count values, the victim rows associated with the stored row addresses can be refreshed. For example, the maximum count value can be selected by performing an extreme value search operation for the maximum value (for example, as described in FIG. 2). The victim rows associated with the attacker row associated with the maximum value can then be refreshed. In another example, in some cases it may be necessary to replace a row address in the stack; an extreme value search operation may be performed to find the minimum value in the stack, and the row address associated with the minimum value may be rewritten. The functionality described in FIG. 4, in which write signals are supplied only to the file associated with the extreme value, may be useful in this example.

FIG. 5 is a block diagram showing the overall configuration of a semiconductor device according to at least one embodiment of the present invention. The semiconductor device 500 may be a semiconductor memory device, such as a DRAM device integrated on a single semiconductor chip.

The semiconductor device 500 includes a memory array 568. The memory array 568 is shown as containing multiple memory banks. In the embodiment of FIG. 5, the memory array 568 is shown as including eight memory banks BANK0 to BANK7. More or fewer memory banks may be included in the memory array 568 in other embodiments. Each memory bank includes a plurality of word lines WL, a plurality of bit lines BL and /BL, and a plurality of memory cells MC arranged at the intersections of the plurality of word lines WL and the plurality of bit lines BL and /BL. The selection of a word line WL is performed by the row decoder 558, and the selection of bit lines BL and /BL is performed by the column decoder 560. In the embodiment of FIG. 5, the row decoder 558 includes a corresponding row decoder for each bank, and the column decoder 560 includes a corresponding column decoder for each bank.
The bit lines BL and /BL are coupled to corresponding sense amplifiers (SAMP). Read data from the bit line BL or /BL is amplified by the sense amplifier SAMP and transferred to the read/write amplifier 570 via a complementary local data line (LIOT/B), a transfer gate (TG), and a complementary main data line (MIOT/B). Conversely, write data output from the read/write amplifier 570 is transmitted to the sense amplifier SAMP via the complementary main data line MIOT/B, the transfer gate TG, and the complementary local data line LIOT/B, and is written to the memory cell MC coupled to the bit line BL or /BL.

The semiconductor device 500 may employ a plurality of external terminals, including command and address (C/A) terminals coupled to a command and address bus to receive commands and addresses and a CS signal, clock terminals to receive clocks CK and /CK, data terminals DQ for providing data, and power supply terminals for receiving power supply potentials VDD, VSS, VDDQ, and VSSQ.

The clock terminals are supplied with the external clocks CK and /CK, which are supplied to the input circuit 562. The external clocks may be complementary. The input circuit 562 generates an internal clock ICLK based on the CK and /CK clocks. The ICLK clock is provided to the command decoder 556 and to the internal clock generator 564. The internal clock generator 564 provides various internal clocks LCLK based on the ICLK clock. The LCLK clocks can be used to time the operation of various internal circuits. The internal data clock LCLK is provided to the input/output circuit 572 to time the operation of circuits included in the input/output circuit 572, for example, to a data receiver to time the reception of write data.

The C/A terminals can be supplied with a memory address. The memory address supplied to the C/A terminals is transferred to the address decoder 554 via the command/address input circuit 552. The address decoder 554 receives the address and supplies a decoded row address XADD to the row decoder 558 and a decoded column address YADD to the column decoder 560. The address decoder 554 may also supply a decoded bank address BADD, which may indicate the bank of the memory array 568 containing the decoded row address XADD and column address YADD. The C/A terminals can also be supplied with commands. Examples of commands include timing commands for controlling the timing of various operations, access commands for accessing the memory (such as read commands for performing read operations and write commands for performing write operations), and other commands and operations. An access command may be associated with one or more of a row address XADD, column address YADD, and bank address BADD to indicate the memory cells to be accessed.

Commands may be provided as internal command signals to the command decoder 556 via the command/address input circuit 552. The command decoder 556 includes circuits for decoding the internal command signals to generate various internal signals and commands for performing operations. For example, the command decoder 556 may provide row command signals to select a word line and column command signals to select a bit line.

The device 500 may receive an access command that is a read command. When a read command is received and a bank address, a row address, and a column address are supplied in time with the read command, data is read from the memory cells in the memory array 568 corresponding to the row address and column address.
The read command is received by the command decoder 556, which provides internal commands so that the read data from the memory array 568 is provided to the read/write amplifier 570. The read data is output to the outside at the data terminals DQ via the input/output circuit 572.

The device 500 may receive an access command that is a write command. When a write command is received and a bank address, a row address, and a column address are supplied in time with the write command, write data supplied to the data terminals DQ is written into the memory cells of the memory array 568 corresponding to the row address and column address. The write command is received by the command decoder 556, which provides internal commands so that the write data is received by data receivers in the input/output circuit 572. A write clock may also be provided to the external clock terminals to time the reception of the write data by the data receivers of the input/output circuit 572. The write data is supplied to the read/write amplifier 570 via the input/output circuit 572 and is supplied by the read/write amplifier 570 to the memory array 568 to be written into the memory cells MC.

The device 500 may also receive commands that cause it to perform refresh operations. The refresh signal AREF may be a pulse signal that is activated when the command decoder 556 receives a signal indicating an auto-refresh command. In some embodiments, the auto-refresh command may be issued to the memory device 500 from the outside. In some embodiments, the auto-refresh command may be generated periodically by components of the device. In some embodiments, the refresh signal AREF may also be activated when an external signal indicates a self-refresh entry command. The refresh signal AREF can be activated once immediately after the command is input, and thereafter can be activated cyclically according to the desired internal timing. Thus, refresh operations can continue automatically. A self-refresh exit command may cause the automatic activation of the refresh signal AREF to stop and the device to return to the idle state.

The refresh signal AREF is supplied to the refresh address control circuit 566. The refresh address control circuit 566 supplies a refresh row address RXADD to the row decoder 558, and the row decoder 558 can refresh the word line WL indicated by the refresh row address RXADD. The refresh address control circuit 566 can control the timing of refresh operations and can generate and provide the refresh address RXADD. The refresh address control circuit 566 may be controlled to change the details of the refresh address RXADD (for example, how the refresh address is calculated, or the timing of the refresh addresses), or may operate based on internal logic.

The refresh address control circuit 566 may selectively output either a target refresh address (for example, a victim address) or an automatic refresh address (auto-refresh address) as the refresh address RXADD. The automatic refresh addresses may be a sequence of addresses provided based on activations of the auto-refresh signal AREF.
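One way to picture the interleaving of automatic and target refresh addresses, which the following paragraphs describe in terms of time slots, is the generator below. This is a hedged sketch: the one-slot-in-N cadence and the simple queue are illustrative assumptions, not the patent's specification.

```python
import itertools

def refresh_address_stream(auto_addresses, target_queue, steal_every=4):
    """Yield refresh addresses RXADD, occasionally 'stealing' a slot that
    would otherwise go to the automatic refresh address sequence."""
    auto_cycle = itertools.cycle(auto_addresses)
    for slot in itertools.count(1):
        if slot % steal_every == 0 and target_queue:
            yield target_queue.pop(0)   # target (victim) refresh address
        else:
            yield next(auto_cycle)      # next address in the auto-refresh sequence
```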
The refresh address control circuit 566 can cycle through the automatic refresh address sequence at a rate determined by AREF.

The refresh address control circuit 566 may also determine a target refresh address, which is an address requiring refresh (for example, a victim address corresponding to a victim row), based on the access pattern of nearby addresses (for example, an attacker address corresponding to an attacker row) in the memory array 568. The refresh address control circuit 566 may selectively use one or more signals of the device 500 to calculate the target refresh address RXADD. For example, the refresh address RXADD can be calculated based on the row addresses XADD provided by the address decoder. The refresh address control circuit 566 may sample the current value of the row address XADD provided by the address decoder 554 and determine the target refresh address based on one or more of the sampled addresses.

The refresh address RXADD may be provided based on the timing of the refresh signal AREF. The refresh address control circuit 566 may have time slots corresponding to the timing of AREF and may provide one or more refresh addresses RXADD during each time slot. In some embodiments, the target refresh address may be issued by 'stealing' a time slot that would otherwise be assigned to an automatic refresh address. In some embodiments, certain time slots may be reserved for target refresh addresses, and the refresh address control circuit 566 may determine whether to provide a target refresh address, to provide no address during that time slot, or to provide an automatic refresh address instead during that time slot.

The target refresh address may be based on characteristics of the row addresses XADD received from the address decoder 554 over time. The refresh address control circuit 566 can sample the current row address XADD to determine its characteristics over time. The sampling can occur intermittently, with each sample taken based on random or semi-random timing. The access counts associated with the received row addresses XADD may be stored in a stack (e.g., the stack 100 of FIG. 1). In some embodiments, an access count that exceeds a threshold may cause its victim addresses to be calculated and refreshed. In some embodiments, an extremum search operation can be performed (for example, as described in FIG. 2), and the address with the largest access count can be identified as an attacker.

The refresh address control circuit 566 may use different methods to calculate the target refresh address based on the sampled row addresses XADD. For example, the refresh address control circuit 566 may determine whether a given row is an attacker address and then calculate and provide an address corresponding to a victim address of that attacker address as the target refresh address. In some embodiments, more than one victim address may correspond to a given attacker address. In this case, the refresh address control circuit may queue multiple target refresh addresses and provide them sequentially when it determines that target refresh addresses should be provided. The refresh address control circuit 566 may provide the target refresh address immediately, or may queue the target refresh address to be provided at a later time (e.g., in the next time slot available for a target refresh).

The power supply terminals are supplied with power supply potentials VDD and VSS.
The power supply potentials VDD and VSS are supplied to the internal voltage generator circuit 574. The internal voltage generator circuit 574 generates various internal potentials VPP, VOD, VARY, VPERI, and the like based on the power supply potentials VDD and VSS supplied to the power supply terminals. The internal potential VPP is mainly used in the row decoder 558, the internal potentials VOD and VARY are mainly used in the sense amplifiers SAMP included in the memory array 568, and the internal potential VPERI is used in many peripheral circuit blocks.

The power supply terminals are also supplied with power supply potentials VDDQ and VSSQ. The power supply potentials VDDQ and VSSQ are supplied to the input/output circuit 572. In an embodiment of the present invention, the power supply potentials VDDQ and VSSQ supplied to the power supply terminals may be the same potentials as the power supply potentials VDD and VSS supplied to the power supply terminals. In another embodiment of the present invention, the power supply potentials VDDQ and VSSQ supplied to the power supply terminals may be different from the power supply potentials VDD and VSS supplied to the power supply terminals. The power supply potentials VDDQ and VSSQ supplied to the power supply terminals are used for the input/output circuit 572 so that power supply noise generated by the input/output circuit 572 does not propagate to the other circuit blocks.

FIG. 6 is a block diagram of a refresh address control circuit according to an embodiment of the present invention. Dotted lines are shown to indicate that, in some embodiments, each of the components (e.g., the refresh address control circuit 666 and the row decoder 658) may correspond to a specific bank 668 of the memory, and these components may be repeated for each of the banks of the memory. In some embodiments, the components shown within the dotted lines may be positioned in each of the memory banks 668. Therefore, there may be multiple refresh address control circuits 666 and row decoders 658. For the sake of brevity, only the components for a single memory bank will be described.

The DRAM interface 676 can provide one or more signals to the refresh address control circuit 666 and the row decoder 658. The refresh address control circuit 666 may include a sample timing generator 680, an attacker detector circuit 682, a row hammer refresh (RHR) status control 686, and a refresh address generator 684. The DRAM interface 676 can provide one or more control signals, such as the auto-refresh signal AREF and the row address XADD. An optional sample timing generator 680 generates the sampling signal ArmSample.

In some embodiments, the attacker detector circuit 682 may receive each row address XADD associated with each access operation. In some embodiments, the attacker detector circuit 682 may sample the current row address XADD in response to an activation of ArmSample.

The attacker detector circuit 682 may store the received row addresses XADD and determine whether the current row address XADD is an attacker address based on one or more previously stored addresses. The attacker detector circuit 682 may include a stack (e.g., the stack 100 of FIG. 1) that stores row addresses and access counts (e.g., values) associated with those row addresses.
The attacker detector circuit 682 may provide a matching address HitXADD to the refresh address generator 684 based on the count values associated with one or more of the stored addresses.

The RHR status control 686 can control the timing of target refresh operations. The RHR status control 686 may provide a signal RHR to indicate that a row hammer refresh (e.g., a refresh of the victim rows corresponding to an identified attacker row) should occur. The RHR status control 686 may also provide an internal refresh signal IREF to indicate that an automatic refresh should occur. In response to an activation of RHR, the refresh address generator 684 may provide a refresh address RXADD, which may be an automatic refresh address or may be the address of one or more victim rows corresponding to the attacker row indicated by the matching address HitXADD. The row decoder 658 may perform a target refresh operation in response to the refresh address RXADD and the row hammer refresh signal RHR. The row decoder 658 may perform an automatic refresh operation based on the refresh address RXADD and the internal refresh signal IREF. In some embodiments, the row decoder 658 may be coupled to the auto-refresh signal AREF provided by the DRAM interface 676, and the internal refresh signal IREF may not be used.

The DRAM interface 676 may represent one or more components, such as components that provide signals to the memory bank 668. In some embodiments, the DRAM interface 676 may represent a memory controller coupled to the semiconductor memory device (e.g., the device 500 of FIG. 5). In some embodiments, the DRAM interface 676 may represent components such as the command/address input circuit 552, the address decoder 554, and/or the command decoder 556 of FIG. 5. The DRAM interface 676 can provide the row address XADD, the auto-refresh signal AREF, an activation signal ACT, and a precharge signal Pre. The auto-refresh signal AREF may be a periodic signal that can indicate when an automatic refresh operation should occur. The activation signal ACT may be provided to activate a given bank 668 of the memory. The row address XADD may be a signal including multiple bits (which may be transmitted serially or in parallel) and may correspond to a specific row of a bank (for example, a bank activated by ACT/Pre).

In the example embodiment of FIG. 6, the attacker detector circuit 682 uses the sampling signal ArmSample to determine when it should check the value of the row address XADD. The sample timing generator 680 provides the sampling signal ArmSample, which can alternate between a low logic level and a high logic level. An activation of ArmSample can be a 'pulse', where ArmSample is raised to a high logic level and then returned to a low logic level. The sample timing generator 680 can provide a sequence of ArmSample pulses. Each pulse can be separated from the next pulse by a time interval. The sample timing generator 680 can vary the time interval randomly (and/or semi-randomly and/or pseudo-randomly).

The attacker detector circuit 682 may receive the row address XADD from the DRAM interface 676 and ArmSample from the sample timing generator 680. The row address XADD may change as the DRAM interface 676 directs access operations (e.g., read and write operations) to different rows of the memory cell array (e.g., the memory array 568 of FIG. 5).
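A sketch of this sampling trigger follows; the interval bounds are illustrative assumptions, not values from the patent.

```python
import random

class SampleTimingGenerator:
    """Sketch of the ArmSample generator: pulses separated by pseudo-random gaps."""

    def __init__(self, min_gap=4, max_gap=64):
        self.min_gap, self.max_gap = min_gap, max_gap
        self.countdown = random.randint(min_gap, max_gap)

    def tick(self):
        """Advance one access cycle; return True when ArmSample should pulse."""
        self.countdown -= 1
        if self.countdown > 0:
            return False
        self.countdown = random.randint(self.min_gap, self.max_gap)
        return True
```

The attacker detector would then sample the current value of XADD whenever tick() returns True.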
Whenever the attacker detector circuit 682 receives an activation (e.g., a pulse) of ArmSample, the attacker detector circuit 682 may sample the current value of XADD.

In response to an activation of ArmSample, the attacker detector circuit 682 may determine whether one or more rows are attacker rows based on the sampled row addresses XADD, and may provide an identified attacker row as the matching address HitXADD. As part of this determination, the attacker detector circuit 682 may record (e.g., by latching and/or storing in the stack) the current value of XADD in response to the activation of ArmSample. The current value of XADD can be compared with the addresses previously stored in the attacker detector circuit 682 (e.g., the addresses stored in the stack) to determine the access pattern of the sampled addresses over time. If the attacker detector circuit 682 determines that the current row address XADD is being repeatedly accessed (for example, is an attacker row), an activation of ArmSample may also cause the attacker detector circuit 682 to provide the address of the attacker row as the matching address HitXADD. In some embodiments, the matching address (e.g., attacker address) HitXADD may be stored in a latch circuit for later retrieval by the refresh address generator 684.

For example, the attacker detector circuit 682 may store the values of the sampled addresses in the stack and may have a counter associated with each of the stored addresses. When ArmSample is activated, if the current row address XADD matches one of the stored addresses, the value of the counter can be updated (e.g., incremented). In response to an activation of ArmSample, the attacker detector circuit 682 may provide the address associated with the largest counter as the matching address HitXADD. An extremum search operation (e.g., as described in FIG. 2) can be used to identify the maximum value. In other examples, other methods of identifying attacker addresses can be used.

The RHR status control 686 can receive the auto-refresh signal AREF and provide the row hammer refresh signal RHR. The auto-refresh signal AREF can be generated periodically and can be used to control the timing of refresh operations. The memory device may perform a sequence of automatic refresh operations in order to periodically refresh the rows of the memory device. The RHR signal can be generated to indicate that the device should refresh a specific target row (e.g., a victim row) instead of an address from the automatic refresh address sequence. The RHR status control 686 may use internal logic to provide the RHR signal. In some embodiments, the RHR status control 686 may provide the RHR signal based on a certain number of activations of AREF (e.g., every fourth activation of AREF). The RHR status control 686 can also provide the internal refresh signal IREF, which can indicate that an automatic refresh operation should occur. In some embodiments, the signals RHR and IREF may be generated such that they are not active at the same time (for example, the two are not at a high logic level simultaneously).

The refresh address generator 684 can receive the row hammer refresh signal RHR and the matching address HitXADD. The matching address HitXADD can indicate an attacker row. The refresh address generator 684 may determine the locations of one or more victim rows based on the matching address HitXADD and provide them as the refresh address RXADD.
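A sketch of that calculation, matching the adjacency relationships described next (the distance-two option is included; real devices may also need to map logical row addresses to physical row order):

```python
def victim_addresses(hit_xadd, include_distance_two=False):
    # Rows physically adjacent to the attacker row, and optionally the rows
    # two away, are provided as target refresh addresses RXADD.
    victims = [hit_xadd - 1, hit_xadd + 1]
    if include_distance_two:
        victims += [hit_xadd - 2, hit_xadd + 2]
    return victims
```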
In some embodiments, the victim rows may include the rows that are physically adjacent to the attacker row (for example, HitXADD+1 and HitXADD-1). In some embodiments, the victim rows may also include rows that are two rows away from the attacker row (for example, HitXADD+2 and HitXADD-2). In other examples, other relationships between victim rows and identified attacker rows can be used.

The refresh address generator 684 may determine the value of the refresh address RXADD based on the row hammer refresh signal RHR. In some embodiments, when the signal RHR is not active, the refresh address generator 684 may provide automatic refresh addresses from the automatic refresh address sequence. When the signal RHR is active, the refresh address generator 684 can provide a target refresh address (for example, a victim address) as the refresh address RXADD.

The row decoder 658 may perform one or more operations on the memory array (not shown) based on the received signals and addresses. For example, in response to the activation signal ACT and the row address XADD (with IREF and RHR at a low logic level), the row decoder 658 may direct one or more access operations (for example, read operations) to the specified row address XADD. In response to the RHR signal being active, the row decoder 658 may refresh the refresh address RXADD.

FIG. 7 is a block diagram of an attacker detector circuit according to the present invention. In some embodiments, the attacker detector circuit 700 may implement the attacker detector circuit 682 of FIG. 6. The attacker detector circuit 700 includes a stack 790 and a stack logic circuit 792. In some embodiments, the stack 790 may implement the stack 100 of FIG. 1.

The stack 790 includes a number of files 702, and each of the files 702 includes a row address field 788 storing a row address XADD(Y) and an associated count value field 789 storing a count value Count(Y). Each file 702 is associated with an accumulator circuit 706. The files 702 and accumulator circuits 706 are coupled to control logic 710 that can be used to perform extremum search operations (e.g., as described in FIG. 2). Although not shown in FIG. 7 for clarity, the stack 790 may generally use signals similar to those discussed with respect to the stack 100 of FIG. 1 (e.g., Match(Y), Compare(X), etc.).

The stack 790 is coupled to the stack logic circuit 792, which can be used to provide signals and control the operation of the stack 790. In some embodiments, the control logic circuit 710 that manages the extreme value search operations may be included as part of the stack logic circuit 792. The row address field 788 may include a number of bits (for example, a number of CAM cells) based on the number of bits in the row address. For example, in some embodiments, the row address field 788 may be 16 bits wide. The count value field 789 may have a number of bits based on the largest count value it is expected to track. In some embodiments, for example, the count value field 789 may be 11 bits wide. In some embodiments, the stack 790 may have a depth of 100 (e.g., the number of files 702). In other embodiments, other widths and depths of the stack 790 can be used.

In some embodiments, the stack 790 may include additional fields in the files 702 that may be used to store additional information associated with the stored row address. For example, each file 702 may include an empty flag, which may be used to indicate whether the data in the file 702 is ready to be rewritten.
The empty flag may be a single bit, with a first state indicating that the file is 'full' and a second state indicating that the file is empty (for example, the information in the file is ready to be rewritten). When a row address and count are removed from the stack 790 (e.g., after refreshing its victims), instead of deleting the data in the file 702, the empty flag may simply be set to the second state.

When a row address XADD is received by the attacker detector circuit 700, it can be stored in the address latch 793. In some embodiments, the stack logic circuit 792 can store the current value of the row address XADD in the address latch 793 when the signal ArmSample is provided. The address latch 793 may include a number of bits equal to the number of bits of the row address XADD. Therefore, the address latch 793 may have the same width as the row address field 788 of the stack 790. The address latch 793 may include several latch circuits to store the bits of the row address. In some embodiments, the address latch 793 may include CAM cells (for example, the CAM cell 300 of FIG. 3) and may be similar in structure to the files 702 of the stack 790.

The address comparator 794 can compare the row address XADD stored in the address latch 793 with the addresses in the stack 790. The stack logic circuit 792 can perform a comparison operation using the CAM cells of the stack 790. For example, as discussed with respect to FIG. 1, each CAM cell of each row address field 788 may be commonly coupled to a signal line carrying a matching signal. When comparing an address with the row address fields 788, the match signal lines can be precharged (for example, by providing the signals BitxCompPre and FindMaxMinOp of FIG. 4). The stack logic circuit 792 may then provide the address as comparison signals to all row address fields 788 in common. The first bit of each of the row address fields 788 can receive the first bit of the row address XADD for comparison, the second bit of each of the row address fields 788 can receive the second bit of the row address XADD, and so on. After the CAM cells perform the comparison operation, a match signal can remain at the precharged level (for example, a high logic level) only if each bit of the comparison address matches each bit of the stored address XADD(Y). The address comparator 794 may determine whether there is a match in any file in which the accumulator signal is at a high level, based on the states of the matching signals (for example, the voltages on the matching signal lines) and the accumulator signals.

If there is a match between the received address XADD and one of the stored addresses XADD(Y), the count value Count(Y) associated with the matched stored address XADD(Y) may be updated. The count value Count(Y) can be read into the work counter 795, and the work counter 795 can update the value of the count value Count(Y) and then write it back to the same file 702 associated with the matched stored address XADD(Y). For example, the work counter 795 may increment the count value Count(Y).

In some embodiments, the components of the accumulator circuits 706 may be used to provide additional functionality. For example, the components of the accumulator circuit 706 (for example, as described in detail for the accumulator circuit 400 of FIG. 4) can also be used to serially read out the contents of the count value CAM cells for further operations, such as loading the value into the work counter 795 and incrementing it.
For example, when the address comparator indicates a match between the row address XADD and one of the stored addresses 788, the result may be coupled on the corresponding match signal (e.g., BitxMatch) to the associated accumulator circuit 706, and then the signal BitxMatchAccumSample can be pulsed. Therefore, only the accumulator signal MaxMinY of that file will be high, which thereafter allows only that file to precharge its corresponding matching signal BitxMatch_Y high when the global BitxCompPre is asserted. Then only the comparison signal X_Compare of the selected bit of the count value CAM array will be asserted (for example, at a high logic level). The signals CrossRegCompPreF and CrossRegComp will then be used by the control logic 710, as previously described (e.g., in FIG. 4), to determine the content of the selected bit of the count value field 789, which is returned to the control logic 710 and loaded into the work counter 795. This process is then repeated for each bit. After the work counter 795 is incremented, the updated count is written back in parallel to the count value field 789 of the selected file (for example, by providing the write signal CountWriteY).

If there is no match between the received address XADD and any of the stored addresses XADD(Y), then the address XADD may be added to the stack 790. This can be done by providing the write signal, along with the bits of XADD, to one of the files in the stack 790. If there is space in the stack 790, the received row address XADD can be stored in an empty file 702, and the work counter can set the associated count value in the file 702 to an initial value (e.g., 0 or 1).

If there is no match between the received address XADD and any of the stored addresses XADD(Y) and the stack 790 is full, then the stack logic circuit 792 can replace one of the currently stored addresses in the stack 790 with the received address. In some embodiments, the control logic 710 can be used to perform an extreme value search operation to find the minimum of the stored count values Count(Y). The stack logic circuit 792 may then provide the new address XADD to all row address fields 788 in common and the main write signal (for example, CountWriteEn of FIG. 4) to all accumulator circuits 706 in common. The accumulator circuits 706 may provide the write signal only to the file 702 associated with the minimum count value. Since, in some embodiments, the accumulator circuits 706 can also break ties, the write signal will be provided to only one file 702, and therefore the address XADD can be written to the file 702 containing the minimum value. The identified minimum count value can then be reset to the initial value.

The stack logic circuit 792 can identify and provide the matching address HitXADD based on the count values Count(Y) stored in the stack. Generally speaking, when one of the stored addresses XADD(Y) is provided as the matching address HitXADD, the stored address XADD(Y) can be removed from the stack 790 (or the empty flag of the file 702 can be set) and the count value Count(Y) can be reset to the initial value.

In some embodiments, the stack logic circuit 792 may include a threshold comparator circuit 797 that may compare an updated count value (e.g., after the count value is updated by the work counter 795) with a threshold value, as sketched below.
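Pulling these steps together, the stack update just described can be summarized in software form. This is a hedged sketch: the stack is modeled as a plain dict, the names are illustrative, and the CAM performs the equivalent steps in hardware.

```python
def record_access(stack, xadd, capacity, threshold):
    """Update the access count for a sampled row address XADD.

    Returns the address to provide as HitXADD when its count exceeds the
    threshold, else None. `stack` maps row addresses to count values.
    """
    if xadd in stack:
        stack[xadd] += 1                        # work counter increments Count(Y)
    elif len(stack) < capacity:
        stack[xadd] = 1                         # store in an empty file
    else:
        lowest = min(stack, key=stack.get)      # extremum search for the minimum
        del stack[lowest]
        stack[xadd] = 1                         # rewrite the file holding the minimum
    if stack[xadd] > threshold:                 # threshold comparator circuit 797
        del stack[xadd]                         # remove (or mark empty) and reset
        return xadd                             # provided as matching address HitXADD
    return None
```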
If the updated count value is greater than the threshold, the stored address XADD(Y) associated with the updated count value may be provided as the matching address HitXADD.

In some embodiments, the stack logic circuit 792 may provide the stored row address XADD(Y) associated with the maximum count value as the matching address HitXADD. For example, the control logic 710 may perform an extremum search operation to locate the maximum value and may then provide the index of the file 702 containing the maximum value and the associated row address.

In some embodiments, the position of the current maximum and/or minimum may be indicated by a pointer, which may be operated by the pointer logic circuit 796. For example, the control logic 710 may perform an extremum search to locate the maximum value and may then return the index of the file 702 containing the maximum value. The pointer logic circuit 796 can direct the maximum value pointer to indicate the file 702 containing the maximum value. When the count values change, a new extreme value search operation can be performed to update the maximum. When a matching address needs to be provided, the maximum pointer can be used to quickly supply the address associated with the current maximum.

Of course, it should be understood that any one of the examples, embodiments, or processes described herein can be combined with one or more other examples, embodiments, and/or processes, or can be separated and/or performed in separate devices or device portions in accordance with the present systems, devices, and methods.

Finally, the above discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Therefore, although the present system has been described in particular detail with reference to exemplary embodiments, it should also be understood that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings should be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
In some aspects of the present disclosure, a method for touch-panel processing is provided. The method includes receiving a plurality of sensor signals from a touch panel, wherein each one of the plurality of sensor signals corresponds to a respective one of a plurality of channels of the touch panel. The method also includes, for each one of the received sensor signals, converting the received sensor signal into one or more respective digital values. The method further includes, for each one of the received sensor signals, performing digital processing on the one or more respective digital values using a respective one of a plurality of processing engines to generate one or more respective processed digital values. The method further includes performing additional processing on the processed digital values.
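As a behavioral illustration of this pipeline (a sketch, not the claimed implementation: the per-channel engine here simply averages its samples, standing in for the demodulation, decoding, averaging, or filtering the disclosure contemplates, and all names are illustrative):

```python
def process_touch_frame(channel_samples, engine_program, aggregate):
    # Each channel's digital values are handled by its own processing engine;
    # all engines run the same instruction set, conceptually in parallel.
    processed = [engine_program(values) for values in channel_samples]
    # A processor then performs additional processing on the processed values.
    return aggregate(processed)

# Example: average per channel, then report the strongest channel index.
frame = [[10, 12, 11], [3, 4, 5], [8, 9, 7]]
averages = process_touch_frame(frame,
                               engine_program=lambda v: sum(v) / len(v),
                               aggregate=lambda p: p)
strongest = max(range(len(averages)), key=averages.__getitem__)
```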
1. A system for touch panel processing, comprising:
a plurality of processing engines, wherein each processing engine of the plurality of processing engines is configured to receive one or more corresponding digital values corresponding to a corresponding sensor signal from a corresponding one of a plurality of channels of a touch panel, and to perform digital processing on the one or more corresponding digital values to generate one or more corresponding processed digital values;
a controller configured to program the plurality of processing engines to perform the digital processing in parallel by inputting a set of instructions to each of the plurality of processing engines; and
a processor configured to receive the processed digital values from the plurality of processing engines, and to perform additional processing on the received processed digital values.

2. The system of claim 1, wherein the digital processing performed by each processing engine of the plurality of processing engines includes at least one of the following: demodulation, Walsh decoding, averaging, or filtering.

3. The system of claim 2, wherein the additional processing performed by the processor includes calculating positions of a plurality of user fingers on the touch panel based on the received processed digital values.

4. The system of claim 2, wherein each processing engine of the plurality of processing engines includes a corresponding arithmetic logic unit (ALU) configured to perform any one of a plurality of operations, and the set of instructions includes an operation instruction that selects one of the plurality of operations.

5. The system of claim 4, wherein the plurality of operations include at least one of addition or subtraction.

6. The system of claim 1, wherein the digital processing performed by each processing engine of the plurality of processing engines comprises subtracting a corresponding digital baseline value from the one or more corresponding digital values.

7. The system of claim 1, wherein, for each processing engine of the plurality of processing engines, the system further comprises:
a corresponding receiver configured to receive the corresponding sensor signal from the touch panel; and
a corresponding analog-to-digital converter configured to convert the received corresponding sensor signal into the corresponding one or more digital values.

8. The system of claim 1, wherein each channel of the plurality of channels of the touch panel corresponds to a corresponding receiving line of the touch panel.

9. The system of claim 1, wherein each channel of the plurality of channels of the touch panel corresponds to a corresponding pair of receiving lines of the touch panel.

10. A system for touch panel processing, comprising:
a plurality of receivers, wherein each receiver of the plurality of receivers is configured to receive a corresponding sensor signal from a touch panel;
a plurality of analog-to-digital converters, wherein each analog-to-digital converter of the plurality of analog-to-digital converters is configured to convert the sensor signal received by a corresponding one of the plurality of receivers into one or more corresponding digital values;
a plurality of processing engines, wherein each processing engine of the plurality of processing engines is configured to receive the one or more corresponding digital values from a corresponding one of the plurality of analog-to-digital converters, and to perform digital processing on the one or more corresponding digital values to generate one or more corresponding processed digital values;
a controller configured to program each of the plurality of processing engines to perform the digital processing; and
a processor configured to receive the processed digital values from the plurality of processing engines, and to perform additional processing on the received processed digital values;
wherein each receiver of the plurality of receivers includes a corresponding switched capacitor network and a corresponding amplifier, and wherein the controller is configured to control switches in the switched capacitor network of each receiver of the plurality of receivers to operate each of the plurality of receivers in one of a plurality of different receiver modes.

11. The system of claim 10, wherein the plurality of different receiver modes include two or more of the following: a differential mutual capacitance sensing mode, a single-ended mutual capacitance sensing mode, a differential self-capacitance sensing mode, a single-ended self-capacitance sensing mode, and a charge amplifier mode.

12. The system of claim 10, wherein the controller includes a decoder configured to receive a phase instruction specifying one or more node connections of the switched capacitor network of each of the plurality of receivers, and configured to convert the phase instruction into a plurality of switch control signals that control the switches in the switched capacitor network of each of the plurality of receivers to realize the one or more node connections specified by the phase instruction.

13. A method for touch panel processing, comprising:
receiving a plurality of sensor signals from a touch panel, wherein each sensor signal of the plurality of sensor signals corresponds to a corresponding one of a plurality of channels of the touch panel;
for each of the received sensor signals, converting the received sensor signal into one or more corresponding digital values;
for each of the received sensor signals, performing digital processing on the one or more corresponding digital values using a corresponding one of a plurality of processing engines to generate one or more corresponding processed digital values;
performing additional processing on the processed digital values; and
programming the plurality of processing engines to perform the digital processing in parallel, wherein programming the plurality of processing engines includes inputting a set of instructions to each of the plurality of processing engines.

14. The method of claim 13, wherein the digital processing performed for each received sensor signal includes at least one of the following: demodulation, Walsh decoding, averaging, or filtering.

15. The method of claim 14, wherein the additional processing performed on the processed digital values includes calculating positions of a plurality of user fingers on the touch panel based on the received processed digital values.

16. The method of claim 14, wherein each processing engine of the plurality of processing engines includes a corresponding arithmetic logic unit (ALU) configured to perform any one of a plurality of operations, and the set of instructions includes an operation instruction that selects one of the plurality of operations.

17. The method of claim 16, wherein the plurality of operations include at least one of addition or subtraction.

18. The method of claim 13, wherein performing the digital processing on the one or more corresponding digital values includes subtracting a corresponding digital baseline value from the one or more corresponding digital values.

19. The method of claim 13, wherein each channel of the plurality of channels of the touch panel corresponds to a corresponding receiving line of the touch panel.

20. The method of claim 13, wherein each channel of the plurality of channels of the touch panel corresponds to a corresponding pair of receiving lines of the touch panel.
Highly configurable front end of touch controller

Cross-reference to related applications
This application claims priority to and the benefit of provisional application No. 62/441,000, filed in the U.S. Patent and Trademark Office on December 30, 2016, and non-provisional application No. 15/470,731, filed in the U.S. Patent and Trademark Office on March 27, 2017, the entire contents of which are incorporated herein by reference.

Technical field
Aspects of the present disclosure relate generally to touch panels, and more specifically to configurable touch panel interfaces.

Background
A touch panel (also called a touch screen) includes a grid (array) of touch sensors covering a display. The touch sensors may employ capacitance sensing, in which a user's finger is detected by detecting a change in the capacitance (for example, mutual capacitance and/or self-capacitance) of a sensor caused by the user's finger.

Summary of the invention
The following presents a brief summary of one or more embodiments in order to provide a basic understanding of such embodiments. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor delineate the scope of any or all embodiments. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later.

A first aspect relates to a system. The system includes a plurality of processing engines, wherein each of the plurality of processing engines is configured to receive one or more corresponding digital values corresponding to a corresponding sensor signal from a touch panel, and to perform digital processing on the one or more corresponding digital values to generate one or more corresponding processed digital values. The system also includes a controller configured to program each of the plurality of processing engines to perform the digital processing, and a processor configured to receive the processed digital values from the plurality of processing engines and to perform additional processing on the received processed digital values.

A second aspect relates to a method for touch panel processing. The method includes receiving a plurality of sensor signals from a touch panel, wherein each sensor signal of the plurality of sensor signals corresponds to a corresponding one of a plurality of channels of the touch panel. The method also includes, for each of the received sensor signals, converting the received sensor signal into one or more corresponding digital values. The method further includes, for each of the received sensor signals, performing digital processing on the one or more corresponding digital values using a corresponding one of a plurality of processing engines to generate one or more corresponding processed digital values. The method also includes performing additional processing on the processed digital values.

To the accomplishment of the foregoing and related ends, one or more embodiments include the features hereinafter fully described and particularly pointed out in the claims. The following description and drawings set forth in detail certain illustrative aspects of one or more embodiments.
However, these aspects are indicative of but a few of the various ways in which the principles of the various embodiments may be employed, and the described embodiments are intended to include all such aspects and their equivalents.

Description of the drawings
FIG. 1 shows an example of a touch panel and a configurable interface for the touch panel according to certain aspects of the present disclosure.
FIG. 2 shows an example of two adjacent configurable receivers in the interface according to certain aspects of the present disclosure.
FIG. 3 shows an example of a configurable receiver including a switched capacitor network according to certain aspects of the present disclosure.
FIG. 4A shows an example of an input capacitor coupled to a receiving line of a touch panel to sample a voltage on the receiving line according to certain aspects of the present disclosure.
FIG. 4B shows an example of an input capacitor coupled to a feedback capacitor of an amplifier according to certain aspects of the present disclosure.
FIG. 5 shows an example of a switchable capacitor bank according to certain aspects of the present disclosure.
FIG. 6 shows an example of a receiver in a single-ended sensing mode configuration according to certain aspects of the present disclosure.
FIG. 7A shows an example of a capacitor charged using a reference voltage according to certain aspects of the present disclosure.
FIG. 7B shows an example of the capacitor of FIG. 7A providing charge to the capacitor of a receiving line of the touch panel according to certain aspects of the present disclosure.
FIG. 8 is a timeline showing an example of the voltage of a receiving line capacitor during charge pumping according to certain aspects of the present disclosure.
FIG. 9 shows another example of a receiver in a single-ended sensing mode configuration according to certain aspects of the present disclosure.
FIG. 10 is a timeline showing another example of the voltage of a receiving line capacitor during charge pumping according to certain aspects of the present disclosure.
FIG. 11 shows an example of a receiver in a charge amplifier mode configuration according to certain aspects of the present disclosure.
FIG. 12A shows an example of capacitor connections for a charge amplifier mode configuration according to certain aspects of the present disclosure.
FIG. 12B illustrates a technique for removing the baseline charge of a receiver in a charge amplifier mode configuration according to certain aspects of the present disclosure.
FIG. 13 shows an example of a processing architecture for a touch panel interface according to certain aspects of the present disclosure.
FIG. 14 illustrates an exemplary embodiment of a processing engine according to certain aspects of the present disclosure.
FIG. 15 shows an example of a SIMD controller according to certain aspects of the present disclosure.
FIG. 16 shows an example of a power management architecture according to certain aspects of the present disclosure.
FIG. 17 is a flowchart showing an example of a touch panel processing method according to certain aspects of the present disclosure.
FIG. 18 is a flowchart showing another example of a touch panel processing method according to certain aspects of the present disclosure.

Detailed description
The detailed description set forth below in conjunction with the accompanying drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details to provide a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

A touch panel (also called a touch screen) includes a grid (array) of touch sensors covering a display. The touch sensors may employ capacitance sensing, in which a user's finger is detected by detecting a change in the capacitance (for example, mutual capacitance and/or self-capacitance) of a sensor caused by the user's finger.

A touch panel usually interfaces with a host processor via an interface. The interface may include an analog front end and a digital back end. The analog front end is configured to drive the touch sensors, receive signals from the touch sensors, and perform analog operations (for example, amplification) on the signals. The output signals of the analog front end are converted into digital signals by analog-to-digital converters (ADCs), and these digital signals are input to the digital back end. The digital back end performs digital operations on the digital signals and outputs the resulting signals to a main processor (for example, a processor on a mobile device incorporating the touch panel).

A configurable interface that can be programmed to interface with different touch panel designs is desirable. Such an interface could be used with different touch panel designs without having to develop a custom interface for each touch panel design, thereby reducing development costs.

In this regard, FIG. 1 shows an example of a configurable (programmable) interface 112 that connects the touch panel 110 with a main processor (not shown) according to certain aspects of the present disclosure. The touch panel 110 includes a plurality of transmission lines Tx1 to Tx7 and a plurality of receiving lines Rx1 to Rx5, where the receiving lines Rx1 to Rx5 may be arranged substantially perpendicular to the transmission lines Tx1 to Tx7. The mutual capacitance between each transmission line and each receiving line forms a touch sensor on the touch panel 110. Each of the touch sensors is depicted as a mutual capacitor (denoted "Cm") in FIG. 1. In this example, a user's finger can be detected by detecting a change in the mutual capacitance of one or more of the touch sensors caused by the user's finger, as discussed further below. It should be appreciated that the number of transmission lines and receiving lines shown in FIG. 1 is exemplary, and the number of transmission lines and receiving lines may vary depending on, for example, the size of the display screen.

The interface 112 includes a plurality of slices 145, where each slice may include an analog front end 115, an analog-to-digital converter (ADC) 135, and a processing engine (PE) 140. For simplicity, only one slice 145 is shown in FIG. 1. The analog front end 115 of each slice 145 may include a receiver 120 and a transmitter 130.
The receiver 120 includes an amplifier 122 and a switched capacitor network 124 composed of switches and capacitors. The receiver 120 is configured to receive sensor signals from one or two of the receiving lines (also referred to as channels) of the touch panel 110. The transmitter 130 is configured to drive one or more of the transmission lines (for example, using a square wave signal, a sinusoidal signal, or another type of signal).

The ADC 135 in each slice 145 converts the output signal of the corresponding receiver into a digital signal, which is input to the corresponding PE 140. The corresponding PE may include one or more programmable arithmetic logic units (ALUs), which perform digital processing on the corresponding digital signal. The digital processing may include one or more of the following: fast Fourier transform (FFT), demodulation, filtering, averaging, Walsh decoding, baseline subtraction, and so on. The resulting signal is output to a main processor (for example, a processor on a mobile device incorporating the touch panel). The PE 140 may also digitally process the signals used to drive one or more of the transmission lines on the touch panel 110 via the corresponding transmitter 130.

The interface 112 includes a single instruction multiple data (SIMD) controller 150 for controlling the analog front ends 115 and the PEs 140 of the slices 145. For example, the SIMD controller 150 may control the receivers 120 in multiple slices according to a single instruction so that they perform the same analog processing on their corresponding sensor signals in parallel. In this example, the SIMD controller 150 may control the switching sequence of the switches in the switched capacitor network 124 of each receiver 120 to perform the desired operation, as discussed further below. The SIMD controller 150 can configure the receivers to operate in any of a variety of different receiver modes (e.g., a differential receiver mode, a single-ended receiver mode, etc.) according to the requirements of a specific touch panel design. The SIMD controller 150 can also select a subset of the receiving channels of the touch panel by selecting the corresponding receivers.

The SIMD controller 150 also controls (programs) the PEs 140 of the slices to perform one or more digital operations (FFT, demodulation, etc.) on the corresponding digital signals. In this regard, each PE can be configured to perform any one of a variety of different digital operations, and the SIMD controller 150 can configure one or more of the PEs to perform one or more of the digital operations according to a specific touch panel design and/or the requirements of the main processor.

Therefore, the SIMD controller 150 controls the analog front ends 115 and the PEs 140 of the slices of the interface 112, and allows the interface 112 to be programmed for connection with different touch panel designs. The SIMD controller 150 can be programmed through firmware to adapt to touch panel requirements.

As discussed above, the SIMD controller 150 may configure the receivers 120 to operate in any of a plurality of receiver modes (e.g., a differential receiver mode, a single-ended receiver mode, etc.). Examples of the receiver modes are now described according to certain aspects of the present disclosure.

FIG. 2 shows an example of two of the receivers in the interface. One of the receivers is denoted with the suffix "a", and the other receiver is denoted with the suffix "b". As shown in FIG. 2, each of the receivers 120a and 120b is coupled to two adjacent receiving lines of the touch panel 110.
In this example, the receiver 120a is coupled to the adjacent receiving lines RX(n-1) and RX(n), and the receiver 120b is coupled to the adjacent receiving lines RX(n) and RX(n+1). This allows the SIMD controller 150 to operate the receiver 120a in the differential mode to measure the difference between the capacitances of two touch sensors on the adjacent receiving lines RX(n) and RX(n-1), and to operate the receiver 120b in the differential mode to measure the difference between the capacitances of two touch sensors on the adjacent receiving lines RX(n) and RX(n+1). Although only two of the receivers are shown in FIG. 2 for ease of illustration, it should be appreciated that each of the receivers in the interface can be coupled to two adjacent receiving lines and operated in the differential mode. Operating the receivers 120a and 120b in the differential mode allows each receiver to cancel noise (e.g., touch panel noise) common to the two receiving lines input to the receiver, as discussed further below.

The operation of the receiver 120a in the differential mutual capacitance sensing mode is now discussed with reference to FIG. 3, according to certain aspects. It should be understood that each of the other receivers may also operate in the differential mutual capacitance sensing mode in the manner discussed below.

In the example shown in FIG. 3, the switched capacitor network 124a includes input capacitors Cin1 and Cin2 and feedback capacitors Cfb1 and Cfb2. As discussed further below, in the differential mode, the input capacitor Cin1 is used to sample the voltage on the receiving line RX(n-1), and the input capacitor Cin2 is used to sample the voltage on the receiving line RX(n). In this regard, each input capacitor may also be referred to as a sampling capacitor. The feedback capacitor Cfb1 is coupled between the first input of the amplifier 122a and the first output of the amplifier 122a, and the feedback capacitor Cfb2 is coupled between the second input of the amplifier 122a and the second output of the amplifier 122a. In one example, the SIMD controller 150 can control the switching of the switches in the switched capacitor network 124a so that the receiver functions as a switched capacitor differential amplifier.

In the example in FIG. 3, the mutual capacitance of one of the touch sensors on the receiving line RX(n-1) is modeled as a mutual capacitor Cm1, and the mutual capacitance of one of the touch sensors on the receiving line RX(n) is modeled as a mutual capacitor Cm2. FIG. 3 also shows the self-capacitance of the receiving line RX(n-1) modeled as the self-capacitor Csrx1 and the self-capacitance of the receiving line RX(n) modeled as the self-capacitor Csrx2. The self-capacitance of a receiving line can come from the capacitance between the receiving line and the ground plane. FIG. 3 also shows the self-capacitance of the transmission line driving the touch sensors modeled as the mutual capacitors Cm1 and Cm2; the self-capacitance of the transmission line is modeled as the self-capacitor Cstx.

In operation, the SIMD controller 150 switches the switches in the switched capacitor network 124a according to a switching sequence including a sampling phase and a charge transfer phase. In both of these phases, the switches 312(1), 314(1), 312(2), and 314(2) can be open (off).
As discussed further below, these switches can be used to operate the receiver 120a in other modes.

In the sampling phase, the controller 150 closes (turns on) the switches 316(1), 316(2), 324(1), and 324(2), and opens (turns off) the switches 322(1), 322(2), 318(1), and 318(2). This allows each of the input capacitors Cin1 and Cin2 to sample the voltage on the corresponding receiving line, as discussed further below.

FIG. 4A shows the connection between the input capacitor Cin1 and the receiving line RX(n-1) during the sampling phase. In this example, the touch sensor (modeled as the mutual capacitor Cm1) is driven with a square wave signal by one of the transmitters 130 shown in FIG. 1. The mutual capacitor Cm1 and the receiving-line self-capacitor Csrx1 form a capacitive voltage divider, in which a portion of the voltage of the square wave signal appears on the receiving-line self-capacitor Csrx1. The voltage on the self-capacitor Csrx1 depends on the capacitance of the mutual capacitor Cm1 and the capacitance of the self-capacitor Csrx1. Generally, a user's finger reduces the capacitance of the mutual capacitor Cm1 by interfering with the electric field between the electrodes of the mutual capacitor Cm1. Since the presence of the user's finger affects the capacitance of the mutual capacitor Cm1, it also affects the voltage on the self-capacitor Csrx1. Therefore, the voltage on the self-capacitor Csrx1 can be used to detect the presence of the user's finger.

The input capacitor Cin1 samples the voltage on the self-capacitor Csrx1. Assuming that the capacitance of the input capacitor Cin1 is much smaller than the capacitance of the self-capacitor Csrx1, the input capacitor Cin1 can be charged to a voltage approximately equal to the voltage on the self-capacitor Csrx1. In the example in FIG. 4A, the input capacitor Cin1 is coupled between the receiving line RX(n-1) and the fixed reference voltage Vr2. The reference voltage Vr2 may be approximately equal to a virtual ground or a DC reference voltage.

The input capacitor Cin2 samples the voltage on the self-capacitor Csrx2 in a similar manner. Therefore, for brevity, a detailed discussion of the input capacitor Cin2 during the sampling phase is omitted. During the sampling phase, the controller 150 may also close (turn on) the switches 340(1) and 340(2) to reset the feedback capacitors Cfb1 and Cfb2.

Returning to FIG. 3, in the charge transfer phase, the controller 150 opens (turns off) the switches 316(1), 316(2), 324(1), 324(2), 340(1), and 340(2), and closes (turns on) the switches 322(1), 322(2), 318(1), and 318(2). This allows the charge on each of the input capacitors Cin1 and Cin2 to be transferred to the corresponding feedback capacitor Cfb1 or Cfb2, as discussed further below.

FIG. 4B shows the connection between the input capacitor Cin1 and the feedback capacitor Cfb1 during the charge transfer phase. In this example, the input capacitor Cin1 is coupled between the reference voltage Vr2 and the first input of the amplifier 122a, and the feedback capacitor Cfb1 is coupled between the first input of the amplifier 122a and the first output of the amplifier 122a. The charge transfer results in an output voltage on the first output of the amplifier 122a, where the output voltage is a function of the voltage on the self-capacitor Csrx1 sampled by the input capacitor Cin1.
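To make this relationship concrete, the sketch below works through the charge arithmetic for one sample/transfer cycle of this half of the receiver, ignoring amplifier offsets and polarity conventions. The component values and the treatment of Vr2 as 0 V are illustrative assumptions, not values from the disclosure; the gain ratio Cin/Cfb follows from the statement further below that the receiver gain is the ratio of the input capacitance to the feedback capacitance.

```python
# Illustrative charge-transfer arithmetic for one half of the receiver:
# the charge sampled onto Cin1 is dumped onto Cfb1, so the output swing
# scales by the gain Cin1/Cfb1. All values are made up for illustration.

cin1_pf = 4.0   # input (sampling) capacitor, in picofarads
cfb1_pf = 1.0   # feedback capacitor, in picofarads
vr2 = 0.0       # sampling reference, taken as 0 V here for simplicity

def output_swing(v_srx1):
    """Amplifier output change produced by one sample/transfer cycle."""
    q = cin1_pf * (v_srx1 - vr2)   # charge sampled onto Cin1 (pC)
    return q / cfb1_pf             # voltage developed across Cfb1 (V)

# A finger lowers Cm1, which lowers the divided voltage on Csrx1,
# which in turn lowers the amplifier output swing:
print(output_swing(0.30))   # no finger present -> 1.2 V
print(output_swing(0.27))   # finger present    -> 1.08 V
```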
Since the voltage on the self-capacitor Csrx1 depends on the capacitance of the mutual capacitor Cm1 (which is affected by the presence of the user's finger), the voltage at the first output of the amplifier 122a depends on the presence of the user's finger.

During the charge transfer phase, charge is also transferred from the input capacitor Cin2 to the feedback capacitor Cfb2 in a manner similar to the manner in which charge is transferred from the input capacitor Cin1 to the feedback capacitor Cfb1. This results in a voltage on the second output of the amplifier 122a, where the output voltage is a function of the voltage on the receiving-line self-capacitor Csrx2 sampled by the input capacitor Cin2. Since the voltage on the self-capacitor Csrx2 depends on the capacitance of the mutual capacitor Cm2 (which is affected by the presence of the user's finger), the voltage at the second output of the amplifier 122a depends on the presence of the user's finger.

Therefore, the difference between the voltages at the first output and the second output of the amplifier 122a (i.e., the differential output voltage of the amplifier) is a function of the difference between the capacitances of the mutual capacitors Cm1 and Cm2 (which model the mutual capacitances of adjacent touch sensors).

The ADC 135a converts the differential output voltage of the amplifier 122a into a digital signal (digital code), which represents the difference between the capacitances of the two adjacent touch sensors. The ADC 135a can output the digital signal (digital code) to the corresponding PE 140 for digital processing, as discussed further below.

The difference between the capacitances of two adjacent touch sensors can be used to detect the presence of a user's finger. This is because the surface of the user's finger is curved, so the mutual capacitances of adjacent sensors are changed (affected) by different amounts.

Operating the receiver 120a in the differential mode has the benefit of canceling noise common to the receiving lines RX(n-1) and RX(n). The common noise may be caused by noise generated by the display driver IC, the noise of the human body itself, and so on. Canceling common noise in the analog front end eliminates the need for the corresponding PE 140 to execute computationally intensive algorithms to filter out the noise in the digital domain.

The switching sequence may also include a reset phase to define the DC voltage on the touch panel 110 before the next transmission signal (e.g., transmission pulse). The reset phase can be performed after or concurrently with the charge transfer phase discussed above. During the reset phase, the switches 312(1), 312(2), 316(1), and 316(2) may be closed to short the corresponding receiving lines to the reference voltage Vr1. The switches 312(1), 312(2), 316(1), and 316(2) can then be opened before the next transmission signal (e.g., transmission pulse). Alternatively, the switches 322(1), 322(2), 316(1), and 316(2) may be closed during the reset phase to short the corresponding receiving lines to the reference voltage Vr2. In this example, the switches 322(1), 322(2), 316(1), and 316(2) may be opened before the next transmission signal (e.g., transmission pulse).
It should be appreciated that switches (not shown) other than the switches discussed above may be used to short the receiving lines to the reference voltage Vr1 or the reference voltage Vr2 during the reset phase.

The gain of the receiver 120a can be given by the ratio of the capacitance of the input capacitor to the capacitance of the feedback capacitor. In the example in FIG. 3, each of the input capacitors Cin1 and Cin2 is implemented using a variable capacitor, and each of the feedback capacitors Cfb1 and Cfb2 is implemented using a variable capacitor. This allows the controller 150 to adjust the gain of the receiver 120a to a desired gain by adjusting the capacitance of the input capacitors Cin1 and Cin2 and/or the capacitance of the feedback capacitors Cfb1 and Cfb2.

In some aspects, each of the input capacitors Cin1 and Cin2 may be implemented using a switchable capacitor bank 505, an example of which is shown in FIG. 5. In this example, the capacitor bank 505 includes a plurality of capacitors Cs1 to Csm arranged in parallel, a first set of control switches 510(1) to 510(m), and a second set of control switches 520(1) to 520(m). The capacitor bank 505 also includes a first terminal 550 and a second terminal 560. Each control switch in the first set 510(1) to 510(m) is coupled between a corresponding one of the capacitors Cs1 to Csm and the first terminal 550, and each control switch in the second set 520(1) to 520(m) is coupled between a corresponding one of the capacitors Cs1 to Csm and the second terminal 560.

Each of the capacitors Cs1 to Csm is coupled between the first terminal 550 and the second terminal 560 when its corresponding pair of control switches is turned on, and is decoupled from the first terminal 550 and the second terminal 560 when its corresponding pair of control switches is turned off. For example, the capacitor Cs1 is coupled between the first terminal 550 and the second terminal 560 when the control switches 510(1) and 520(1) are turned on, and is decoupled from the first terminal 550 and the second terminal 560 when the control switches 510(1) and 520(1) are turned off. In this regard, a capacitor can be considered enabled when its corresponding pair of control switches is turned on, and disabled when its corresponding pair of control switches is turned off.

The capacitance of the capacitor bank 505 is approximately equal to the sum of the capacitances of the capacitors in the bank that are enabled at a given time. Since the control switches determine which capacitors are enabled at a given time, the controller 150 can control (adjust) the capacitance of the capacitor bank 505 by controlling which control switches are turned on and off at a given time. For example, the controller 150 may increase the capacitance of the capacitor bank 505 by enabling more capacitors in the bank 505.

As discussed above, each of the input capacitors Cin1 and Cin2 can be implemented using the switchable capacitor bank 505 shown in FIG. 5. This allows the controller 150 to adjust the capacitance of each of the input capacitors Cin1 and Cin2 by controlling which control switches in the corresponding capacitor bank are turned on and off. Each of the feedback capacitors Cfb1 and Cfb2 may also be implemented using a switchable capacitor bank similar to the switchable capacitor bank 505 shown in FIG. 5.
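The bank's behavior is easy to model: parallel capacitors add, so the effective capacitance is the sum of the enabled units, and the gain Cin/Cfb follows from whichever units are enabled in the input and feedback banks. The sketch below is a minimal software model under that reading; the class, the bitmask encoding, and the unit values are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of the switchable capacitor bank 505: the effective
# capacitance is the sum of the capacitors whose control-switch pairs
# (510(i)/520(i)) are closed. Unit values are illustrative.

class CapacitorBank:
    def __init__(self, unit_caps_pf):
        self.unit_caps_pf = unit_caps_pf            # Cs1..Csm in picofarads
        self.enabled = [False] * len(unit_caps_pf)

    def set_enabled(self, mask):
        """Bit i of mask closes the control-switch pair of capacitor Cs(i+1)."""
        self.enabled = [bool(mask & (1 << i))
                        for i in range(len(self.unit_caps_pf))]

    @property
    def capacitance_pf(self):
        # Parallel capacitors add: only enabled units contribute.
        return sum(c for c, on in zip(self.unit_caps_pf, self.enabled) if on)

# Example: the controller raises the bank capacitance by enabling more units.
cin = CapacitorBank([0.5, 0.5, 1.0, 2.0])
cin.set_enabled(0b0001)
print(cin.capacitance_pf)   # 0.5 pF (only Cs1 enabled)
cin.set_enabled(0b1111)
print(cin.capacitance_pf)   # 4.0 pF (all units enabled)
```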
According to certain aspects of the present disclosure, the SIMD controller 150 may also operate each receiver in a single-ended mutual capacitance sensing mode. In this regard, the operation of the receiver 120a in the single-ended mutual capacitance sensing mode is now discussed with reference to FIG. 6. It should be appreciated that each of the other receivers can also operate in the single-ended mutual capacitance sensing mode in the manner discussed below.

In the example in FIG. 6, the receiver 120a includes a digital-to-analog converter (DAC) 610 and a switch 620 between the output of the DAC 610 and the second input of the amplifier 122a. For ease of illustration, the switches 312(2), 314(2), 316(2), 318(2), 322(2), and 324(2) and the input capacitor Cin2 are not shown in FIG. 6.

In the single-ended mutual capacitance sensing mode, the switch 620 is closed to couple the output of the DAC 610 to the second input of the amplifier 122a. In this mode, the receiver 120a is used to measure the capacitance of the mutual capacitor Cm1 (not shown in FIG. 6) on the receiving line RX(n-1). The output voltage of the DAC 610 (denoted "VDAC") is controlled by a digital control signal from the corresponding PE 140a or the SIMD controller 150, as discussed further below.

In some aspects, the PE 140a determines the output voltage setting of the DAC 610 during a calibration process. The calibration process can be performed at the factory. During the calibration process, the touch panel can be placed in a controlled environment in which no objects (including fingers) are placed near the touch sensors of the touch panel. The SIMD controller 150 can then switch the switches 316(1), 318(1), 322(1), and 324(1) according to the switching sequence discussed above, where the input capacitor Cin1 is coupled to the receiving line RX(n-1) during the sampling phase to sample the voltage on the self-capacitor Csrx1, and is coupled to the feedback capacitor Cfb1 during the charge transfer phase to transfer the charge from the input capacitor Cin1 to the feedback capacitor Cfb1. In this case, the input capacitor Cin1 samples the voltage on the self-capacitor Csrx1 when the user's finger is not present. This voltage can be considered the baseline voltage on the self-capacitor Csrx1.

Each time the receiver samples the voltage on the self-capacitor Csrx1, the PE 140a or the SIMD controller 150 may set the DAC 610 to a different output voltage VDAC, and receive a digital signal (digital code) from the ADC 135a representing the differential output voltage of the amplifier 122a. The PE 140a can record the digital codes in memory, where each digital code corresponds to a different output voltage of the DAC. After recording the digital codes for the different output voltages of the DAC 610, the PE 140a may evaluate the digital codes to determine the digital code corresponding to the minimum differential output voltage of the amplifier 122a. The determined digital code can be considered a baseline digital code. The PE 140a may then record the baseline digital code in memory and set the output voltage of the DAC 610 to the output voltage corresponding to the baseline digital code. Therefore, the calibration process determines the output voltage setting of the DAC 610 that produces a small differential output voltage for the baseline case (i.e., when the user's finger is not present).
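Viewed as firmware, this calibration is a sweep-and-minimize loop. The following is a minimal sketch under that reading; the `dac`, `adc`, and `sample_channel` interfaces are hypothetical stand-ins for the hardware actions described above, not APIs from the disclosure.

```python
# Minimal sketch of the DAC baseline calibration described above.
# dac.set_output, adc.read, and sample_channel are hypothetical hooks
# for "set VDAC", "read the ADC code", and "run one sample/transfer
# cycle with no finger present".

def calibrate_dac(dac, adc, sample_channel, dac_codes):
    """Sweep the DAC settings and pick the one that minimizes the
    amplifier's differential output as seen by the ADC.
    Returns (baseline_adc_code, best_dac_code)."""
    results = {}
    for code in dac_codes:
        dac.set_output(code)        # set VDAC for this trial
        sample_channel()            # sampling + charge-transfer phases
        results[code] = adc.read()  # digital code for the diff. output

    # The smallest-magnitude code corresponds to the smallest
    # differential output voltage (the baseline case).
    best_dac_code = min(results, key=lambda c: abs(results[c]))
    dac.set_output(best_dac_code)
    return results[best_dac_code], best_dac_code

# At run time, the PE subtracts the recorded baseline code from each
# new ADC code to obtain the compensated code:
#   compensated = adc.read() - baseline_adc_code
```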
Reducing the differential output voltage of the amplifier for the baseline case increases the dynamic range of the ADC 135a in the single-ended mutual capacitance sensing mode.

After the calibration process, the receiver 120a is ready to detect the presence of a user's finger in the single-ended mutual capacitance sensing mode. In this mode, the SIMD controller 150 can switch the switches 316(1), 318(1), 322(1), and 324(1) according to the switching sequence discussed above, where the input capacitor Cin1 is coupled to the receiving line RX(n-1) during the sampling phase to sample the voltage on the self-capacitor Csrx1, and is coupled to the feedback capacitor Cfb1 during the charge transfer phase to transfer the charge from the input capacitor Cin1 to the feedback capacitor Cfb1. Each time the receiver samples the voltage on the self-capacitor Csrx1, the PE 140a may receive the corresponding digital code from the ADC 135a and subtract the baseline digital code to obtain a compensated digital code. Because the baseline is subtracted, the compensated digital code provides a measure of the change in the capacitance of the corresponding mutual capacitor Cm1 due to the presence of the user's finger. Therefore, in this mode, the presence of the user's finger is detected by detecting the change in the capacitance of the mutual capacitor Cm1.

In some aspects, the DAC 610 and the switch 620 may be implemented using the input capacitor Cin2 and the switches in the switched capacitor network 124a associated with the input capacitor Cin2. In other words, the components of the receiver 120a used in the differential mode can be reconfigured to implement the DAC 610. In these aspects, the SIMD controller 150 may first close (turn on) the switches 312(2) and 324(2), and open the switches 316(2), 322(2), 314(2), and 318(2), to charge the input capacitor Cin2 using the reference voltage Vr1. The reference voltage Vr1 may be a fixed reference voltage equal to the supply voltage of the receiver or a fraction of the supply voltage.

After charging the input capacitor Cin2, the controller 150 may decouple the input capacitor Cin2 from the reference voltage Vr1 by opening the switch 312(2). After the input capacitor Cin2 is decoupled from the reference voltage Vr1, the controller 150 may change the capacitance of the input capacitor Cin2 to change (adjust) the voltage on the input capacitor Cin2. For example, if the input capacitor Cin2 is implemented using the switchable capacitor bank 505 in FIG. 5, the controller 150 may first charge the input capacitor Cin2 using the reference voltage Vr1 with only one of the capacitors in the bank 505 (for example, Cs1) enabled. The controller 150 may then decouple the input capacitor Cin2 from the reference voltage Vr1 and enable one or more additional capacitors in the bank 505 to reduce the voltage on the input capacitor Cin2, via charge sharing, to one of a plurality of different voltages. The greater the number of additional capacitors in the bank 505 that are enabled, the greater the amount by which the voltage on the input capacitor Cin2 is reduced. Therefore, in this example, the controller 150 adjusts the voltage of the DAC implemented using the input capacitor Cin2 by controlling the number of additional capacitors in the bank 505 that are enabled after the input capacitor Cin2 is charged using the reference voltage Vr1.
The input capacitor Cin2 can then be coupled to the second input of the amplifier 122a by closing the switches 322(2) and 318(2) with the switches 312(2), 314(2), 316(2), and 324(2) open.

Generally speaking, the controller 150 sets the voltage of the DAC implemented using the input capacitor Cin2 by charging the input capacitor Cin2 using the reference voltage Vr1, decoupling the input capacitor Cin2 from the reference voltage, and changing (adjusting) the capacitance of the input capacitor Cin2 to produce one of the voltages supported by the DAC. Although the reference voltage Vr1 is used in the above example, it should be appreciated that a different reference voltage can be used to charge the input capacitor Cin2. It should also be appreciated that the input capacitor Cin2 can be charged using a switching sequence different from the exemplary switching sequence given above.

According to certain aspects of the present disclosure, the SIMD controller 150 may also operate each receiver in a differential self-capacitance sensing mode. In this regard, the operation of the receiver 120a in the differential self-capacitance sensing mode is now discussed. It should be appreciated that each of the other receivers can also operate in the differential self-capacitance sensing mode in the manner discussed below.

In this mode, the controller 150 configures the receiver 120a to drive the self-capacitors Csrx1 and Csrx2 of the receiving lines RX(n-1) and RX(n), respectively, and to sense the voltages on the self-capacitors Csrx1 and Csrx2. To drive the self-capacitor Csrx1, the controller 150 uses the input capacitor Cin1 to pump charge to the self-capacitor Csrx1 in multiple pump cycles. Each pump cycle includes a charging phase and a charge sharing phase. During the charging phase, the controller closes the switches 312(1) and 324(1), and opens the switches 316(1), 322(1), 314(1), and 318(1), to charge the input capacitor Cin1 to the reference voltage Vr1. The connections for the charging phase are illustrated in FIG. 7A. During the charge sharing phase, the controller opens the switch 312(1) and closes the switch 316(1) to decouple the input capacitor Cin1 from the reference voltage Vr1 and couple the input capacitor Cin1 to the self-capacitor Csrx1. This allows the charge on the input capacitor Cin1 to flow to the self-capacitor Csrx1 until the voltages on the input capacitor Cin1 and the self-capacitor Csrx1 are approximately equal. The connections for the charge sharing phase are illustrated in FIG. 7B.

FIG. 8 is a timeline showing an example of the voltage on the self-capacitor Csrx1, in which charge is pumped to the self-capacitor Csrx1 in multiple pump cycles. As shown in FIG. 8, the voltage on the self-capacitor Csrx1 increases by a voltage step in each pump cycle. Although the voltage steps are shown as uniform in FIG. 8 for simplicity, it should be appreciated that this is not necessarily the case. At the end of the pump cycles, the voltage on the self-capacitor Csrx1 in FIG. 8 has risen to the voltage Vsrx1.
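The pump behavior follows from charge conservation: when Cin1 (charged to Vr1) is connected in parallel with Csrx1, the shared voltage is a weighted average of the two capacitor voltages. The sketch below models FIGS. 7A to 8 numerically; the component values are illustrative assumptions, chosen only to show the trend.

```python
# Minimal numeric sketch of the charge pump in FIGS. 7A-8: each pump
# cycle charges Cin1 to Vr1 and then shares its charge with the
# receiving-line self-capacitor Csrx1. Values are illustrative.

def pump_voltage(csrx_pf, cin_pf, vr1, cycles):
    """Return the self-capacitor voltage after each pump cycle."""
    v = 0.0
    history = []
    for _ in range(cycles):
        # Charge conservation when Cin1 (at Vr1) is connected in
        # parallel with Csrx1 (at v): the steps shrink as v nears Vr1,
        # so the steps are not exactly uniform.
        v = (cin_pf * vr1 + csrx_pf * v) / (cin_pf + csrx_pf)
        history.append(v)
    return history

# A larger Csrx1 (e.g., a finger present) yields a lower voltage after
# the same number of pump cycles, which is the effect being sensed.
print(pump_voltage(csrx_pf=20.0, cin_pf=2.0, vr1=1.8, cycles=8)[-1])
print(pump_voltage(csrx_pf=25.0, cin_pf=2.0, vr1=1.8, cycles=8)[-1])
```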
The self-capacitor Csrx2 can be driven in a manner similar to the self-capacitor Csrx1. More specifically, the controller 150 may configure the receiver 120a to use the input capacitor Cin2 to pump charge to the self-capacitor Csrx2 in multiple pump cycles, in a manner similar to that discussed above for pumping charge to the self-capacitor Csrx1 using the input capacitor Cin1.

Therefore, the receiver 120a charge-pumps the self-capacitor Csrx1 to a voltage (denoted "Vsrx1") and charge-pumps the self-capacitor Csrx2 to a voltage (denoted "Vsrx2"). The voltage Vsrx1 on the self-capacitor Csrx1 depends on the capacitance of the self-capacitor Csrx1: the larger the capacitance of the self-capacitor Csrx1, the lower the voltage Vsrx1. The presence of a user's finger usually causes the capacitance of the self-capacitor Csrx1 to increase, and therefore the voltage Vsrx1 to decrease.

Similarly, the voltage Vsrx2 on the self-capacitor Csrx2 depends on the capacitance of the self-capacitor Csrx2: the larger the capacitance of the self-capacitor Csrx2, the lower the voltage Vsrx2. The presence of a user's finger usually causes the capacitance of the self-capacitor Csrx2 to increase, and therefore the voltage Vsrx2 to decrease.

After the charge pumping, the receiver 120a may sample the voltages Vsrx1 and Vsrx2 on the self-capacitors Csrx1 and Csrx2, respectively, to generate a differential voltage corresponding to the difference between the voltages Vsrx1 and Vsrx2. For example, the SIMD controller 150 may switch the switches 316(1), 318(1), 322(1), and 324(1) according to the switching sequence discussed above, where the input capacitor Cin1 is coupled to the receiving line RX(n-1) during the sampling phase to sample the voltage Vsrx1, and is coupled to the feedback capacitor Cfb1 during the charge transfer phase to transfer the charge from the input capacitor Cin1 to the feedback capacitor Cfb1. Similarly, the SIMD controller 150 can switch the switches 316(2), 318(2), 322(2), and 324(2) according to the switching sequence discussed above, where the input capacitor Cin2 is coupled to the receiving line RX(n) during the sampling phase to sample the voltage Vsrx2, and is coupled to the feedback capacitor Cfb2 during the charge transfer phase to transfer the charge from the input capacitor Cin2 to the feedback capacitor Cfb2.
Therefore, the amplifier 122a outputs a differential voltage corresponding to the difference between the voltages Vsrx1 and Vsrx2. Since the voltages Vsrx1 and Vsrx2 depend on the capacitances of the self-capacitors Csrx1 and Csrx2, respectively, the differential output voltage of the amplifier 122a represents the difference between the capacitances of the self-capacitors Csrx1 and Csrx2. The difference between the capacitances of the self-capacitors Csrx1 and Csrx2 indicates the presence of a user's finger. This is because the surface of the user's finger is curved, so the self-capacitances are changed (affected) by different amounts. Therefore, the differential output voltage of the amplifier 122a can be used to detect the presence of the user's finger.

As discussed above, in the differential self-capacitance sensing mode the receiver detects the presence of a user's finger by detecting the difference between the capacitances of the self-capacitors Csrx1 and Csrx2 of the receiving lines RX(n-1) and RX(n), respectively. The differential output voltage of the amplifier 122a, which indicates the difference between the capacitances of the self-capacitors Csrx1 and Csrx2, allows the processor to detect the presence of a user's finger on the receiving lines RX(n-1) and RX(n). However, the differential output voltage does not allow the processor to determine the position of the user's finger along the receiving lines RX(n-1) and RX(n).

In contrast, the differential output voltage in the differential mutual capacitance sensing mode discussed above does allow the processor to determine the position of the user's finger along the receiving lines RX(n-1) and RX(n). This is because the differential output voltage in the differential mutual capacitance sensing mode indicates the difference between the mutual capacitances of two touch sensors on the receiving lines RX(n-1) and RX(n), where the touch sensors are driven through one of the transmission lines. In this case, the position of the user's finger corresponds to the intersection of the driving transmission line and the receiving lines RX(n-1) and RX(n).

Therefore, the differential self-capacitance sensing mode does not allow the processor to determine the position of the user's finger on the touch panel 110 with the same level of accuracy as the differential mutual capacitance sensing mode. However, the differential self-capacitance sensing mode generally requires less power, and can therefore be used to save power in applications that do not require the precise position of the user's finger on the touch panel 110.

For example, the controller 150 may configure the receivers 120 to operate in the differential self-capacitance sensing mode when the interface 112 is in a low power mode. The interface 112 may enter the low power mode, for example, when a user's finger has not been detected within a predetermined period of time. When one or more of the receivers in the low power mode detects a user's finger on the touch panel 110, the controller 150 can respond by reconfiguring the receivers 120 to operate in the differential mutual capacitance sensing mode discussed above. Therefore, in this example, when a user's finger is detected in the low power mode, the receivers switch from the differential self-capacitance sensing mode to the differential mutual capacitance sensing mode.

According to certain aspects of the present disclosure, the SIMD controller 150 may also operate each receiver in a single-ended self-capacitance sensing mode. In this regard, the operation of the receiver 120a in the single-ended self-capacitance sensing mode is now discussed with reference to FIG. 9. It should be appreciated that each of the other receivers can also operate in the single-ended self-capacitance sensing mode in the manner discussed below.

In the example in FIG. 9, the receiver 120a includes a switch 910 for selectively coupling the second input of the amplifier 122a to the reference voltage Vr3. For ease of illustration, the switches 312(2), 314(2), 316(2), 318(2), 322(2), and 324(2) and the input capacitor Cin2 are not shown in FIG. 9.

In the single-ended self-capacitance sensing mode, the switch 910 is closed to couple the second input of the amplifier 122a to the reference voltage Vr3, which may be approximately equal to half of the receiver's supply voltage or another voltage.

In certain aspects, the PE 140a determines the charge pumping sequence of the receiver 120a during a calibration process. During the calibration process, the touch panel can be placed in a controlled environment in which no objects (including fingers) are placed near the touch sensors of the touch panel. The controller 150 can then switch the switches in the switched capacitor network to charge-pump the self-capacitor Csrx1 using the input capacitor Cin1, as discussed above.
For example, the controller 150 may charge the self-capacitor Csrx1 using different charge pumping sequences, where each charge pumping sequence may include a different number of pump cycles. For each charge pumping sequence, the receiver 120a may sample the voltage Vsrx1 on the self-capacitor Csrx1. Since the charge pumping sequences have different numbers of pump cycles, the voltage Vsrx1 can differ for different charge pumping sequences.

For each charge pumping sequence, the ADC 135a receives the corresponding differential output voltage from the amplifier 122a and converts the differential output voltage into a corresponding digital code. The PE 140a receives the digital codes for the different charge pumping sequences from the ADC 135a and records the digital codes in memory. The PE 140a may evaluate the digital codes to determine the digital code corresponding to the minimum differential output voltage of the amplifier 122a. The determined digital code can be considered a baseline digital code. The PE 140a can then record the baseline digital code and the corresponding charge pumping sequence in memory.

After the calibration process, the receiver 120a is ready to detect the presence of a user's finger in the single-ended self-capacitance sensing mode. In this mode, the SIMD controller 150 can configure the receiver 120a to charge-pump the self-capacitor Csrx1 using the charge pumping sequence determined during the calibration process and to sample the resulting voltage Vsrx1 on the self-capacitor Csrx1. Each time the receiver samples the voltage Vsrx1 on the self-capacitor Csrx1, the PE 140a may receive the corresponding digital code from the ADC 135a and subtract the baseline digital code to obtain a compensated digital code. Because the baseline is subtracted, the compensated digital code provides a measure of the change in the capacitance of the self-capacitor Csrx1 due to the presence of the user's finger. Therefore, in this mode, the presence of the user's finger is detected by detecting the change in the capacitance of the self-capacitor Csrx1 from the baseline.

In some aspects, the capacitance of the input capacitor Cin can be adjusted during the charge pumping sequence to adjust the size of the voltage steps. In this regard, FIG. 10 shows an example in which the size of the voltage steps is changed during the charge pumping sequence by adjusting the capacitance of the input capacitor Cin. In this example, the input capacitor Cin is initially set to a first capacitance to provide relatively large voltage steps 1010. This can be used to reduce the number of pump cycles in the charge pumping sequence. In one example, the first capacitance may correspond to the maximum capacitance setting of the input capacitor Cin. For the example in which the input capacitor Cin is implemented using the switchable capacitor bank 505, the input capacitor Cin can be set to the maximum capacitance by enabling all of the capacitors in the bank 505.

After a certain number of pump cycles, the controller 150 may reduce the capacitance of the input capacitor Cin to produce smaller voltage steps 1020. For the example in which the input capacitor Cin is implemented using the switchable capacitor bank 505, the controller 150 may reduce the capacitance by disabling one or more of the capacitors in the bank 505. Reducing the voltage step allows the controller 150 to control the voltage on the self-capacitor with a finer granularity during calibration.
The finer granularity may allow the controller to achieve a smaller differential output voltage for the baseline case during calibration.

It should be appreciated that the duration of the pump cycles and/or the size of the voltage steps can be varied to drive the self-capacitor Csrx1 using any of a variety of different waveforms. The period of the pump cycle controls the time interval between voltage steps. As discussed above, the size of the voltage steps can be controlled by adjusting the capacitance of the input capacitor Cin.

After charge-pumping the self-capacitor Csrx1 and sampling the voltage Vsrx1, the charge on the self-capacitor Csrx1 can be removed to reset the self-capacitor Csrx1. In one example, this can be achieved by shorting the self-capacitor Csrx1. For example, if the reference voltage Vr2 is approximately ground, the self-capacitor Csrx1 can be shorted to ground by closing the switches 316(1) and 322(1). In another example, the controller may use the input capacitor Cin1 to remove charge from the self-capacitor Csrx1 in multiple discharge cycles. During each discharge cycle, the controller can discharge the input capacitor Cin1 by closing the switches 322(1) and 324(1) with the switch 316(1) open. The controller can then couple the input capacitor Cin1 to the self-capacitor Csrx1 by opening the switch 322(1) and closing the switch 316(1). This allows the input capacitor Cin1 to remove a portion of the charge from the self-capacitor Csrx1. Therefore, in this example, a portion of the charge on the self-capacitor Csrx1 is removed at a time. In the differential self-capacitance sensing mode, the charge on the self-capacitor Csrx2 can be removed in a similar manner.

According to certain aspects of the present disclosure, the SIMD controller 150 can also operate each receiver in a charge amplifier mode. In this regard, the operation of the receiver 120a in the charge amplifier mode is now discussed with reference to FIG. 11. It should be appreciated that each of the other receivers can also operate in the charge amplifier mode in the manner discussed below.

In the example in FIG. 11, the receiver 120a includes the switch 910 for selectively coupling the second input of the amplifier 122a to the reference voltage Vr3. For ease of illustration, the switches 312(2), 314(2), 316(2), 318(2), 322(2), and 324(2) and the input capacitor Cin2 are not shown in FIG. 11. The receiver 120a also includes a switch 1110 for coupling the receiving line RX(n-1) to the first input of the amplifier, and a switch 1120 for coupling the input capacitor Cin1 in parallel with the feedback capacitor Cfb1 to increase the feedback capacitance, as discussed further below.

In the charge amplifier mode, the switch 910 is closed to couple the second input of the amplifier 122a to the reference voltage Vr3, which may be approximately equal to half of the receiver's supply voltage or another voltage. In addition, the switch 1110 is closed to couple the receiving line RX(n-1) to the first input of the amplifier 122a. The switches 312(1), 314(1), 316(1), 322(1), and 324(1) are open.

In addition, the switches 318(1) and 1120 are closed to couple the input capacitor Cin1 in parallel with the feedback capacitor Cfb1. Therefore, in this mode, the capacitance of the input capacitor Cin1 is added to the feedback capacitance between the first input and the first output of the amplifier 122a, thereby increasing the feedback capacitance.
According to certain aspects of the present disclosure, the SIMD controller 150 can also operate each receiver in a charge amplifier mode. In this regard, the operation of the receiver 120a in the charge amplifier mode will now be discussed with reference to FIG. 11. It should be appreciated that each of the other receivers can also operate in the charge amplifier mode in the manner discussed below.

In the example in FIG. 11, the receiver 120a includes a switch 910 for selectively coupling the second input of the amplifier 122a to the reference voltage Vr3. For ease of description, the switches 312(2), 314(2), 316(2), 318(2), 322(2), and 324(2) and the input capacitor Cin2 are not shown in FIG. 11. The receiver 120a also includes a switch 1110 for coupling the receiving line Rx(n-1) to the first input of the amplifier, and a switch 1120 for coupling the input capacitor Cin1 in parallel with the feedback capacitor Cfb1 to increase the feedback capacitance, as discussed further below.

In the charge amplifier mode, the switch 910 is closed to couple the second input of the amplifier 122a to the reference voltage Vr3, which may be approximately equal to half of the receiver's supply voltage or another voltage. In addition, the switch 1110 is closed to couple the receiving line Rx(n-1) to the first input of the amplifier 122a. The switches 312(1), 314(1), 316(1), 322(1), and 324(1) are open. In addition, the switches 318(1) and 1120 are closed to couple the input capacitor Cin1 and the feedback capacitor Cfb1 in parallel. Therefore, in this mode, the capacitance of the input capacitor Cin1 is added to the feedback capacitance between the first input and the first output of the amplifier 122a, thereby increasing the feedback capacitance.

In the charge amplifier mode, a larger feedback capacitance may be required to integrate a relatively large amount of charge from the mutual capacitors of the touch sensors on the receiving line Rx(n-1). If more feedback capacitance is required in the charge amplifier mode, the receiver 120a may include an additional switch (not shown) for coupling the input capacitor Cin2 in parallel with the feedback capacitor Cfb1 and/or for coupling another capacitor in parallel with the feedback capacitor Cfb1.

To sense the change in the capacitance of a mutual capacitor of a touch sensor due to the presence of the user's finger, the transmitter drives the mutual capacitor via the corresponding transmission line. The feedback capacitor of the amplifier integrates the charge from the mutual capacitor to generate an output voltage that is a function of the capacitance of the mutual capacitor. The ADC 135a converts the output voltage into a digital code, which represents the change in the capacitance of the mutual capacitor.

FIG. 12A shows an example of the connections in the charge amplifier mode, in which the receiving line Rx(n-1) is coupled to the first input of the amplifier 122a, and the input capacitor Cin1 is coupled in parallel with the feedback capacitor Cfb1 to increase the feedback capacitance.

FIG. 12B shows an example in which a capacitor Cb is coupled to the first input of the amplifier 122a to remove some or all of the baseline charge from the mutual capacitor on the receiving line Rx(n-1). For example, the dynamic range of the ADC 135a can be improved in this way. For example, the capacitance of the capacitor Cb may be approximately equal to the baseline capacitance of the mutual capacitor (i.e., the capacitance of the mutual capacitor when a finger is not present). During operation, the capacitor Cb can be driven by a signal 1210, which is the inverse of the signal used to drive the mutual capacitor. This causes the capacitor Cb to remove the baseline charge from the mutual capacitor, so that the remaining charge (which is integrated by the feedback capacitor of the amplifier 122a) is due to the change in the capacitance of the mutual capacitor caused by the presence of the user's finger.

Therefore, the SIMD controller 150 can configure (program) the receivers in the interface to operate in any one of multiple different receiver modes, including the differential mutual capacitance sensing mode, the single-ended mutual capacitance sensing mode, the differential self-capacitance sensing mode, the single-ended self-capacitance sensing mode, and the charge amplifier mode. In addition, each receiver can reuse the same components for the different modes to save chip area. For example, the input capacitor Cin1 in each receiver can be used to sample the voltage on the corresponding receiving line, perform charge pumping on the corresponding receiving line, and/or increase the feedback capacitance, depending on the selected mode. In addition, the input capacitor Cin2 in each receiver can be used to sample the voltage on the corresponding receiving line, charge pump the corresponding receiving line, increase the feedback capacitance, and/or implement the DAC, depending on the selected mode.
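To make the roles of the feedback capacitance and the cancellation capacitor Cb described above concrete, the sketch below computes the integrated charge and the resulting output swing with and without Cb. All component values are illustrative assumptions, not device parameters:

```python
# Charge amplifier output with and without baseline cancellation (FIG. 12B idea).
# Q = Cm * Vdrive is injected by the mutual capacitor; Cb, driven with the
# inverted signal, removes Cb * Vdrive of it. Vout magnitude ~ Q_net / Cfb_total.
# All values below are illustrative.

c_m_baseline = 2.0e-12   # mutual capacitance, no finger
delta_c      = -0.2e-12  # change due to a finger
c_b          = 2.0e-12   # cancellation capacitor ~ baseline capacitance
c_fb_total   = 4.0e-12   # Cfb1 + Cin1 in parallel
v_drive      = 1.0       # transmit drive amplitude

q_no_cancel = (c_m_baseline + delta_c) * v_drive
q_cancelled = (c_m_baseline + delta_c - c_b) * v_drive  # Cb removes baseline charge

print(abs(q_no_cancel) / c_fb_total)  # 0.45 V: mostly baseline, consumes ADC range
print(abs(q_cancelled) / c_fb_total)  # 0.05 V: only the finger-induced change
```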
The high configurability of the receivers 120 allows the receivers to be used with different touch panel designs without having to develop a custom interface for each touch panel design, thereby reducing development costs.

As discussed above, the SIMD controller 150 can also configure (program) the PEs 140 to perform one or more digital operations (FFT, demodulation, etc.). For example, the SIMD controller 150 may program the PEs to enable simultaneous driving of multiple transmission lines using, for example, Walsh encoding and decoding, as discussed further below.

In a conventional system, one of the transmission lines of the touch panel is driven at a time (for example, sequentially). Whenever one of the transmission lines is driven, the resulting signals on the receiving lines are sensed in parallel by the receivers. For example, when the transmission line Tx1 in FIG. 1 is driven with a signal (for example, a square wave signal), the receivers 120 may sample the corresponding signals (for example, voltages) on the receiving lines Rx1 to Rx5 in parallel. In this example, the signals on the receiving lines correspond to the touch sensors (for example, mutual capacitors) located at the intersections of the transmission line Tx1 and the receiving lines Rx1 to Rx5. The disadvantage of driving one transmission line at a time is that it increases the time required to read the entire touch panel.

To address this problem, the SIMD controller 150 can program the PEs 140 to drive the transmission lines simultaneously using, for example, Walsh encoding and decoding. For example, the controller 150 may configure each PE 140 to drive the corresponding transmitter 130 with a signal (for example, a pulse sequence) multiplied by a different Walsh code. In another example, each PE 140 may simply drive the corresponding transmitter using the corresponding Walsh code. As discussed below, this allows the PEs 140 to use Walsh decoding to separate the received signals corresponding to the different transmission lines. In this example, the controller 150 may configure the PEs 140 to use the corresponding transmitters 130 to drive the transmission lines simultaneously, wherein the driving signal of each transmission line is encoded using a different Walsh code.

In this example, since the transmission lines are driven at the same time, the resulting signal received by each receiver 120 is the sum of the signals corresponding to the different transmission lines. The controller 150 may configure each receiver 120 to sample the corresponding signal multiple times according to a sampling clock to generate a plurality of digital codes using the corresponding ADC 135. Then, each PE 140 may perform Walsh decoding on the received digital codes based on the Walsh codes used by the transmitters 130. The Walsh decoding generates multiple sets of digital codes, where each set of digital codes corresponds to one of the transmission lines. Therefore, each PE 140 can use Walsh decoding to separate the received signals corresponding to the different transmission lines. Although Walsh codes are used in the example given above, it should be appreciated that the present disclosure is not limited to this example, and other types of orthogonal codes may be used to drive the transmission lines simultaneously.
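A minimal sketch of the Walsh encode/decode idea described above: each transmission line's contribution is spread by a different row of a Hadamard matrix, the receiver observes the sum, and correlating against each row separates the per-line signals. The Hadamard construction is written out rather than taken from a library:

```python
# Walsh (Hadamard) encoding/decoding sketch for simultaneous Tx drive.
# Each Tx line k is driven with amplitude a[k] spread by Walsh code W[k];
# the receiver observes the sum; correlating with W[k] recovers a[k].

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

n_tx = 4
W = hadamard(n_tx)                 # one Walsh code (row) per Tx line
amplitudes = [0.9, 1.0, 0.4, 1.0]  # per-line signal; e.g., a finger attenuates Tx3

# Received samples = sum over Tx lines of amplitude * code chip
rx = [sum(amplitudes[k] * W[k][t] for k in range(n_tx)) for t in range(n_tx)]

# Decode: correlate the received samples with each Walsh code and normalize
decoded = [sum(rx[t] * W[k][t] for t in range(n_tx)) / n_tx for k in range(n_tx)]
print(decoded)  # -> [0.9, 1.0, 0.4, 1.0], the per-line signals separated
```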
The SIMD controller 150 can also configure (program) the PEs 140 to perform filtering (e.g., FIR filtering) to filter out noise. For example, each PE 140 may be configured to filter out noise (for example, noise generated by a display driver IC, the human body's own noise, etc.) by filtering out the frequency spectrum containing the noise. In this example, the transmission lines can be driven by signals with frequency spectra different from that of the noise, so that the PEs 140 do not filter out the desired signals.

FIG. 13 illustrates an exemplary processing architecture 1305 according to certain aspects of the present disclosure. The processing architecture 1305 includes multiple slices 145(1)-145(m), and each slice 145(1)-145(m) includes a corresponding analog front end (AFE) 115(1)-115(m), a corresponding analog-to-digital converter (ADC) 135(1)-135(m), and a corresponding processing engine (PE) 140(1)-140(m).

Each AFE 115(1)-115(m) includes a corresponding receiver (not shown in FIG. 13), which can be implemented using the exemplary receiver 120 shown in FIG. 3. As discussed above, each AFE 115(1)-115(m) may also include a corresponding transmitter (not shown in FIG. 13) to drive one or more of the transmission lines of the touch panel.

The SIMD controller 150 (not shown in FIG. 13) can configure the receiver in each AFE 115(1)-115(m) to operate in any of a number of different receiver modes, including any of the exemplary receiver modes discussed above. The receiver in each AFE 115(1)-115(m) is configured to receive sensor signals from the touch panel (not shown in FIG. 13) via the corresponding channel 1312(1)-1312(m). For the differential sensing modes, each channel can represent two adjacent receiving lines of the touch panel. For the single-ended sensing modes, each channel can represent a single receiving line of the touch panel.

The ADC 135(1)-135(m) in each slice 145(1)-145(m) converts the output signal of the corresponding receiver into a digital signal, which can be input to the corresponding PE 140(1)-140(m). The corresponding PE may include one or more programmable arithmetic logic units (ALUs), which perform digital processing on the corresponding digital signal. The digital processing may include one or more of the following: fast Fourier transform (FFT), demodulation, filtering, averaging, Walsh decoding, baseline subtraction, etc. An exemplary embodiment of one of the PEs 140(1)-140(m) is discussed below with reference to FIG. 14. As discussed further below, the SIMD controller 150 (not shown in FIG. 13) can program the PEs 140(1)-140(m) to perform the same digital processing on the corresponding digital signals (e.g., digital codes) in parallel based on the same instruction set.

In the exemplary processing architecture 1305, the slices 145(1)-145(m) are divided into multiple subsets 1310(1)-1310(L). In the example shown in FIG. 13, each subset 1310(1)-1310(L) includes four corresponding slices. However, it should be appreciated that the present disclosure is not limited to this example, and the number of slices in each subset may be different from four.

Each subset 1310(1)-1310(L) also includes a corresponding local memory 1315(1)-1315(L), which may include static random access memory (SRAM) and/or another type of memory. As discussed further below, each local memory 1315(1)-1315(L) can store digital values from the slices in the corresponding subset. The digital values in each local memory 1315(1)-1315(L) provide sensor information for the corresponding local area of the touch panel.
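As one illustration of the per-channel filtering noted above, the sketch below applies a short FIR filter to a stream of digital codes. The tap values are placeholders; real taps would be designed to reject the known noise spectrum (for example, a display driver's switching frequency):

```python
# Minimal per-channel FIR filtering sketch. The taps below form a simple
# low-pass (moving-average) filter; real taps would be designed to
# notch out the known noise spectrum.

def fir(samples, taps):
    """Causal FIR: y[n] = sum_k taps[k] * x[n-k], with zero-padded history."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * samples[n - k]
        out.append(acc)
    return out

codes = [100, 104, 98, 150, 101, 99, 102, 97]  # one noisy spike at index 3
taps = [0.25, 0.25, 0.25, 0.25]                # 4-tap averager
print(fir(codes, taps))                         # the spike is spread and attenuated
```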
The exemplary processing architecture 1305 also includes a global memory 1320 and a processor 1330 (e.g., a microprocessor). The processor 1330 may correspond to the main processor discussed above. The digital values in the local memories 1315(1)-1315(L) may be written into the global memory 1320 to provide sensor information for a large area of the touch panel (for example, the entire touch panel) in the global memory 1320. As discussed further below, this allows the processor 1330 (which has access to the global memory 1320) to process digital values corresponding to a large area of the touch panel.

In operation, the receivers in the slices 145(1)-145(m) receive sensor signals from the respective channels 1312(1)-1312(m). For example, in a differential sensing mode, each receiver can receive the sensor signals on the corresponding adjacent receiving lines and output the received sensor signals as a differential output voltage, which is a function of the capacitances (for example, mutual capacitances and/or self capacitances) of the adjacent receiving lines. In another example, in a single-ended sensing mode, each receiver can receive the sensor signal on the corresponding receiving line and output the received sensor signal as an output voltage, which is a function of the capacitance (for example, mutual capacitance and/or self capacitance) of the receiving line.

The ADC 135(1)-135(m) in each slice 145(1)-145(m) converts the output signal (for example, output voltage) of the corresponding receiver into a digital signal (digital code), which can be input to the corresponding PE 140(1)-140(m). Each PE 140(1)-140(m) performs digital processing on the corresponding digital signal. In the discussion below, a digital code is referred to as a digital value, which represents the value of the digital code.

In some aspects, each ADC 135(1)-135(m) can sample the corresponding receiver output signal at different sampling times to generate multiple digital codes (digital values). For example, the SIMD controller 150 may operate each receiver through a plurality of switching sequences, where each switching sequence includes a sampling phase, a charge transfer phase, and a reset phase. At the end of the charge transfer phase of each switching sequence, the corresponding ADC 135(1)-135(m) can sample the receiver output signal (e.g., output voltage) to generate a corresponding digital value. In this example, the sampling times of the ADCs 135(1)-135(m) can be timed to coincide with the charge transfer phases of the switching sequences. Therefore, in this example, each ADC outputs multiple digital values corresponding to the output of the corresponding receiver at different sampling times.

In these aspects, each PE 140(1)-140(m) performs digital processing on the corresponding digital values (i.e., the digital values of the corresponding channel). For example, each PE 140(1)-140(m) can average the corresponding digital values to generate an average digital value. In another example, each PE 140(1)-140(m) may perform filtering (e.g., finite impulse response (FIR) filtering) on the corresponding digital values to filter out noise. For the example in which each receiver operates in a single-ended sensing mode, each PE 140(1)-140(m) can subtract the corresponding baseline digital code from each of the corresponding digital values.
Alternatively, each PE 140(1)-140(m) may first calculate the average of the corresponding digital values, and then subtract the corresponding baseline digital code from the average digital value.

Therefore, in this example, each PE 140(1)-140(m) performs digital processing on the digital values of the corresponding channel. Each PE 140(1)-140(m) can store the corresponding one or more processed digital values in the corresponding local memory 1315(1)-1315(L). For example, the PEs 140(1)-140(4) may store their processed digital values in the local memory 1315(1). For the example in which each PE 140(1)-140(m) averages the corresponding digital values, each PE 140(1)-140(m) can store the corresponding average digital value in the corresponding local memory 1315(1)-1315(L).

The processed digital values in the local memories 1315(1)-1315(L) can be written into the global memory 1320. For example, the processed digital values in each local memory may be assigned to one or more corresponding addresses in the global memory 1320. In this example, the digital values in each local memory are written to the one or more addresses in the global memory 1320 assigned to that local memory. It should be appreciated that the global memory 1320 includes read/write circuits (not shown) for writing digital values into the global memory 1320 and for outputting digital values from the global memory 1320 to the processor 1330 for processing, as discussed further below. The processed digital values in the local memories 1315(1)-1315(L) may be written into the global memory 1320 sequentially and/or in parallel.

Therefore, the digital values in the global memory 1320 are processed by the PEs 140(1)-140(m) before being further processed by the processor 1330 (for example, a microprocessor). This reduces the amount of processing performed by the processor 1330. For the example in which the PEs 140(1)-140(m) average the digital values of the corresponding channels, the processor 1330 can process the average digital values generated by the PEs 140(1)-140(m) instead of the raw digital values output from the ADCs 135(1)-135(m). The averaging reduces the number of digital values that the processor 1330 needs to process, thereby reducing the processing load on the processor 1330. In other words, part of the processing load is offloaded to the PEs 140(1)-140(m), which can perform the processing that can be performed at the channel level (for example, averaging, filtering, etc.).

The processor 1330 may read the digital values in the global memory 1320 and process the read digital values. Because the digital values in the global memory 1320 can come from all of the channels 1312(1)-1312(m), the digital values in the global memory 1320 provide the processor 1330 with a global view of the touch panel. For example, the processor 1330 may process the digital values to calculate the positions of multiple fingers on the touch panel. In this example, the processor 1330 may calculate the positions of the fingers on the touch panel based on the changes in the capacitances (for example, mutual capacitances and/or self capacitances) of the touch panel indicated by the digital values.
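As one plausible illustration of such a position computation (the disclosure does not prescribe a particular algorithm), a weighted centroid can be taken over the channels around the peak response:

```python
# Illustrative position estimate from per-channel capacitance changes.
# A weighted centroid over the channels near the peak response is shown
# as one plausible approach; it is not mandated by the description.

def centroid_position(deltas, pitch_mm):
    """deltas: compensated digital values per receiving line (finger -> larger)."""
    peak = max(range(len(deltas)), key=lambda i: deltas[i])
    lo, hi = max(0, peak - 1), min(len(deltas), peak + 2)
    window = range(lo, hi)
    total = sum(deltas[i] for i in window)
    return pitch_mm * sum(i * deltas[i] for i in window) / total

deltas = [2, 5, 40, 90, 55, 6, 1]               # response peaks around line 3
print(centroid_position(deltas, pitch_mm=4.0))  # ~12.3 mm along the panel
```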
In one example, the processor 1330 may process the digital values of multiple frames to track the movement of one or more fingers on the touch panel. In this example, the slices 145(1)-145(m) generate the digital values of each frame by receiving sensor signals from the touch panel via the channels, converting the received sensor signals into raw digital values (i.e., the digital values generated by the ADCs 135(1)-135(m) in the slices 145(1)-145(m)), and processing the raw digital values to generate the digital values of the frame. The digital values of each frame can be written into the global memory 1320. The digital values of the frames can be generated at a predetermined frame rate. In this example, the processor 1330 processes the digital values of each frame to determine the position of one or more fingers on the touch panel for that frame. Then, the processor can process the positions of the one or more fingers over multiple frames to track the movement of the one or more fingers over the multiple frames (for example, to detect user gestures such as sliding, pinching, spreading, etc.).

Therefore, the processing architecture 1305 distributes processing between the PEs 140(1)-140(m) and the processor 1330. For example, the PEs 140(1)-140(m) may perform digital processing (e.g., averaging, filtering, etc.) on the digital values from the corresponding ADCs 135(1)-135(m) to generate processed digital values. Then, the processor 1330 may perform additional digital processing on the processed digital values (for example, to determine the positions of multiple fingers on the touch panel, to track the movement of one or more fingers on the touch panel, etc.).

In the above example, each PE 140(1)-140(m) processes the digital values of the corresponding channel. However, it should be appreciated that the present disclosure is not limited to this example. For example, a PE may also perform digital processing on digital values from adjacent channels (e.g., channels in the same subset), as discussed further below with reference to FIG. 16.

FIG. 14 shows an exemplary embodiment of the PE 140 according to certain aspects of the present disclosure. Each of the PEs 140(1)-140(m) shown in FIG. 13 can be implemented using the exemplary PE 140 shown in FIG. 14. The PE 140 includes a first multiplexer 1410 (labeled "Mux_A" in FIG. 14), a second multiplexer 1420 (labeled "Mux_B" in FIG. 14), an arithmetic logic unit (ALU) 1430, and a rotator 1440. It should be appreciated that the PE 140 may also include additional elements in addition to the elements shown in FIG. 14.

The first multiplexer 1410 has a first input 1412, a second input 1414, a third input 1416, and a fourth input 1418. The first input 1412 can receive a digital value (labeled "Mem_A") from the corresponding local memory, the second input 1414 can receive an overflow signal (labeled "oflow"), the third input 1416 can receive a digital value (labeled "ALUreg") from a register (not shown), and the fourth input 1418 may be coupled to the output of the corresponding ADC (i.e., the ADC in the same slice as the PE 140). The first multiplexer 1410 may include one or more additional inputs. In operation, the first multiplexer 1410 is configured to select one of the inputs of the first multiplexer 1410 according to a first selection instruction (labeled "Sel_A") received at the selection input 1417, and couple the selected input to the first input 1415 of the ALU 1430, as discussed further below.
The second multiplexer 1420 has a first input 1422 and a second input 1424. The first input 1422 may receive a digital value (labeled "Mem_B") from the corresponding local memory, which may be different from the digital value received by the first input 1412 of the first multiplexer 1410. The second input 1424 may be coupled to the output 1445 of the PE 140. The second multiplexer 1420 may include one or more additional inputs. In operation, the second multiplexer 1420 is configured to select one of the inputs of the second multiplexer 1420 according to a second selection instruction (labeled "Sel_B") received at the selection input 1427, and couple the selected input to the second input 1425 of the ALU 1430, as discussed further below.

The ALU 1430 is configured to receive a first operand at the first input 1415 from the first multiplexer 1410 and a second operand at the second input 1425 from the second multiplexer 1420, and to perform arithmetic and/or logic operations on the operands according to an operation instruction (labeled "Opcode") received at the input 1437. For example, the ALU 1430 may be configured to perform any of a plurality of arithmetic and/or logic operations (e.g., addition, subtraction, etc.) on the first operand and the second operand. In this example, the operation instruction Opcode (also referred to as an operation selection code) selects which of the plurality of arithmetic and/or logic operations is performed by the ALU 1430. The ALU 1430 outputs the results of the one or more arithmetic and/or logic operations at the output 1435.

The rotator 1440 is coupled to the output 1435 of the ALU 1430 and is configured to rotate (shift) the output value of the ALU 1430 according to a shift instruction (labeled "Shift") received at the input 1447. The rotator 1440 outputs the resulting shifted output value at the output 1445. If there is an overflow at the output 1445, the rotator 1440 may also output an overflow signal (labeled "oflow").

In operation, the SIMD controller 150 programs the PE 140 to perform one or more operations by inputting a set of instructions to the PE 140, where the set of instructions causes the PE to perform the desired one or more operations. The instruction set may include a first selection instruction Sel_A for the first multiplexer 1410, a second selection instruction Sel_B for the second multiplexer 1420, an operation instruction Opcode for the ALU 1430, and/or a shift instruction Shift for the rotator 1440. This set of instructions can be thought of as parts of a single, longer instruction. The SIMD controller 150 can input the same set of instructions to the PEs 140(1)-140(m) of the slices in parallel, so that the PEs 140(1)-140(m) perform the same digital processing on their respective digital values in parallel.

The SIMD controller 150 can program the PE 140 to perform a series of operations sequentially in order to perform a more complex operation. In this example, each operation in the series of operations may be specified by an instruction set (for example, the first selection instruction Sel_A, the second selection instruction Sel_B, the operation instruction Opcode, and/or the shift instruction Shift), where the SIMD controller 150 sequentially inputs the instruction set for each operation to the PE 140 to perform the more complex operation. For example, the SIMD controller 150 may program the PE 140 to perform a series of addition and/or shift operations to perform multiplication, division, averaging, filtering, FFT, etc.
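A behavioral sketch of this datapath follows. The mux-source names and opcode encodings are hypothetical; only the structure (Mux_A and Mux_B feeding the ALU, whose result passes through the rotator and can be fed back) follows the description above. It shows a multiply built from shift and add micro-operations, as the text suggests:

```python
# Behavioral sketch of the PE datapath (Mux_A, Mux_B, ALU, rotator).
# Opcode encodings and mux-source names are hypothetical; only the
# structure (two muxes -> ALU -> rotator -> feedback) follows the text.

WIDTH = 16
MASK = (1 << WIDTH) - 1

def step(state, instr):
    """Execute one (Sel_A, Sel_B, Opcode, Shift) instruction."""
    a = state[instr["sel_a"]]            # Mux_A selects the first operand
    b = state[instr["sel_b"]]            # Mux_B selects the second operand
    op = instr["opcode"]
    r = (a + b) if op == "add" else (a - b) if op == "sub" else a
    r &= MASK
    s = instr["shift"] % WIDTH           # rotator: rotate left by 'shift'
    state["out"] = ((r << s) | (r >> (WIDTH - s))) & MASK if s else r
    return state

# Multiply mem_a by 5 (= x*4 + x) as a sequence of shift/add operations,
# feeding the rotator output back through Mux_B as the text describes.
state = {"mem_a": 7, "mem_b": 0, "out": 0}
state = step(state, {"sel_a": "mem_a", "sel_b": "mem_b", "opcode": "add", "shift": 2})  # out = 7<<2 = 28
state = step(state, {"sel_a": "mem_a", "sel_b": "out",   "opcode": "add", "shift": 0})  # out = 7+28 = 35
print(state["out"])  # 35 == 7 * 5
```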
In this example, the PE 140 may receive the digital values directly from the output of the corresponding ADC at the input 1418. Alternatively, the digital values may first be stored in the corresponding local memory. For example, the output of the corresponding ADC may be coupled to the corresponding local memory to store the digital values in the corresponding local memory. In this case, the PE 140 may receive the digital values at the input 1412 and/or the input 1422 from the corresponding local memory. The PE 140 can also receive digital values from a combination of the output of the corresponding ADC and the corresponding local memory. The output 1445 of the PE 140 may be stored in the corresponding local memory and/or output to the global memory 1320. The output of the PE 140 may also be fed back to the ALU 1430 via the input 1424 (for example, when the output is an intermediate result of a series of operations).

In one example, the PE 140 may also subtract the baseline digital code (for example, for a single-ended sensing mode) from a digital value to generate a compensated digital value. In this example, the first multiplexer 1410 and the second multiplexer 1420 can input the digital value and the baseline digital code to the ALU 1430, and the ALU 1430 can be instructed to perform subtraction to subtract the baseline digital code from the digital value. The baseline digital code can be received from a register via the input 1416 or another input.

It should be appreciated that the exemplary PE 140 may include additional elements in addition to the elements shown in FIG. 14. For example, the PE 140 may include one or more load registers (not shown), where one or more digital values from the corresponding local memory or another source are input to one or more of the multiplexers 1410 and 1420 via the one or more load registers. In this example, the one or more load registers may be used to control the timing of the one or more digital values input to the one or more multiplexers.

In some embodiments, a new type of instruction (referred to as a phase instruction) is provided for programming the switch configuration of the switched capacitor networks in the AFEs 115(1)-115(m) of the slices 145(1)-145(m). In one example, the phase instruction includes a plurality of node values, where each node value corresponds to a corresponding node in the switched capacitor network of each AFE and specifies the connection of the corresponding node. Using an example in which the switched capacitor network of each AFE is implemented using the switched capacitor network 124 shown in FIG. 3, one of the node values may correspond to the node 315 in the switched capacitor network of each AFE. In this example, the node value of the node 315 may specify whether the node 315 is connected to the reference voltage Vr1, the reference voltage Vr2, the receiving line Rx(n-1), and/or another element (not shown). As discussed further below, a decoder converts the node values into the corresponding switch control signals to realize the connections specified by the node values. For example, if the node value of the node 315 specifies that the node 315 is connected to the reference voltage Vr1, the decoder converts the node value into switch control signals that open the switch 312(1) and close the switches 316(1) and 322(1) in the switched capacitor network of each AFE. Therefore, the node values in the phase instruction allow a programmer to program the connections of the nodes in the switched capacitor network of each AFE at a level of abstraction that does not require detailed knowledge of the switches.
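A sketch of this phase-instruction idea: a decode table expands each (node, connection) value into per-switch controls. The node names, connection names, and switch mapping below are illustrative assumptions, not the actual instruction format:

```python
# Hypothetical phase-instruction decoding sketch. A phase instruction carries
# one node value per node; a decode table expands each (node, connection)
# pair into per-switch controls. Names and mappings are illustrative only.

# Switches associated with each node, and which of them close per connection.
NODE_SWITCHES = {"node315": ("S312_1", "S316_1", "S322_1")}
DECODE_TABLE = {
    ("node315", "Vr1"): {"S316_1", "S322_1"},   # e.g., connect node 315 to Vr1
    ("node315", "Rx(n-1)"): {"S312_1"},         # e.g., connect node 315 to the Rx line
}

def decode_phase_instruction(phase_instr):
    """phase_instr maps node name -> specified connection; returns switch controls."""
    controls = {}
    for node, conn in phase_instr.items():
        closed = DECODE_TABLE[(node, conn)]
        for sw in NODE_SWITCHES[node]:
            controls[sw] = sw in closed        # True = close, False = open
    return controls

print(decode_phase_instruction({"node315": "Vr1"}))
# {'S312_1': False, 'S316_1': True, 'S322_1': True}
```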
FIG. 15 shows an exemplary system 1505 for programming the switch configurations of the switched capacitor networks of the AFEs 115(1)-115(m) according to phase instructions. The system 1505 may be part of the SIMD controller 150. The system 1505 includes a decoder 1510, an instruction register 1520, an instruction memory 1530, and an instruction controller 1540.

The instruction memory 1530 may include a plurality of phase instructions, where each phase instruction specifies the switch configuration of a particular phase. For example, a first one of the phase instructions can specify the switch configuration of the sampling phase, a second one of the phase instructions can specify the switch configuration of the charge transfer phase, a third one of the phase instructions can specify the switch configuration of the reset phase, etc. As discussed further below, one phase instruction can be loaded into the instruction register 1520 at a time to realize the switch configuration specified by that phase instruction.

The decoder 1510 is configured to convert the phase instruction currently in the instruction register 1520 into the corresponding switch control signals S1-Sn, so as to realize the switch configuration specified by the phase instruction. For example, if the node value of the node 315 in the phase instruction specifies that the node 315 is connected to the reference voltage Vr1, the decoder 1510 converts the node value into corresponding switch control signals, which open the switch 312(1) and close the switches 316(1) and 322(1) in the switched capacitor network of each AFE. The decoder 1510 may be implemented using hard-wired logic and/or programmable logic including combinational logic, latches, multiplexers, or any combination thereof. Each of the switch control signals S1-Sn can control a corresponding switch in the switched capacitor network of each AFE. For example, a switch control signal may be asserted high (e.g., logic one) to close the corresponding switch and deasserted low (e.g., logic zero) to open the corresponding switch, or vice versa.

In the example in FIG. 15, one decoder 1510 is shown, which controls the switches of the switched capacitor network of each AFE according to the phase instruction in the instruction register 1520. However, it should be appreciated that the decoder 1510 may include multiple decoders (for example, one decoder for each switched capacitor network or for each subset of the switched capacitor networks), where each decoder controls the switches of one switched capacitor network or of a subset of the switched capacitor networks according to the phase instruction in the instruction register 1520.

In operation, the instruction controller 1540 is configured to sequentially load multiple phase instructions from the instruction memory 1530 into the instruction register 1520 to realize a desired switching sequence. For example, in order to collect samples in the differential mutual capacitance sensing mode, the instruction controller 1540 may load the first phase instruction into the instruction register 1520 to realize the switch configuration of the sampling phase. After the sampling phase, the instruction controller 1540 can load the second phase instruction into the instruction register 1520 to realize the switch configuration of the charge transfer phase.
After the charge transfer phase, the instruction controller 1540 may load the third phase instruction into the instruction register 1520 to realize the switch configuration of the reset phase, in order to limit the DC voltage on the touch panel 110 before the next transmit signal (for example, transmit pulse), as discussed above. Therefore, the instruction controller 1540 can sequentially load the multiple phase instructions from the instruction memory 1530 into the instruction register 1520 to execute the multiple phase instructions sequentially. The sequential execution of the phase instructions realizes the desired switching sequence, where each phase instruction specifies the switch configuration of one of the phases in the switching sequence.

Although the system 1505 is described above using the exemplary switched capacitor network 124 shown in FIG. 3 to implement the switched capacitor network in each AFE 115(1)-115(m), it should be appreciated that the system 1505 is not limited to this example. Further, although the system 1505 is described using an example in which the switch configuration of the switched capacitor network of each of the AFEs 115(1)-115(m) is controlled in parallel by the system 1505, it should be appreciated that this need not be the case. For example, the system 1505 can control the switch configurations of the switched capacitor networks of the AFEs in a subset of the AFEs 115(1)-115(m) in parallel. For example, in some applications, only a subset of the AFEs 115(1)-115(m) may be needed. In this case, the subset of AFEs can be enabled, and the remaining AFEs can be disabled. In this example, the system 1505 can control, in parallel, the switch configuration of the switched capacitor network of each AFE in the subset of AFEs that is enabled.

FIG. 16 shows an exemplary power management architecture 1605 according to certain aspects of the present disclosure. The power management architecture 1605 can be used with the exemplary processing architecture 1305 shown in FIG. 13. For ease of description, only a subset of the subsets 1310(1)-1310(L) is shown in FIG. 16. The power management architecture 1605 includes a first power gate 1610, a second power gate 1615, a third power gate 1625, a first clock gate 1630, a second clock gate 1640, a third clock gate 1650, a power controller 1650, and a timer 1655.

The first power gate 1610 is configured to control the power to the slices 145(1)-145(m). In this regard, the first power gate 1610 is coupled between the power supply rail Vdd and the slices 145(1)-145(m), and can be implemented using one or more power switches. The power rail provides a supply voltage from a power source (for example, a power management integrated circuit (PMIC)). When the first power gate 1610 is turned on, the first power gate 1610 couples the power rail Vdd to the slices 145(1)-145(m), thereby supplying power to the slices 145(1)-145(m). When the first power gate 1610 is turned off, the first power gate 1610 decouples the power supply rail Vdd from the slices 145(1)-145(m), thereby power collapsing the slices 145(1)-145(m). As discussed further below, the slices can be power collapsed when they are not in use to reduce power leakage, thereby saving power. The first power gate 1610 can also control the power to the local memories 1315(1)-1315(L).

Although FIG. 16 shows an example of using one power gate to control the power to the slices 145(1)-145(m), it should be appreciated that the present disclosure is not limited to this example.
For example, the power management architecture 1605 may include a separate power gate for each subset 1310(1)-1310(L) of the slices 145(1)-145(m). This allows the subsets to be power gated (power collapsed) independently. For example, some applications may only require one subset. In this example, the power gate that controls the power to the subset being used is turned on, while the power gates that control the power to the remaining subsets are turned off.

The second power gate 1615 is configured to control the power to the global memory 1320 and the processor 1330. In this regard, the second power gate 1615 is coupled between the power rail Vdd and the global memory 1320, and between the power rail Vdd and the processor 1330. The second power gate 1615 may be implemented using one or more power switches. When the second power gate 1615 is turned on, the second power gate 1615 couples the power rail Vdd to the global memory 1320 and the processor 1330, thereby powering the global memory 1320 and the processor 1330. When the second power gate 1615 is turned off, the second power gate 1615 decouples the power supply rail Vdd from the global memory 1320 and the processor 1330, thereby power collapsing the global memory 1320 and the processor 1330.

Although FIG. 16 shows an example in which one power gate is used to control the power to the global memory 1320 and the processor 1330, it should be appreciated that the present disclosure is not limited to this example. For example, the power management architecture 1605 may include separate power gates for the global memory 1320 and the processor 1330 to power gate the global memory 1320 and the processor 1330 independently.

The third power gate 1625 is configured to control the power to the SIMD controller 150. In this regard, the third power gate 1625 is coupled between the power supply rail Vdd and the controller 150, and can be implemented using one or more power switches. When the third power gate 1625 is turned on, the third power gate 1625 couples the power rail Vdd to the controller 150 to supply power to the controller 150. When the third power gate 1625 is turned off, the third power gate 1625 decouples the power rail Vdd from the controller 150, thereby power collapsing the controller 150.

The first clock gate 1630 is configured to control the first clock signal (labeled "Clk_1") to the slices 145(1)-145(m). The first clock signal Clk_1 may be used to time the operations of the AFEs 115(1)-115(m), the ADCs 135(1)-135(m), and/or the PEs 140(1)-140(m). The first clock signal Clk_1 may come from a phase-locked loop (PLL) or another clock source. When the first clock gate 1630 is enabled, the first clock gate 1630 passes the first clock signal Clk_1 to the slices 145(1)-145(m). When the first clock gate 1630 is disabled, the first clock gate 1630 gates the first clock signal Clk_1 (i.e., blocks the first clock signal Clk_1 from the slices 145(1)-145(m)). This reduces the dynamic power consumption of the slices 145(1)-145(m) by preventing switching activity in the slices 145(1)-145(m). As discussed further below, the first clock gate 1630 may gate the first clock signal Clk_1 when the slices are not in use, to save power. The first clock gate 1630 can also control the first clock signal Clk_1 to the local memories 1315(1)-1315(L).

Although FIG. 16 shows an example in which one clock gate is used to control the first clock signal Clk_1 to the slices 145(1)-145(m), it should be appreciated that the present disclosure is not limited to this example.
For example, the power management architecture 1605 may include a separate clock gate for each subset 1310(1)-1310(L) of the slices 145(1)-145(m). This allows the subsets to be clock gated independently. For example, some applications may only require one subset. In this example, the clock gate that controls the clock signal to the subset being used is enabled, while the clock gates that control the clock signals to the remaining subsets can be disabled to reduce dynamic power.

The second clock gate 1640 is configured to control the first clock signal Clk_1 to the global memory 1320 and the processor 1330. The first clock signal Clk_1 may be used to time the operations of the global memory 1320 and the processor 1330. When the second clock gate 1640 is enabled, the second clock gate 1640 passes the first clock signal Clk_1 to the global memory 1320 and the processor 1330. When the second clock gate 1640 is disabled, the second clock gate 1640 gates the first clock signal Clk_1 (i.e., blocks the first clock signal Clk_1 from the global memory 1320 and the processor 1330). As discussed further below, the second clock gate 1640 may gate the first clock signal Clk_1 when the global memory 1320 and the processor 1330 are not in use, to save power.

Although FIG. 16 shows an example in which one clock gate is used to control the clock signal to the global memory 1320 and the processor 1330, it should be appreciated that the present disclosure is not limited to this example. For example, the power management architecture 1605 may include separate clock gates for the global memory 1320 and the processor 1330 to gate the clock signals to the global memory 1320 and the processor 1330 independently.

In the example in FIG. 16, the first clock signal Clk_1 is used to clock the slices 145(1)-145(m) and the processor 1330. However, it should be appreciated that the present disclosure is not limited to this example, and different clock signals may be used for the slices 145(1)-145(m) and the processor 1330. In this case, the first clock gate 1630 is used to selectively gate the clock signal of the slices 145(1)-145(m), and the second clock gate 1640 is used to selectively gate the clock signal of the processor 1330. Therefore, the slices 145(1)-145(m) and the processor 1330 can operate in the same clock domain or in different clock domains.

The third clock gate 1650 is configured to control the first clock signal Clk_1 to the controller 150. The first clock signal Clk_1 may be used to time the operation of the controller 150. When the third clock gate 1650 is enabled, the third clock gate 1650 passes the first clock signal Clk_1 to the controller 150. When the third clock gate 1650 is disabled, the third clock gate 1650 gates the first clock signal Clk_1 (i.e., blocks the first clock signal Clk_1 from the controller 150).

In some aspects, one or more devices (e.g., the slices 145(1)-145(m), the processor 1330, and the controller 150) can be put into a sleep state for a predetermined sleep period to save power. The one or more devices can be put into the sleep state by disabling the corresponding clock gate and/or turning off the corresponding power gate. In these aspects, the power controller 1650 is configured to use the timer 1655 to track the amount of time the one or more devices sleep, and to wake the one or more devices at the end of the sleep period.
The power controller 1650 can wake up the one or more devices by enabling the corresponding clock gate and/or turning on the corresponding power gate.

In one example, the timer 1655 includes a counter that is clocked by a second clock signal Clk_2. The frequency of the second clock signal Clk_2 may be lower than the frequency of the first clock signal Clk_1 to reduce the power consumption of the timer 1655. For each cycle (period) of the second clock signal Clk_2, the count value of the counter may be incremented by one.

In one example, the power controller 1650 may start the counter at the beginning of the sleep period. Then, the power controller 1650 may compare the count value of the counter with a sleep count value, where the sleep count value is set according to the predetermined sleep period and may be stored in a register. When the count value of the counter reaches the sleep count value, the power controller 1650 wakes up the one or more devices. The power controller 1650 can also reset the counter for the next sleep period.

In another example, the power controller 1650 sets the count value of the counter to the sleep count value at the beginning of the sleep period. The counter then counts down from the sleep count value. In this example, when the counter counts down to zero, the power controller 1650 wakes up the one or more devices.

The controller 150 may program the sleep count value into the power controller 1650. For example, the controller 150 may execute a sleep instruction (also referred to as an idle instruction), which includes a parameter specifying the sleep count value.

The power controller 1650 allows the controller 150 to put itself into the sleep state for a predetermined sleep period to save power. For example, the controller 150 may program the sleep count value corresponding to the sleep period into the power controller 1650, and instruct the power controller 1650 to put the controller 150 into the sleep state and wake up the controller 150 when the sleep period ends. The power controller 1650 may then disable the third clock gate 1650 and/or turn off the third power gate 1625 to put the controller 150 into the sleep state. For ease of description, the connections between the power controller and the clock and power gates are not explicitly shown in FIG. 16. At the end of the sleep period, the power controller 1650 enables the third clock gate 1650 and/or turns on the third power gate 1625 to wake up the controller 150. Exemplary situations in which the controller 150 can put itself into the sleep state are discussed below according to certain aspects.

In some aspects, the controller 150 can put the touch panel interface in a low power mode to save power. For example, the controller 150 may place the touch panel interface in the low power mode when a user's finger is not detected within a predetermined period of time, when a mobile device incorporating the touch panel times out, etc. In the low power mode, the controller 150 can keep the slices 145(1)-145(m) in the sleep state most of the time, and periodically wake up the slices 145(1)-145(m) for a short duration at a time to monitor the touch panel for the presence of a user's finger. During each short duration, the controller 150 may operate the slices 145(1)-145(m) in a self-capacitance sensing mode to detect the presence of the user's finger.
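The sleep timekeeping described above can be sketched as a simple count-down; the clock frequency and the register interface are hypothetical, and in hardware the counter is clocked by Clk_2 rather than a software loop:

```python
# Sketch of the count-down sleep timer described above. In hardware this is
# a counter clocked by the slower Clk_2; here one loop iteration stands in
# for one Clk_2 cycle. The frequency and register interface are hypothetical.

CLK2_HZ = 32_768  # illustrative low-frequency sleep clock

def sleep_count_for(duration_s, clk_hz=CLK2_HZ):
    """Convert a desired sleep period into the value set by the sleep instruction."""
    return int(duration_s * clk_hz)

def run_sleep_period(sleep_count, wake_callback):
    counter = sleep_count          # loaded at the start of the sleep period
    while counter > 0:             # one iteration per Clk_2 cycle
        counter -= 1
    wake_callback()                # enable clock gate / turn on power gate

run_sleep_period(sleep_count_for(0.050), lambda: print("wake slices and controller"))
```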
As discussed above, the self-capacitance sensing mode generally does not resolve the position of the user's finger with the same accuracy as a mutual capacitance sensing mode. However, the self-capacitance sensing mode consumes less power and may be sufficient to detect the presence of the user's finger on the touch panel in order to determine whether to exit the low power mode.

When the user's finger is detected, the controller 150 takes the touch panel interface out of the low power mode and operates the touch panel interface in a normal mode. In the normal mode, the controller 150 can operate the slices 145(1)-145(m) in a mutual capacitance sensing mode to detect the positions of one or more fingers on the touch panel and/or track the movement of one or more fingers on the touch panel.

As discussed above, in the low power mode, the controller 150 can keep the slices 145(1)-145(m) in the sleep state most of the time, and periodically wake up the slices 145(1)-145(m) to monitor the touch panel for the presence of a user's finger. In one example, the controller 150 may set the sleep time between wake-ups by setting the sleep count value of the timer 1655 accordingly.

In the low power mode, if the user's finger is not detected after monitoring the touch panel for the short duration, the controller 150 can instruct the power controller 1650 to put the slices 145(1)-145(m) and the controller 150 to sleep. The power controller 1650 can put the slices 145(1)-145(m) into the sleep state by disabling the first clock gate 1630 and/or turning off the first power gate 1610, and put the controller 150 into the sleep state by disabling the third clock gate 1650 and/or turning off the third power gate 1625. The power controller 1650 can then use the timer 1655 to track the amount of time that the slices 145(1)-145(m) and the controller 150 are in the sleep state, as discussed above. At the end of the sleep time, the power controller 1650 wakes up the slices 145(1)-145(m) by enabling the first clock gate 1630 and/or turning on the first power gate 1610, and wakes up the controller 150 by enabling the third clock gate 1650 and/or turning on the third power gate 1625.

Then, the controller 150 operates the slices 145(1)-145(m) in the self-capacitance sensing mode for the short duration to monitor the touch panel for the presence of a user's finger. Exemplary techniques for detecting a user's finger are discussed further below. If the user's finger is not detected within the short duration, the controller 150 instructs the power controller 1650 to return the slices 145(1)-145(m) and the controller 150 to the sleep state, in which case the above process repeats. If the user's finger is detected, the controller 150 takes the touch panel interface out of the low power mode, as discussed above.

As discussed above, after waking up, the controller 150 operates the slices 145(1)-145(m) in the self-capacitance sensing mode for a short duration to monitor the touch panel for the presence of a user's finger. In this regard, the receiver in each slice 145(1)-145(m) can receive one or more sensor signals from the corresponding channel, and output the received one or more sensor signals as one or more output voltages to the corresponding ADC. The ADC 135(1)-135(m) in each slice 145(1)-145(m) converts the one or more output voltages of the corresponding receiver into one or more digital values, which can be input to the corresponding PE 140(1)-140(m).
If the single-ended self-capacitance sensing mode is used, each PE can subtract the corresponding baseline digital code from the corresponding one or more digital values.

Then, each PE 140(1)-140(m) can compare each of the corresponding one or more digital values with a detection threshold, and generate a detection indicator if one or more of the corresponding digital values are above the detection threshold. Alternatively, each PE 140(1)-140(m) can average the corresponding one or more digital values, compare the resulting average with the detection threshold, and generate a detection indicator if the average is above the detection threshold. The detection indicator may indicate the detection of a user's finger on the corresponding channel. If a PE generates the detection indicator, the PE may write the detection indicator into the corresponding local memory or another memory accessible by the controller 150.

The controller 150 can then look for any detection indicators in the local memories or the other memory. In one example, if the controller 150 finds one or more detection indicators, the controller 150 may take the touch panel interface out of the low power mode. In another example, the controller may require two or more detection indicators corresponding to adjacent channels before taking the touch panel interface out of the low power mode. In this way, an erroneous detection on a single channel due to noise will not cause the controller 150 to take the touch panel interface out of the low power mode. This example assumes that the receiving lines of adjacent channels are spaced closely enough that the presence of the user's finger will be detected on more than one channel. A sketch of this detection logic is given below.

In the above example, each PE 140(1)-140(m) processes the one or more digital values of the corresponding channel, and generates a detection indicator if the user's finger is detected on the corresponding channel based on the one or more digital values of the corresponding channel. However, it should be appreciated that the present disclosure is not limited to this example. For example, a PE may perform digital processing on digital values from adjacent channels (e.g., channels in the same subset) to detect the presence of a user's finger on the adjacent channels, as discussed further below.

In some aspects, in the low power mode, one PE in each subset 1310(1)-1310(L) is used to process the digital values from the channels corresponding to the subset. In these aspects, the receiver in each slice 145(1)-145(m) receives one or more sensor signals from the corresponding channel, and outputs the received one or more sensor signals as one or more output voltages to the corresponding ADC. The ADC 135(1)-135(m) in each slice 145(1)-145(m) converts the one or more output voltages of the corresponding receiver into one or more digital values, and outputs the one or more digital values to the corresponding local memory. For example, each of the ADCs 135(1)-135(4) in the subset 1310(1) outputs the corresponding one or more digital values to the local memory 1315(1).
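As forecast above, a sketch of the wake-up decision (thresholding per channel, then requiring detections on adjacent channels to reject noise); the threshold and sample values are illustrative:

```python
# Sketch of the low-power wake-up decision described above. Each channel's
# averaged, baseline-compensated digital value is thresholded into a
# detection indicator; adjacent indicators are required to reject noise.
# The threshold and sample values are illustrative.

DETECTION_THRESHOLD = 30

def detection_indicators(channel_averages, threshold=DETECTION_THRESHOLD):
    return [avg > threshold for avg in channel_averages]

def should_exit_low_power(indicators):
    """Require detections on at least two adjacent channels."""
    return any(a and b for a, b in zip(indicators, indicators[1:]))

averages = [3, 5, 55, 4, 2, 6]    # lone spike on channel 2: likely noise
print(should_exit_low_power(detection_indicators(averages)))  # False

averages = [3, 48, 55, 40, 2, 6]  # finger spans adjacent channels
print(should_exit_low_power(detection_indicators(averages)))  # True
```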
For each subset 1310(1)-1310(L), one of the PEs in the subset processes the digital values of the channels corresponding to the subset. For example, for the subset 1310(1), one of the PEs 140(1)-140(4) in the subset 1310(1) processes the digital values of the channels 1312(1)-1312(4), which correspond to the subset 1310(1).

For example, for each subset 1310(1)-1310(L), the one PE in the subset can read the digital values of the channels corresponding to the subset from the corresponding local memory, and average the digital values to produce an average digital value. The PE may then compare the average with the detection threshold and generate a detection indicator if the average is above the detection threshold. In this example, the detection indicator indicates that a user's finger is detected on the channels corresponding to the subset. The detection indicator may be written into the corresponding local memory or another memory accessible by the controller 150.

The controller 150 can then look for any detection indicators in the local memories or the other memory. If the controller 150 finds one or more detection indicators, the controller 150 may take the touch panel interface out of the low power mode.

Therefore, in this example, one PE in each subset performs digital processing on the digital values from the channels corresponding to the subset to detect the presence of a user's finger. The remaining PEs in each subset can be disabled to save power. For example, for each subset, the power management architecture 1605 may include separate power gates for controlling the power to the one PE in the subset and to the remaining PEs in the subset. In this example, for each subset, the power controller 1650 turns on the power gate of the one PE in the subset, and turns off the power gates of the remaining PEs in the subset to disable the remaining PEs in the subset.

In the above example, the global memory 1320 and the processor 1330 are not used in the low power mode to monitor the touch panel for the presence of a user's finger. Thus, the global memory 1320 and the processor 1330 can be disabled in the low power mode to save power. In this example, the power controller 1650 and/or the controller 150 may disable the global memory 1320 and the processor 1330 by disabling the second clock gate 1640 and/or turning off the second power gate 1615.

FIG. 17 illustrates a touch panel processing method 1700 according to aspects of the present disclosure. The method 1700 may be performed by the touch panel interface 112 shown in FIG. 1.

At step 1710, multiple receivers are used to receive sensor signals from the touch panel, where each of the receivers is coupled to one or more receiving lines of the touch panel, and each of the receivers includes a switched capacitor network and an amplifier. For example, the multiple receivers may correspond to the receivers 120 shown in FIG. 1. Each of the receivers may be coupled to one receiving line (for example, for single-ended sensing) or two adjacent receiving lines (for example, for differential sensing).

At step 1720, the switches in the switched capacitor network of each of one or more of the receivers are switched to operate each of the one or more receivers in one of multiple different receiver modes. For example, the multiple different receiver modes may include two or more of the following: the differential mutual capacitance sensing mode, the single-ended mutual capacitance sensing mode, the differential self-capacitance sensing mode, the single-ended self-capacitance sensing mode, and the charge amplifier mode.
In one example, the switches (e.g., the switches shown in FIG. 3) in the switched capacitor network (e.g., the switched capacitor network 124) are switched according to a switching sequence to operate the receiver in one of the receiver modes described above. The switching sequence may include a sampling phase, a charge transfer phase, and/or one or more additional phases.

FIG. 18 illustrates another example of a touch panel processing method 1800 according to aspects of the present disclosure. The method 1800 may be executed by the processing architecture 1305.

At step 1810, a plurality of sensor signals are received from the touch panel, where each sensor signal of the plurality of sensor signals corresponds to a corresponding channel of multiple channels of the touch panel. For example, each of the multiple channels of the touch panel may correspond to a corresponding receiving line of the touch panel. In another example, each of the multiple channels of the touch panel may correspond to a corresponding pair of receiving lines (for example, adjacent receiving lines) of the touch panel. In this example, the sensor signal of each channel may include two sensor signals on the corresponding pair of receiving lines.

At step 1820, each of the received sensor signals is converted into one or more corresponding digital values. For example, each received sensor signal can be converted into one or more corresponding digital values by a corresponding ADC (for example, a corresponding one of the ADCs 135(1)-135(m)). As discussed above, the received sensor signal may be in the form of a voltage, which is a function of one or more capacitances (e.g., mutual capacitance and/or self capacitance) of the touch panel.

At step 1830, for each of the received sensor signals, a corresponding one of multiple processing engines is used to perform digital processing on the one or more corresponding digital values to generate one or more corresponding processed digital values. The digital processing may include at least one of the following: demodulation, Walsh decoding, averaging, or filtering. The multiple processing engines may correspond to two or more of the PEs 140(1)-140(m).

At step 1840, additional processing is performed on the processed digital values. For example, the additional processing may be performed by a processor (for example, the processor 1330), and may include calculating the positions of a plurality of user fingers on the touch panel based on the processed digital values.

It should be appreciated that although various aspects of the present disclosure are discussed above using the example of a user's finger, the present disclosure is not limited to this example. For example, the present disclosure can be used to detect the presence of a stylus or another touching object.

In addition, it should be appreciated that the present disclosure is not limited to the specific terms used above to describe the various aspects of the present disclosure. For example, a clock gate may also be called a clock gating unit or another term, and a power gate may also be called a power gate switch or another term.

In this disclosure, the word "exemplary" is used to mean "serving as an example, instance, or illustration." Any embodiment or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the present disclosure.
It should be appreciated that although various aspects of the present disclosure are discussed above using the example of a user's finger, the present disclosure is not limited to this example. For example, the present disclosure can be used to detect the presence of a stylus or another touching object.

In addition, it should be appreciated that the present disclosure is not limited to the specific terms used above to describe various aspects of the present disclosure. For example, a clock gate may also be called a clock gating unit or another term, and a power gate may also be called a power gate switch or another term.

In this disclosure, the word "exemplary" is used to mean "serving as an example, instance, or illustration." Any embodiment or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the present disclosure. Likewise, the term "aspects" does not require that all aspects of the present disclosure include the discussed feature, advantage, or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two components.

It should be understood that the present disclosure is not limited to the specific order or hierarchy of steps in the methods disclosed herein. Based on design preferences, the specific order or hierarchy of steps in a method can be rearranged. The accompanying method claims present the elements of the various steps in a sample order and, unless specifically stated therein, are not meant to be limited to the specific order or hierarchy presented.

The steps of the methods described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in a computing system.

The previous description is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to the present disclosure will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other variations without departing from the spirit or scope of the present disclosure. Thus, the present disclosure is not intended to be limited to the examples described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
An integrated circuit, and a method of forming an integrated circuit, including a first dielectric layer having a surface, a plurality of first trenches defined in the dielectric layer surface, and a plurality of first wires, wherein each of the first wires is formed in a corresponding one of the first trenches. The integrated circuit also includes a plurality of second trenches defined in the dielectric layer surface, and a plurality of second wires, wherein each of the second wires is formed in a corresponding one of the second trenches. Further, the first wires comprise a first material having a first bulk resistivity and the second wires comprise a second material having a second bulk resistivity, wherein the first bulk resistivity and the second bulk resistivity are different.
1. A method of depositing wires, comprising: forming a plurality of first trenches in a surface of a dielectric layer; forming a plurality of first wires, wherein each of the first wires is formed in a corresponding one of the first trenches, and the first wires are formed of a first material having a first bulk resistivity; forming a plurality of second trenches in the surface of the dielectric layer; and forming a plurality of second wires, wherein each of the second wires is formed in a corresponding one of the second trenches, and the second wires are formed of a second material having a second bulk resistivity, wherein the first bulk resistivity and the second bulk resistivity are different.
2. The method of claim 1, further comprising: prior to forming the plurality of first trenches in the dielectric layer, applying a first hard mask over the dielectric layer, wherein the plurality of first trenches are formed in the first hard mask and the dielectric layer.
3. The method of claim 2, further comprising: removing a portion of the first material from each of the first wires to form a first recess in each of the plurality of first trenches; and applying a second hard mask to the first recesses prior to forming the plurality of second trenches in the dielectric layer, wherein the second trenches extend through the first hard mask.
4. The method of claim 3, further comprising: removing a portion of the second material to form a second recess in each of the plurality of second trenches; and applying a third hard mask to the second recesses.
5. The method of claim 4, further comprising: forming a second dielectric layer over the first dielectric layer, the first hard mask, the second hard mask, and the third hard mask; forming a first via opening in the second dielectric layer and exposing a portion of the second hard mask; removing the exposed portion of the second hard mask to form a second via opening; and filling the first via opening and the second via opening with the first material to form a via.
6. The method of claim 4, further comprising: forming a second dielectric layer over the first dielectric layer, the first hard mask, the second hard mask, and the third hard mask; forming a first via opening in the second dielectric layer and exposing a portion of the third hard mask; selectively removing the exposed portion of the third hard mask to form a second via opening; and filling the first via opening and the second via opening with the second material to form a via.
7. The method of claim 1, further comprising depositing a barrier layer in the plurality of second trenches prior to forming the second wires.
8. The method of claim 1, wherein the plurality of first trenches and the plurality of second trenches are formed by photolithography of the dielectric layer.
9. The method of claim 1, wherein the plurality of first trenches and the plurality of second trenches are formed by spacer-based pitch division.
10. The method of claim 1, wherein the first wires are parallel to the second wires.
11. The method of claim 1, wherein the first material is deposited in the plurality of first trenches by vapor deposition.
12. The method of claim 1, wherein the second material is deposited in the second trenches by vapor deposition.
13. The method of claim 1, wherein the first bulk resistivity is 5.0 μΩ·cm or more at 20 °C, and preferably in the range of 5.0 to 8.0 μΩ·cm at 20 °C, and the second bulk resistivity is 4.0 μΩ·cm or less at 20 °C, and preferably in the range of 1.0 to 4.0 μΩ·cm at 20 °C.
14. The method of claim 1, wherein the dielectric layer exhibits a dielectric constant less than 3.9, and preferably in the range of 1.5 to 3.8.
15. An integrated circuit comprising: a first dielectric layer, the first dielectric layer including a surface; a plurality of first trenches defined in the surface of the dielectric layer; a plurality of first wires, wherein each of the first wires is formed in a corresponding one of the first trenches, wherein the first wires comprise a first material having a first bulk resistivity; a plurality of second trenches defined in the surface of the dielectric layer; and a plurality of second wires, wherein each of the second wires is formed in a corresponding one of the second trenches, wherein the second wires comprise a second material having a second bulk resistivity, wherein the first bulk resistivity and the second bulk resistivity are different.
16. The integrated circuit of claim 15, further comprising a hard mask layer including a first hard mask disposed over the first dielectric layer, a second hard mask disposed over the first material, and a third hard mask disposed over the second material.
17. The integrated circuit of claim 16, further comprising: a second dielectric layer disposed over the hard mask layer; a first via opening in the second dielectric layer and a second via opening in the second hard mask adjoining the first via opening; and a via formed of the first material in the first via opening and the second via opening, wherein the via contacts one of the first wires.
18. The integrated circuit of claim 16, further comprising: a second dielectric layer disposed over the hard mask layer; a first via opening in the second dielectric layer and a second via opening in the third hard mask adjoining the first via opening; and a via formed of the second material in the first via opening and the second via opening, wherein the via contacts one of the second wires.
19. The integrated circuit of claim 15, wherein the plurality of first wires are parallel to the plurality of second wires.
20. The integrated circuit of claim 15, wherein the plurality of first wires and the plurality of second wires alternate across the surface of the first dielectric layer.
21. The integrated circuit of claim 15, further comprising a barrier layer deposited between each of the plurality of second trenches and each of the plurality of second wires.
22. The integrated circuit of claim 15, wherein the first wires exhibit a first height and the second wires exhibit a second height, and the first height is different from the second height.
23. The integrated circuit of claim 15, wherein the first bulk resistivity is 5.0 μΩ·cm or more at 20 °C, and preferably in the range of 5.0 to 8.0 μΩ·cm at 20 °C, and the second bulk resistivity is 4.0 μΩ·cm or less at 20 °C, and preferably in the range of 1.0 to 4.0 μΩ·cm at 20 °C.
24. The integrated circuit of claim 15, wherein the first dielectric layer exhibits a dielectric constant less than 3.9, and preferably in the range of 1.5 to 3.8.
25. The integrated circuit of claim 18, wherein the second dielectric layer exhibits a dielectric constant less than 3.9, and preferably in the range of 1.5 to 3.8.
Method of Forming Parallel Wires of Different Metallic Materials by Double Patterning and Filling Techniques

Technical Field

The present disclosure relates to a method of forming parallel wires of different metallic materials by dual patterning and filling techniques.

Background

Electromigration has become relatively prominent with the feature scaling of integrated circuits (especially at critical dimensions below 50 nm) and increased power density. Electromigration is understood to mean the transport of material due to the movement of ions in a conductor. Electromigration may cause hillocks or pores to form in an interconnect and may eventually result in reduced reliability or failure of the circuit. To reduce electromigration and other stress-induced failures, refractory metals continue to be explored for interconnect fabrication. However, refractory metals exhibit increased bulk resistivity, which negatively affects the observed electrical resistance.

In addition, as feature sizes decrease, interconnect delays may exceed gate delays and form a relatively large portion of the total device delay. Interconnect delay is understood to be caused, at least in part, by resistance-capacitance delay. Resistance-capacitance delay (or RC delay) is understood as the delay in signal propagation that varies with resistance, which depends in part on the bulk resistivity of the metal line, and with insulator capacitance, which depends in part on the permittivity of the interlayer dielectric. Materials that exhibit relatively low bulk resistivity are generally more susceptible to electromigration.

Thus, as feature sizes continue to decrease, there remains room for improvement in the design of interconnects and, in some instances, in interconnect designs that balance interconnect delay against resistance to various stresses (e.g., those that cause electromigration and thermomechanical failure).

DRAWINGS

The above-mentioned features and other features of the present disclosure, and the manner in which they are obtained, will become more apparent and better understood by reference to the following description of the embodiments described herein, in which:

FIG. 1 illustrates a top cross-sectional perspective view of an embodiment of a dielectric layer including a plurality of wires formed of different materials, wherein the wires of a first material tend to be parallel to the wires of a second material;

FIG. 2 illustrates a cross-sectional view of an embodiment of a first dielectric layer including a plurality of wires formed of different materials, and a second dielectric layer including a via for connecting one of the wires of the first material and a via for connecting one of the wires of the second material;

FIG. 3 illustrates a flow diagram of an embodiment of a method of forming wires of a first material and wires of a second material in a dielectric layer using photolithography;

FIGS. 4a through 4h illustrate an embodiment of wire formation in a dielectric layer in accordance with the method illustrated in FIG. 3, wherein FIG. 4a illustrates a patterned resist for forming trenches in a first dielectric layer; FIG. 4b illustrates a first set of trenches formed in the dielectric layer; FIG. 4c illustrates a first conductive material and a capping layer deposited in the first set of trenches; FIG. 4d illustrates a first set of wires after planarization of the capping layer; FIG.
4e illustrates a patterned resist for a second set of trenches; FIG. 4f illustrates a second set of trenches formed in the dielectric layer for a second set of wires; FIG. 4g illustrates a second wire material, including a capping layer, deposited in the second set of trenches; and FIG. 4h illustrates the dielectric layer having the second set of wires and the first set of wires after removal of the capping layer;

FIG. 5 illustrates a flow diagram of an embodiment of a method of forming wires of a first material and a second material in a dielectric layer using spacer-based pitch division;

FIGS. 6a through 6k illustrate an embodiment of wire formation in a dielectric layer in accordance with the method illustrated in FIG. 5, wherein FIG. 6a illustrates a patterned resist; FIG. 6b illustrates a first spacer layer formed over the patterned resist; FIG. 6c illustrates the removal of portions of the spacer layer to form spacers on either side of the patterned resist; FIG. 6d illustrates a first set of spacers after removal of the patterned resist; FIG. 6e illustrates a backbone formed from a sacrificial hard mask; FIG. 6f illustrates a second spacer layer; FIG. 6g illustrates a second set of spacers; FIG. 6h illustrates trenches formed in the dielectric layer; FIG. 6i illustrates a first conductive material deposited in the dielectric layer; FIG. 6j illustrates a second set of trenches formed after removal of the capping layer of the first conductive material, formation of the first set of wires, removal of the backbone, and etching of the dielectric layer; and FIG. 6k illustrates a second set of wires formed in the second set of trenches after depositing the second material and removing the capping layer of the second material;

FIG. 7 illustrates an embodiment of a method of forming vias in a second dielectric layer for connecting wires in a first dielectric layer;

FIGS. 8a through 8h illustrate an embodiment of wire and hard mask formation in accordance with the method illustrated in FIG. 7, wherein FIG. 8a illustrates a first set of trenches formed in a first hard mask and a first dielectric layer; FIG. 8b illustrates a first set of wires formed in the first set of trenches; FIG. 8c illustrates recesses formed in the trenches above the wires; FIG. 8d illustrates a second hard mask deposited in the recesses above the first set of wires; FIG. 8e illustrates a second set of trenches formed in the first hard mask and the first dielectric layer; FIG. 8f illustrates a second set of wires formed in the second set of trenches; FIG. 8g illustrates a second set of recesses formed in the second set of trenches above the second set of wires; and FIG. 8h illustrates a third hard mask formed over the second set of wires in the second set of recesses;

FIGS. 9a through 9e illustrate an embodiment of via formation in accordance with the method illustrated in FIG. 7, wherein FIG. 9a illustrates a second dielectric layer deposited over the first, second, and third hard masks; FIG. 9b illustrates an opening formed in the second dielectric layer and an opening formed in the second hard mask; FIG.
9c illustrates a via formed in the openings formed in the second dielectric layer and the second hard mask; FIG. 9d illustrates an opening formed in the second dielectric layer and an opening formed in the third hard mask; and FIG. 9e illustrates a via formed in the openings in the second dielectric layer and the third hard mask.

Detailed Description

The present disclosure relates to methods of forming parallel wires of different metal materials in a dielectric layer by dual patterning and filling techniques, and to devices formed by such methods. The methods are suitable for devices exhibiting a node size of 50 nm or less (e.g., in the range of 5 nm to 50 nm, including 5 nm to 20 nm, 12 nm, 8 nm, etc.). However, the methods can also be applied to devices with larger node sizes. In particular, the present disclosure provides an interlayer dielectric comprising at least one dielectric layer having a surface. Wires of different materials are formed in the surface of the dielectric layer. When a plurality of materials are provided for the wires in the interlayer dielectric, the wire material properties can be selected based on factors such as the amount of power the wires are intended to carry and the desired speed at which signals can be transmitted through the wires. Thus, in providing a dielectric layer comprising wires formed of different materials as disclosed herein, the wire material can be selected based on the desired function of the wires. For example, power transport wires are formed from materials that exhibit relatively low electromigration, while signal transport wires are formed from materials that exhibit relatively low resistivity.

Again, electromigration is understood to be the transfer of material due to the movement of ions in a wire. Electromigration may cause hillocks or pores to form in an interconnect and may eventually result in reduced reliability or failure of the circuit. To reduce electromigration and other stress-induced failures, refractory metals continue to be explored for interconnect manufacturing. However, refractory metals exhibit increased bulk resistivity, which negatively affects the observed resistance and increases resistance-capacitance (RC) delay. RC delay is understood to be the delay in signal propagation that varies with 1) resistance (which depends in part on the resistivity of the metal line) and 2) insulator capacitance (which depends in part on the permittivity of the interlayer dielectric). Therefore, materials exhibiting relatively low electromigration may not be suitable for signal transport connections due to interconnect delays. And, vice versa, materials exhibiting relatively low bulk resistivity tend to be relatively susceptible to electromigration.

FIG. 1 illustrates an embodiment of a dielectric layer 100 having a surface 102 in which a plurality of trenches, including first trenches 104 and second trenches 106, are defined; the trenches may form, for example, a metallization layer. Wires are provided in the trenches. The first set of trenches 104 includes the wires of the first material 108, and the second set of trenches 106 includes the wires of the second material 110. Although wires formed of two materials are illustrated, wires of more than two materials, such as wires of three or four materials, may be formed.
Optionally, depending on the choice of materials for the wires and the dielectric layer 100, a diffusion barrier, an adhesion layer, or both (represented by 112) are deposited within the trenches 104, 106 prior to deposition of the wires 108, 110.

In a further embodiment (e.g., as illustrated in FIG. 2), an additional dielectric layer, such as a second dielectric layer 114, is deposited over the first dielectric layer 100. Vias 116, 118 are formed in the second dielectric layer. In an embodiment, each via is formed of a material that exhibits a bulk resistivity, electromigration characteristics, or both, similar to those of the material of the wire that the via contacts. In an example, the via is formed of the same material as the wire it contacts. In such an example, via 116 is formed of the same material as wire 108, and via 118 is formed of the same material as wire 110. A hard mask layer 120 (including one or more hard mask materials) is present between the first dielectric layer 100 and the second dielectric layer 114. Moreover, in an example, a diffusion barrier, an adhesion layer, or both (again represented by 112) are present on the walls of the via openings.

The one or more dielectric layers 100, 114 comprise a dielectric material. A dielectric material is understood to be an insulator, but one that is polarized once an electric field is applied. In an embodiment, the dielectric comprises a low-k dielectric, that is, a material with a dielectric constant below 3.9 (i.e., the dielectric constant of silicon dioxide), including all values and ranges from 1.5 to 3.8 (e.g., 1.7, 1.9, 2.1, 2.7, 2.8, etc.). Non-limiting examples from which the dielectric material can be selected include fluorine-doped silicon dioxide, carbon-doped oxide (i.e., carbon-doped silicon dioxide), organosilicate glass, silicon oxycarbide, hydrogenated silicon oxycarbide, porous silica, and organic polymer dielectrics such as polyimide, polytetrafluoroethylene, polynorbornene, benzocyclobutene, hydrogen silsesquioxane, and methyl siloxane. Each dielectric layer material is individually selected from the above. In an example, the dielectric layers are formed of the same material or of different materials. Further, in an embodiment, each dielectric layer has a thickness in the range of 50 nm to 300 nm, including all values and ranges within that range, such as 100 nm to 300 nm, 100 nm to 200 nm, and the like.

In an embodiment, the first wires and the second wires exhibit different bulk resistivities. In an embodiment, the first bulk resistivity is greater than the second bulk resistivity. For example, the first wires (i.e., the wires of the first material) exhibit a first bulk resistivity ρ1 of 5.0 μΩ·cm or more at 20 °C, including all values and ranges from 5.0 μΩ·cm to 8.0 μΩ·cm, for example, 5.5 μΩ·cm, 5.6 μΩ·cm, 6.0 μΩ·cm, and 7.1 μΩ·cm. The first wire material includes, for example, tungsten, cobalt, rhenium, molybdenum, or an alloy including one or more of these elements. In some examples, the alloy includes an alloy of one of the above metals with copper or aluminum. In a particular embodiment, the first wire does not include copper. The second wires (i.e., the wires of the second material) exhibit a second bulk resistivity ρ2 of 4.0 μΩ·cm or less at 20 °C, including all values and ranges from 1.0 μΩ·cm to 4.0 μΩ·cm, for example, 1.7 μΩ·cm, 2.7 μΩ·cm, and so on. The second wire material includes, for example, copper, aluminum, gold, silver, or an alloy including one or more of these elements.
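To make the trade-off concrete, wire resistance follows the usual relation R = ρL/(WH), and RC delay scales with that resistance. A minimal worked example, assuming illustrative dimensions (a 20 nm by 20 nm cross-section and 1 μm length) that are not specified in the disclosure:

\[
R = \frac{\rho L}{W H}, \qquad
R_1 = \frac{(5.6\ \mu\Omega\cdot\mathrm{cm})(1\ \mu\mathrm{m})}{(20\ \mathrm{nm})(20\ \mathrm{nm})} = 140\ \Omega, \qquad
R_2 = \frac{(1.7\ \mu\Omega\cdot\mathrm{cm})(1\ \mu\mathrm{m})}{(20\ \mathrm{nm})(20\ \mathrm{nm})} \approx 43\ \Omega .
\]

On these assumed dimensions, a refractory first-material wire has roughly three times the resistance of a second-material wire of identical geometry. Since signal delay scales approximately as τ ≈ RC, this is why the disclosure assigns signal transport to the low-resistivity wires and power transport to the electromigration-resistant wires.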
As understood by one of ordinary skill in the art, the actual resistance exhibited by each wire depends in part on the geometry of the wire.

Although the geometry of the wires is illustrated as being generally square or rectangular with relatively sharp corners, the geometry of the wires may be rounded, elliptical, or rounded with varying radii. Furthermore, referring again to FIG. 1, the height of the wires may be different for different materials, wherein the wires of the first material exhibit a different height than the wires of the second material. This geometric difference allows the cross-sectional area of a wire of the higher-resistivity material to be increased, providing an overall lower resistance for that wire. In one example, as illustrated, the wires 108 of the first material have a greater height than the wires 110 of the second material. However, in an embodiment, the wires of the second material may be taller than the wires of the first material.

FIG. 3 illustrates an embodiment of a method 300 of forming wires of a first material and a second material in a dielectric layer and, in a particular embodiment, of forming a metallization layer. The method includes forming a first set of trenches 302 in a surface of a dielectric material. In one embodiment, the trenches are formed using photolithography through a lithography-etch-lithography-etch process. In another embodiment, spacer-based pitch division is used to form the trenches. In other embodiments, both methods can be used to form trenches in the dielectric layer. After the first trenches are formed, a first material is used to form wires 304 within each trench. In an embodiment, the wires are formed using a vapor deposition process such as chemical vapor deposition or physical vapor deposition, including magnetron sputtering.

Subsequently, a second set of trenches 306 is formed in the surface of the dielectric layer. The second set of trenches is again formed in the dielectric layer using photolithography, spacer-based pitch division, or a combination thereof. After forming the second set of trenches, a second material is used to form wires 308 within each trench. The wires are formed using an electrodeposition process, a vapor deposition process, or a combination thereof (for example, in the case of copper, physical vapor deposition is used to form a seed layer, followed by electrodeposition).

Expanding on the above, in an embodiment, photolithography (and in particular, optical lithography and electron beam or extreme ultraviolet lithography) is used to form the first set of trenches. In lithography, a casting process (e.g., spin coating) is used to cast a resist material onto the surface of the dielectric layer. The resist material includes, for example, a photopolymer. Using a mask, light having a wavelength in the range of 157 nm to 436 nm (including all values and ranges therein, for example, 193 nm) is used to project a pattern onto the resist. The resist is developed and, as illustrated in FIG. 4a, portions of the resist 430 are removed based on the projected pattern to expose portions of the surface 402 of the dielectric layer 400. The exposed surface of the dielectric layer is then etched, trenches 404 are formed in the surface 402, and the remainder of the resist is removed, such as by ashing, as illustrated in FIG. 4b. Etching is understood to be the removal of material by a physical removal process or a chemical removal process.
Examples of physical removal processes include ion bombardment, and examples of chemical processes include redox reactions. Ashing is understood to be a process for removing a resist, such as plasma ashing using an oxygen or fluorine plasma.

As illustrated in FIG. 4c, the first material 405 is then deposited over the surface 402 of the dielectric layer 400 and into the first set of trenches 404. The first material 405 is deposited using a deposition process including chemical vapor deposition (including atomic layer deposition) or physical vapor deposition (e.g., magnetron sputtering). The capping layer of the first material 405, i.e., the amount of the first material present on or over the surface 402 of the dielectric layer 400, is then removed by chemical mechanical planarization or another planarization process, or by a chemical removal process such as oxidation. As illustrated in FIG. 4d, the capping layer is removed to expose the dielectric layer and to separate the deposited first material into wires 408.

Optionally, a diffusion barrier, an adhesion layer, or both (see 112 in FIG. 1) are deposited onto the surfaces of the trenches 404 prior to depositing the first wire material into the trenches. The diffusion barrier, the adhesion layer, or both are selected based on, for example, the choice of the wire material and the material from which the dielectric layer is formed. In an example, these layers are deposited using vapor deposition (chemical or physical) or an atomic layer deposition process.

After forming the first set of wires, a second set of wires is formed. Again using photolithography, a casting process (e.g., spin coating) is used to cast a resist material onto the dielectric material. The resist material includes, for example, a photopolymer. The resist may be the same as or different from the resist used to form the first set of trenches. Using a mask, light having a wavelength in the range of 157 nm to 436 nm (including all values and ranges therein, for example, 193 nm) is used to project a pattern onto the resist. In other embodiments, extreme ultraviolet radiation or x-rays are used for patterning. The resist is developed and, as illustrated in FIG. 4e, portions of the resist 432 are again removed based on the projected pattern to expose portions of the surface 402 of the dielectric layer 400. The exposed surface 402 of the dielectric layer 400 is then etched, a second set of trenches 406 for the second wire material is formed in the surface 402, and the remaining portion of the resist is removed, such as by an ashing process, as illustrated in FIG. 4f.

As illustrated in FIG. 4g, the second material 407 is then deposited over the surface 402 of the dielectric layer 400 and into the second trenches 406. The second material 407 is deposited using a deposition process including chemical vapor deposition or physical vapor deposition (e.g., magnetron sputtering). In a further embodiment, where the second material is copper, physical vapor deposition is used to deposit copper to form a seed layer in the trenches, and the remaining portion of the trenches is then filled with copper deposited by electroplating. The capping layer of the second material 407 is removed by chemical mechanical planarization. As illustrated in FIG.
4h, removing the capping layer provides the dielectric layer 400 including the one or more second trenches 406, each of which includes a wire 410 of the second material formed therein. The second set of wires 410 is thus formed in the dielectric layer 400 in addition to the first wires 408, wherein both the first set of wires and the second set of wires are formed in the same surface 402 of the dielectric layer 400.

As indicated above, in another embodiment, spacer-based pitch division is used to form the wires of different materials. A brief summary of spacer-based pitch division is provided herein with reference to FIG. 5 and illustrated in FIGS. 6a through 6k.

FIG. 5 is a flow diagram of an embodiment of a method based on spacer-based pitch division. The dielectric layer includes, for example, a dielectric barrier deposited over the dielectric layer, a sacrificial hard mask deposited over the dielectric barrier, and an anti-reflective coating optionally deposited over the sacrificial hard mask, as further described with respect to FIG. 6a. In an embodiment, the process begins by patterning a resist mold onto the dielectric layer 502. A first spacer layer is then deposited as a conformal layer over the patterned resist and the dielectric surface 504. The spacer layer is then anisotropically etched, leaving the spacer walls, and the resist is removed to form a first set of spacers 506.

A backbone for a second set of spacers is formed 508 by anisotropically etching into the sacrificial hard mask and removing the anti-reflective coating, forming the backbone for the second spacer layer in the sacrificial hard mask. A second spacer layer 510 is then deposited over the backbone formed in the sacrificial hard mask. The second spacer layer is then anisotropically etched 512. The dielectric barrier and the dielectric are etched to form trenches 514 in the dielectric layer. In an embodiment, the first wire material is subsequently deposited into the trenches formed in the dielectric layer, and the surface is polished to expose the backbone and form a first set of wires 516. The backbone is then removed and the dielectric layer is etched again to form a second set of trenches 518. A second wire material is then deposited in the second set of trenches, and the surface is polished to remove any capping layer, expose the first set of wires, and form a second set of wires 520.

Expanding on the above, beginning with FIG. 6a, a dielectric layer 600 is provided that includes a dielectric barrier 644 disposed on top of the dielectric layer 600 and a sacrificial hard mask 646 disposed over the dielectric barrier 644. Additionally, an optional anti-reflective coating 648 is disposed over the sacrificial hard mask 646. The hard mask and the anti-reflective coating are applied, for example, by spin coating. Alternatively, other deposition processes can be used.

A layer of resist is deposited by casting over the dielectric barrier 644, the sacrificial hard mask 646, and the optional anti-reflective coating. The resist is patterned by photolithography. In a specific embodiment, optical lithography is used, in which light having a wavelength in the range of 157 nm to 436 nm, including all values and ranges therein, such as 193 nm, is used to project a pattern onto the resist layer 642.
The resist 642 is developed and a portion of the resist is removed to expose a portion of the upper surface of the stack (defined by the anti-reflective coating 648 or the upper surface 647 of the hard mask 646, depending on which is present as the uppermost layer below the resist).

A first spacer material layer 650 is deposited over the surface of the patterned resist 642 and the anti-reflective coating surface 647, as illustrated in FIG. 6b. In an embodiment, the spacer material layer is a conformal coating, which is understood to be a coating that conforms to the exposed surfaces (including the sidewalls and upper surface of the resist and the exposed surface 647 of the anti-reflective coating) and exhibits a uniform thickness over all such surfaces, wherein the thickness remains effectively constant through subsequent processing steps. In an embodiment, the variation in coating thickness is +/- 20% of the average coating thickness. As illustrated in FIG. 6c, the spacer layer is then anisotropically etched to remove the portions of the spacer layer that are generally parallel to the upper surface of the dielectric layer 600. The remainder of the resist 642 is also removed, for example by ashing. This forms a first set of spacers 652 having openings 654 therebetween, as illustrated in FIG. 6d. The structure is again anisotropically etched to remove portions of the spacers 652 and the portions of the anti-reflective coating 648 and the sacrificial hard mask 646 that are generally parallel to the upper surface of the dielectric layer between the spacers. As illustrated in FIG. 6e, this forms, from the sacrificial hard mask, a series of backbones 656 on the surface 645 of the dielectric barrier 644.

A second layer of spacer material 658 is deposited over the backbones 656 and the upper surface of the dielectric, which is now defined by the upper surface 645 of the dielectric barrier 644, as illustrated in FIG. 6f. Again, in an embodiment, the second spacer material layer 658 is a conformal coating. In an example, the first spacer material layer and the second spacer material layer are formed of the same material or of different materials. The spacer layer 658 is then anisotropically etched to remove the portions of the spacer layer 658 that are generally parallel to the upper surface of the dielectric layer 600. This forms a second set of spacers 660, as illustrated in FIG. 6g, having the backbones 656 formed from the sacrificial hard mask between alternating spacers 660. The dielectric barrier 644 and the dielectric are anisotropically etched to form a first set of trenches 604, as illustrated in FIG. 6h.

As illustrated in FIG. 6i, a first wire material 605 is then deposited into the first set of trenches 604. The capping layer is removed, for example by chemical mechanical planarization, to expose the backbones 656 and form a first set of wires 608. The backbones 656 are also removed, for example by ashing. As illustrated in FIG. 6j, the dielectric barrier 644 and the dielectric layer 600 are subsequently etched to form a second set of trenches 606 in the dielectric barrier 644 and the dielectric layer 600. These trenches are then filled with a second wire material, and the capping layer is removed to form a second set of wires 610, as illustrated in FIG. 6k. Thus, a first set of wires 608 of the first material and a second set of wires 610 of the second material are formed in the same surface of the dielectric layer.
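A brief numeric aside on why this flow is called pitch division (the numbers are assumed for illustration, not taken from the disclosure): each lithographically printed backbone yields a spacer on both of its sidewalls, so the final feature pitch is roughly half the printed pitch,

\[
P_{\mathrm{spacer}} \approx \tfrac{1}{2}\, P_{\mathrm{litho}}, \qquad
P_{\mathrm{litho}} = 80\ \mathrm{nm} \;\Rightarrow\; P_{\mathrm{spacer}} \approx 40\ \mathrm{nm}.
\]

Filling the first-material and second-material trenches in this interleaved pattern is consistent with the alternating wire arrangement illustrated in FIG. 1.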
In a further embodiment, as discussed above with reference to FIG. 2, vias are formed in an additional dielectric layer provided over the dielectric layer in which the wires are provided. The vias provide electrical connectivity to the wires used for power or communication (or both). A via is understood to be a vertical electrical connection formed through a dielectric layer. An embodiment of a method of forming a via is further described with respect to FIG. 7. The method begins by depositing a hard mask 702 on a first dielectric layer via chemical vapor deposition or by a casting technique such as spin coating. The hard mask and the dielectric are then patterned and etched 704 using the patterning and etching processes described above to form a first set of trenches in the dielectric layer and the hard mask. A first set of wires 706 is then formed by depositing a first wire material into the first set of openings using the deposition processes described above. Any capping layer is planarized or otherwise removed. The wires are then recessed below the surface of the hard mask 708 by oxidative removal or another selective plasma or chemical etching process for the wire material. In a particular embodiment, the recessed wires are flush with the surface of the dielectric layer. A second hard mask is then deposited into the first wire recesses 710, forming discrete regions of hard mask over the exposed first wires (i.e., over the wire surfaces).

The second wires are formed by patterning and forming a second set of trenches 712 in the dielectric layer. A second wire material is then deposited into the trenches using the deposition processes described above to form a second set of wires 714 in the second set of trenches. Again, any capping layer is planarized or otherwise removed. The second set of wires is then recessed 716 below the surface of the hard mask by again etching the metal. In a particular embodiment, the second set of wires is flush with the surface of the dielectric layer after the recess. A third hard mask 718 is then deposited within the recesses of the second set of wires. Again, discrete regions of the third hard mask are formed over the exposed wire surfaces. This results in a hard mask layer including a first hard mask having regions of the second hard mask and the third hard mask defined therein.

A second dielectric layer 720 is then formed over the first dielectric layer and the hard mask layer. A via opening 722 is formed by patterning and etching an opening into the second dielectric layer and then selectively etching the second hard mask or the third hard mask (depending on which wire the via will connect to). A via material is then deposited into the via opening to form the via 724.

Expanding on the above, in one embodiment, as illustrated in FIG. 8a, the dielectric layer 800 and the first hard mask 870 deposited over the dielectric layer 800 are etched to form a first set of trenches 804 in the surface 802 of the dielectric layer and in the hard mask 870. In an example, the hard mask is formed using a casting process, a chemical vapor deposition process, or a physical vapor deposition technique.
Moreover, in an example, a resist is cast over the upper surface 872 of the hard mask and patterned using photolithography or spacer-based pitch division techniques, such as those described above. The dielectric layer and the first hard mask are then etched using the etching techniques previously described.

As illustrated in FIG. 8b, the first set of trenches 804 is filled with a first wire material to form wires 808 of the first material in the trenches 804. Again, in an example, the wires are formed using a physical or chemical vapor deposition process, including those described above. The first set of wires 808 is then recessed from the upper surface 872 of the hard mask. In an embodiment, the recessing of the wires is achieved using an etching technique such as oxidative removal of the metal. FIG. 8c illustrates recessing the wires 808 to form first recesses 874. The wires are recessed below the upper surface 872 of the first hard mask 870 by a distance DR1. In an embodiment, the distance DR1 is in the range of 1% to 20% of the total height HO1 of the opening 804, including all values and ranges therein, such as 5%, 10%, and the like. In a particular example, the upper surface 876 of the wire 808 is flush with the surface 802 of the dielectric layer 800. As illustrated in FIG. 8d, a second hard mask 878 is then deposited in the first set of recesses 874 and over the first wires 808. In an embodiment, the upper surface of the second hard mask regions 878 is flush with the first hard mask 870.

As illustrated in FIG. 8e, a second set of wires is then formed by forming a second set of trenches 806 in the first hard mask 870 and the dielectric layer 800 using the patterning and etching techniques described above. A second wire material is then deposited in the trenches, and any capping layer is removed to form the second wires 810 illustrated in FIG. 8f. The wires 810 are then recessed below the upper surface 872 of the first hard mask 870 to form a second set of recesses 880, as illustrated in FIG. 8g. Again, oxidation or other etching techniques are used. As can be appreciated, because the first wires are capped with the second hard mask, the first set of wires remains unaffected during the second wire recessing process. In an example, the wire recess distance DR2 is in the range of 1% to 20% of the total height HO2 of the opening 806, including all values and ranges therein, such as 5%, 10%, and the like. In a particular example, the upper surface 884 of the wire 810 is flush with the surface 802 of the dielectric layer 800. As illustrated in FIG. 8h, a third hard mask 882 is then deposited into the recesses using the techniques described above. In an example, the upper surface of the third hard mask 882 is flush with the first hard mask 870.

Turning to FIG. 9a, after forming the first dielectric layer, a second dielectric layer 914 is formed over the first dielectric layer 900 and the hard mask layer including the first hard mask 970, the second hard mask 978, and the third hard mask 982. The second dielectric layer is deposited over the first dielectric layer using a casting process or a vapor deposition process, including those described above.

To provide connectivity to the wires in the first dielectric layer, vias are formed in the second dielectric layer by forming two openings: one located in the second dielectric layer and one located in the hard mask covering the wire to be contacted.
As illustrated in FIG. 9b, a first via opening 991 extending through the second dielectric layer 914 is formed by patterning and etching as previously described. If the via is to connect, for example, to a wire formed of the first material 908, the exposed portion of the second hard mask 978 is selectively removed to form a second via opening 992. The via material is then deposited into the first opening 991 and the second opening 992 to form a via 916, as illustrated in FIG. 9c. In an embodiment, the first via material is the same material as the first wire 908, or exhibits a similar bulk resistivity, similar electromigration properties, or both. The via then contacts the first wire.

Similarly, if the via is to connect to a wire formed of the second material 910, a first portion of the via opening is formed in the second dielectric layer, and a second portion of the via opening is formed by removing the third hard mask 982 over the target wire. As illustrated in FIG. 9d, a first via opening 995 extending through the second dielectric layer 914 is formed by patterning and etching. A second via opening 996 is formed in the exposed portion of the third hard mask over the wire 910 to be connected. Once the via openings are formed, the via material is deposited into the first opening 995 and the second opening 996, forming a via 918, as illustrated in FIG. 9e. In an embodiment, the via material is the same material as the second wire 910, or exhibits a similar bulk resistivity, similar electromigration properties, or both. The via then contacts the second wire.

Because each of the hard masks exhibits a different etch selectivity than the other hard masks, individual removal of a hard mask can be achieved without affecting the other hard masks, i.e., without exposing the dielectric or the wires under the other hard masks. For example, when removing portions of the second hard mask over a given first wire, the first hard mask and the third hard mask remain intact, isolating the dielectric material and the wires of the second material that are close to the wires of the first material. When removing portions of the third hard mask over a given second wire, the first hard mask and the second hard mask remain intact, isolating the dielectric material and the wires of the first material that are close to the wires of the second material.

In an embodiment, the via opening has a width WO that is 1.5 times the pitch PW of the wires, the wires being separated by a wire spacing WS (counting both the first wires and the second wires). Pitch can be understood as the distance between similar features on adjacent wires, illustrated as center point to center point; however, it can also be measured from the left or right edge of each wire. This allows the overlap requirement to be relaxed when forming the first set of trenches and the second set of trenches. Furthermore, the via-to-metal short-circuit margin can be improved, which is understood to be the margin of distance provided between features to prevent short-circuiting. The overall performance and reliability of the interconnect are improved by the relaxed overlap and the improved via-to-metal short-circuit margin.
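As a quick illustration of the width-to-pitch relationship (the pitch value below is an assumed example, not taken from the disclosure):

\[
W_O = 1.5\, P_W, \qquad
P_W = 40\ \mathrm{nm} \;\Rightarrow\; W_O = 60\ \mathrm{nm}.
\]

The selectively etched second or third hard mask confines the bottom of the via to the target wire, which is what allows the upper opening to be printed wider than the wire itself without shorting to a neighboring wire.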
Although the above process for forming vias in the second dielectric layer to connect the wires in the first dielectric layer is discussed in the context of lithography-etch-lithography-etch patterning, a similar process can be performed when spacer-based pitch division, as described above, is used to form the interconnects.

In an embodiment, the dielectric layer includes one or more wires of the first material and the second material that are formed parallel to each other. Alternatively, one or more wires of the first material and the second material are optionally formed non-parallel to each other. In addition, the first and second wires alternate across the surface of the dielectric layer (as illustrated in FIG. 1); however, not all of the wires need to alternate across the surface in each embodiment.

In an embodiment, one or more dielectric layers are provided in an integrated circuit. The wires and vias (when present) are used to connect the various components associated with the integrated circuit. Components include, for example, transistors, diodes, power supplies, resistors, capacitors, inductors, sensors, transceivers, receivers, antennas, and the like. Components associated with an integrated circuit include those mounted on the integrated circuit and those connected to the integrated circuit. Depending on the components associated with the integrated circuit, the integrated circuit is analog or digital and can be used in a variety of applications, such as microprocessors, optoelectronic devices, logic blocks, audio amplifiers, and the like. The integrated circuit can be employed as part of a chipset for performing one or more related functions in a computer.

References to serial numbers (e.g., first and second) in this document are provided for convenience and clarity to facilitate the description. In addition, references to "top", "bottom", "side", and the like are provided for convenience and clarity to facilitate the description.

Accordingly, aspects of the disclosure relate to methods of depositing wires. The method includes forming a plurality of first trenches in a surface of a dielectric layer and forming a plurality of first wires, wherein each of the first wires is formed in a corresponding one of the first trenches. Further, the first wires are formed of a first material having a first bulk resistivity. The method also includes forming a plurality of second trenches in the surface of the dielectric layer and forming a plurality of second wires, wherein each of the second wires is formed in a corresponding one of the second trenches. The second wires are formed of a second material having a second bulk resistivity. In addition, the first bulk resistivity and the second bulk resistivity are different.

In an embodiment, the method further includes, prior to forming the plurality of first trenches in the dielectric layer, applying a first hard mask on the dielectric layer, wherein the plurality of first trenches is formed in the first hard mask and the dielectric layer. Moreover, in any of the above embodiments, a portion of the first material is removed from each of the first wires to form a first recess in each of the plurality of first trenches. Further, in the above embodiment, a second hard mask is applied to the first recesses before the plurality of second trenches is formed in the dielectric layer, wherein the second trenches further extend through the first hard mask.
Further, in the above embodiments, a portion of the second material is removed to form a second recess in each of the plurality of second trenches, and a third hard mask is applied to the second recesses.

In any of the above embodiments, the method further includes forming a second dielectric layer over the first dielectric layer, the first hard mask, the second hard mask, and the third hard mask. The method also includes forming a first via opening in the second dielectric layer and exposing a portion of the second hard mask. The method also includes removing the exposed portion of the second hard mask to form a second via opening. Further, the first via opening and the second via opening are filled with the first material to form a via. Alternatively or additionally, in any of the above embodiments, the method further includes forming a second dielectric layer over the first dielectric layer material, the first hard mask, the second hard mask, and the third hard mask. The method also includes forming a first via opening in the second dielectric layer and exposing a portion of the third hard mask, and selectively removing the exposed portion of the third hard mask to form a second via opening. The method additionally includes filling the first via opening and the second via opening with the second material to form a via.

In any of the above embodiments, a barrier layer is optionally deposited in the plurality of second trenches prior to forming the second wires. Moreover, in any of the above embodiments, the plurality of first trenches and the plurality of second trenches are formed by lithography-etch-lithography-etch patterning of the dielectric layer. Alternatively or additionally, in any of the above embodiments, the plurality of first trenches and the plurality of second trenches are formed by spacer-based pitch division.

In any of the above embodiments, the first wires are formed parallel to the second wires. Additionally, in any of the above embodiments, the first material is deposited in the plurality of first trenches by vapor deposition. Furthermore, in any of the above embodiments, the second material is deposited in the second trenches by vapor deposition. Also, in any of the above embodiments, the first bulk resistivity of the first material is 5.0 μΩ·cm or more at 20 °C, and preferably in the range of 5.0 to 8.0 μΩ·cm at 20 °C. Further, the second bulk resistivity of the second material is 4.0 μΩ·cm or less at 20 °C, and preferably in the range of 1.0 to 4.0 μΩ·cm at 20 °C. Additionally, in any of the above embodiments, the dielectric layer exhibits a dielectric constant less than 3.9, and preferably in the range of 1.5 to 3.8.

Another aspect of the present application relates to an integrated circuit. In an embodiment, the integrated circuit is formed using any of the methods described above. The integrated circuit includes a first dielectric layer, the first dielectric layer including a surface. A plurality of first trenches is defined in the surface of the dielectric layer. The integrated circuit also includes a plurality of first wires, wherein each of the first wires is formed in a corresponding one of the first trenches. The first wires include a first material having a first bulk resistivity. The integrated circuit also includes a plurality of second trenches defined in the surface of the dielectric layer. Further, the integrated circuit includes a plurality of second wires, wherein each of the second wires is formed in a corresponding one of the second trenches.
The second wires include a second material having a second bulk resistivity. The first bulk resistivity is different from the second bulk resistivity.

In an embodiment, the integrated circuit further includes a hard mask layer including a first hard mask disposed over the first dielectric layer, a second hard mask disposed over the first material, and a third hard mask disposed over the second material. In addition, in an embodiment, the integrated circuit further includes a second dielectric layer disposed over the hard mask layer, a first via opening in the second dielectric layer, and a second via opening in the second hard mask adjacent to the first via opening. A via is located in the first via opening and the second via opening and is formed of the first material, wherein the via contacts one of the first wires. Alternatively or in addition to the above, the integrated circuit further includes a second dielectric layer disposed over the hard mask layer, a first via opening in the second dielectric layer, and a second via opening in the third hard mask adjacent to the first via opening. A via is located in the first via opening and the second via opening and is formed of the second material, wherein the via contacts one of the second wires.

In any of the above embodiments, the plurality of first wires is parallel to the plurality of second wires. Moreover, in any of the above embodiments, the plurality of first wires and the plurality of second wires alternate across the surface of the first dielectric layer. Moreover, in any of the above embodiments, a barrier layer is deposited between each of the plurality of second trenches and each of the plurality of second wires. Also, in any of the above embodiments, the first wires exhibit a first height and the second wires exhibit a second height, and the first height is different from the second height.

Further, in any of the above embodiments, the first bulk resistivity is 5.0 μΩ·cm or more at 20 °C, and preferably in the range of 5.0 to 8.0 μΩ·cm at 20 °C, and the second bulk resistivity is 4.0 μΩ·cm or less at 20 °C, and preferably in the range of 1.0 to 4.0 μΩ·cm at 20 °C. Moreover, in any of the above embodiments, the first dielectric layer exhibits a dielectric constant less than 3.9, and preferably in the range of 1.5 to 3.8. Also, in any of the above embodiments, the second dielectric layer (when present) exhibits a dielectric constant less than 3.9, and preferably in the range of 1.5 to 3.8.

Yet another aspect of the present disclosure is directed to an integrated circuit including a dielectric layer, a first set of wires formed within the dielectric layer, and a second set of wires formed within the dielectric layer. The first set of wires includes a first electrically conductive material, and the second set of wires includes a second electrically conductive material that is different from the first electrically conductive material.
Additionally, the first set of wires alternates with the second set of wires such that each wire in the first set is adjacent only to wires in the second set, and each wire in the second set is adjacent only to wires in the first set.

In the above embodiment, the first conductive material has a lower electrical resistance than the second conductive material. Additionally, in any of the above embodiments, the second electrically conductive material exhibits lower electromigration than the first electrically conductive material. Moreover, in the above embodiments, the first set of wires includes copper. Additionally, in any of the above embodiments, the second set of wires comprises tungsten. Moreover, in any of the above embodiments, a hard mask is formed atop the first set of wires. Additionally, in any of the above embodiments, a hard mask is formed atop the second set of wires.

The foregoing description of several methods and embodiments has been provided for purposes of illustration. It is not intended to be exhaustive or to limit the invention to the precise steps and/or forms disclosed, and many modifications and variations are obviously possible in light of the above teaching. The scope of the invention is intended to be defined by the claims appended hereto.
In one embodiment, a method for specifying addressability in a memory-mapped device is disclosed. A data access primitive is used to model addressability for the memory-mapped device. Addressability comprises an address matching function, a lane matching function and one or more bus connections. A first starting address for the memory-mapped device is specified. A first set of address matching function, lane matching function and one or more bus connections for the memory-mapped device is generated using the data access primitive and the first starting address.
1. A method comprising: using a logic design component to specify addressability for a memory-mapped device, addressability comprising an address matching function configured to process an address range for a set of transactions at different data byte sizes, a lane matching function selecting an address in part of the address range, and one or more bus connections; specifying a first starting address for the memory-mapped device; and replacing the logic design component with logic components that implement a first set of address matching function, lane matching function and one or more bus connections for the memory-mapped device based upon the logic design component and the first starting address.
2. The method of claim 1, further comprising replacing the logic design component with logic components that implement a second set of address matching function, lane matching function and one or more bus connections for the memory-mapped device based upon the logic design component and a second starting address, wherein the lane matching function of the second set is responsive to an address output by the address matching function of the second set.
3. The method of claim 1, further comprising: coupling the logic design component to the memory-mapped device; and coupling an address bus to the logic design component.
4. The method of claim 3, wherein the address matching function compares an address from the address bus with the first starting address for the memory-mapped device.
5. The method of claim 4, wherein the first starting address is specified by a user.
6. The method of claim 4, wherein the first starting address is generated automatically.
7. The method of claim 6, wherein the first starting address is generated automatically using a set of address constraints.
8. The method of claim 1, wherein the logic design component is selected to allow addressability for a minimum size transaction supported by the memory-mapped device.
9. The method of claim 8, wherein the memory-mapped device is a register.
10. A computer readable medium containing executable instructions which, when executed in a processing system, cause the processing system to perform a method comprising: using a logic design component to specify addressability for a memory-mapped device, addressability comprising an address matching function configured to process an address range for a set of transactions at different data byte sizes, a lane matching function selecting an address in part of the address range, and one or more bus connections; specifying a first starting address for the memory-mapped device; and replacing the logic design component with logic components that implement a first set of address matching function, lane matching function and one or more bus connections for the memory-mapped device based upon the logic design component and the first starting address.
11. The computer readable medium of claim 10, further comprising replacing the logic design component with logic components that implement a second set of address matching function, lane matching function and one or more bus connections for the memory-mapped device based upon the logic design component and a second starting address.
12. The computer readable medium of claim 10, further comprising: coupling the logic design component to the memory-mapped device; and coupling an address bus to the logic design component.
13. The computer readable medium of claim 12, wherein the address matching function compares an address from the address bus with the first starting address for the memory-mapped device.
14. The computer readable medium of claim 13, wherein the first starting address is specified by a user.
15. The computer readable medium of claim 13, wherein the first starting address is generated automatically.
16. The computer readable medium of claim 15, wherein the first starting address is generated automatically using a set of address constraints.
17. The computer readable medium of claim 10, wherein the memory-mapped device is selected to allow addressability for a minimum size transaction supported by the memory-mapped device.
18. A method, comprising: selecting a logic design component to provide data access of a desired transaction size, and to indicate an address matching function configured to process an address range for a set of transactions at different data byte sizes, a lane matching function selecting an address in part of the address range, and one or more bus connections for a memory-mapped device; specifying an address constraint for the memory-mapped device; instantiating logic for the memory-mapped device, comprising: generating a starting address for the memory-mapped device using the address constraint; using the selected logic design component and the starting address to map the logic for the memory-mapped device capable of being accessed at the desired transaction size, comprising: generating first logic components that implement the address matching function, and generating second logic components that implement the lane matching function and the one or more bus connections.
19. The method of claim 18, wherein the address constraint is specified by a user, and wherein the starting address for the memory-mapped device is generated automatically.
20. The method of claim 18, wherein the transaction size is one in a group comprising a byte, a halfword and a word.
21. The method of claim 18, further comprising using a new starting address for the memory-mapped device without having to specify changes to the address matching function, the lane matching function and the one or more bus connections.
22. The method of claim 21, wherein different logic for the memory-mapped device is instantiated automatically using the same logic design component and the new starting address.
23. The method of claim 18, wherein the address matching function compares an address from an address bus coupled with the logic design component with the starting address, and wherein when there is a match, the lane matching function matches the transaction size of a transaction to a respective section of the memory-mapped device.
24. A computer readable medium containing executable instructions which, when executed in a processing system, cause the processing system to perform a method, comprising: selecting a logic design component to provide data access of a desired transaction size, and to indicate an address matching function configured to process an address range for a set of transactions at different data byte sizes, a lane matching function selecting an address in part of the address range, and one or more bus connections for a memory-mapped device; specifying an address constraint for the memory-mapped device; instantiating logic for the memory-mapped device, comprising: generating a starting address for the memory-mapped device using the address constraint; using the selected logic design component and the starting address to map the logic for the memory-mapped device capable of being accessed at the desired transaction size, comprising: generating first logic components that implement the address matching function, and generating second logic components, coupled to the first logic components, that implement the lane matching function and the one or more bus connections.
25. The computer readable medium of claim 24, wherein the address constraint is specified by a user, and wherein the starting address for the memory-mapped device is generated automatically.
26. The computer readable medium of claim 24, wherein the transaction size is one in a group comprising a byte, a halfword and a word.
27. The computer readable medium of claim 24, further comprising using a new starting address for the memory-mapped device without having to specify changes to the address matching function, the lane matching function and the one or more bus connections.
28. The computer readable medium of claim 27, wherein different logic for the memory-mapped device is instantiated automatically using the same logic design component and the new starting address.
29. The computer readable medium of claim 24, wherein the address matching function compares an address from an address bus coupled with the logic design component with the starting address, and wherein when there is a match, the lane matching function matches the transaction size of a transaction to a respective section of the memory-mapped device.
FIELD OF THE INVENTION

The present invention relates generally to the field of logic design. More specifically, the present invention is directed to a method and an apparatus for specifying addressability and bus connections.

BACKGROUND

Logic designers use a hardware description language (HDL) or schematic capture to model a circuit at different levels of abstraction. The circuit model is synthesized to construct a gate-level netlist. System designs that include memory-mapped devices require the logic designer to fully specify the addressability and bus connections of the memory-mapped device in the logic design. Great caution is exercised when specifying the addressability and bus connections for memory-mapped devices interacting with multi-byte system buses. Traditional electronic design automation tool flows require the addressability and data connections of a device to a system bus to be explicitly specified. This includes address matching, lane matching, connections to system bus data bits and any other auxiliary logic.

In the case of an 8-bit system bus, where only byte-wide transactions are supported, the specification of addressability and bus connections is fairly straightforward. The lane-matching function is unnecessary because all transactions are byte-wide. All devices connect to the same bits (bits 7:0) of the data bus. Although specifying the addressability and bus connections may be tedious when designing for an 8-bit system bus, there is little danger of accidentally specifying inconsistent addressability and bus connections.

In the case of a 32-bit system bus, where byte-wide, halfword-wide, and word-wide transactions are supported, the connections between the device and the system bus are much more complex. The interdependencies between the address-matching function, the lane-matching function, and the connections to the data bus make it much more likely that the logic designer will accidentally specify inconsistent addressability and bus connection information. Because the bus connections and lane-matching function must be consistent with the address-matching function, it is not possible to change the address of a memory-mapped device without invalidating the lane-matching function and bus connectivity.

SUMMARY OF THE INVENTION

In one embodiment, a method for specifying addressability in a memory-mapped device is disclosed. A data access primitive is used to model addressability for the memory-mapped device. Addressability comprises an address matching function, a lane matching function and one or more bus connections. A first starting address for the memory-mapped device is specified. A first set of address matching function, lane matching function and one or more bus connections for the memory-mapped device is generated using the data access primitive and the first starting address. Other features and advantages of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example in the following drawings in which like references indicate similar elements. The following drawings disclose various embodiments of the present invention for purposes of illustration only and are not intended to limit the scope of the invention.

FIG. 1 is an exemplary logic diagram showing explicit addressability and bus connections.

FIG. 2 is an exemplary diagram of a halfword selector data access primitive.
FIG. 3 is an exemplary diagram of another halfword selector data access primitive.

FIG. 4 is an exemplary diagram of a byte selector data access primitive.

FIG. 5 is an exemplary logic diagram using a data access primitive.

FIG. 6 illustrates an embodiment of a computer system that can be used with the present invention.

FIG. 7 illustrates an embodiment of a computer-readable medium.

DETAILED DESCRIPTION

In one embodiment, a method for specifying addressability for a memory-mapped device is disclosed. An intended advantage of this method is to simplify the task of the logic designer when specifying the addressability and bus connections of a memory-mapped logic device.

FIG. 1 is an exemplary logic diagram showing explicit address-matching functions, lane-matching functions and data bus connections. FIG. 1 shows a memory-mapped 2-byte device (e.g., register) with individually addressable bytes 120, 125. Other signals not shown in FIG. 1 may include, for example, addresses, data, clock, wait, read, write, etc. Address-matching function 105 is typically specified as a set of logic gates (in schematic capture) or a logic equation (in hardware description language ("HDL")) that is synthesized to a set of gates. The address-matching function 105 determines the addresses the memory-mapped device is mapped to. The address-matching function 105 also performs an address-decoding function. At design time, a constant address is specified. At run time, during operation of the logic, the address-decoding function compares the constant against the addresses seen on the bus to see if there is a match.

The lane-matching function, which includes Lane Match 0 110 and Lane Match 1 115, is also specified as a set of logic gates (in schematic capture) or a logic equation (in HDL) that is synthesized to a set of gates. The lane-matching function suppresses the address-matching function for certain bus transaction sizes and alignments. For example, for the 2-byte register with the individually addressable bytes shown in FIG. 1, there may be a single address-matching function 105 with a distinct lane-matching function for each byte. The Lane Match 0 lane-matching function 110 would match all transactions (e.g., read, write) that include the first byte. The Lane Match 1 lane-matching function 115 would match all transactions that include the second byte. The logic diagram of FIG. 1 would require the logic designer to explicitly specify connections to system bus data bits. The first byte 120 of the 2-byte register would connect to a set of eight system bus data bits (0 to 7) 130. Similarly, the second byte 125 of the 2-byte register would connect to another set of eight system bus data bits (8 to 15) 135.

For example, in a 2-byte transaction to the register at address 0x00000004, when there is a match, the Lane Match 0 lane-matching function 110 would match the first byte address 0x00000004. The Lane Match 1 lane-matching function 115 would match the second byte address 0x00000005. The address 0x00000004 refers to the first byte 120 of the register, and the address 0x00000005 refers to the second byte 125 of the register. When there is a write transaction, the first data byte 130 is written into the first byte of the register 120 and the second data byte 135 is written into the second byte of the register 125. When there is a read transaction, the first data byte 140 and the second data byte 145 from the register are provided to the respective bits of the data bus.
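To make the interplay concrete, the following C fragment is a minimal software model of this decode logic. It is not taken from the patent; REG_BASE, address_match, and lane_match are invented names, and a bus transaction is reduced here to a starting address and a size in bytes.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical model of the FIG. 1 decode logic for a 2-byte register
     * whose constant starting address is 0x00000004. */
    #define REG_BASE 0x00000004u
    #define REG_SIZE 2u

    /* Address-matching function: the transaction passes if it touches any
     * byte of the register. */
    static bool address_match(uint32_t addr, uint32_t size)
    {
        return addr < REG_BASE + REG_SIZE && addr + size > REG_BASE;
    }

    /* Lane-matching function for one byte of the register: suppresses the
     * address match for transactions that do not include that byte. */
    static bool lane_match(uint32_t addr, uint32_t size, unsigned byte_index)
    {
        uint32_t byte_addr = REG_BASE + byte_index;
        return address_match(addr, size) &&
               addr <= byte_addr && byte_addr < addr + size;
    }

Under this model, a byte-wide write at 0x00000005 matches only the second lane, so only data bus bits 8 to 15 are latched into the second byte 125 of the register, consistent with the behavior described above.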
The address 0x00000004 in this example is a constant specified at design time. When there is a need to make any changes to the design of the memory-mapped device, such as, for example, the address constant, the logic designer has to change the lane matching and the related connections. For example, for a 32-bit data bus, when the logic designer wants to change the address from 0x00000004 to 0x00000006, the lane-matching function must match the first byte to Lane Match 2 (not shown) and the second byte to Lane Match 3 (not shown). The corresponding data bytes would be bits 16-23 for the first data byte and bits 24-31 for the second data byte (i.e., the second half of the word). This would require changing and recompiling the HDL source and the schematic.

In one embodiment, the method of the present invention allows the logic designer to specify addresses for an addressable entity without having to be involved in the detailed requirements of address matching and lane matching. The logic designer uses a set of logic design components, referred to herein as "data access primitives", to specify an assembly of address- and lane-matching logic and associated data bus connections. The data access primitive hides the details of interconnection to the bus, and abstracts away the interdependency of address-matching functions, lane-matching functions, and data bus connections.

FIG. 2 is an exemplary diagram of a halfword selector data access primitive. Each data access primitive implies an address-matching function, one or more lane-matching functions, and bus connections for one or more bytes of data, as well as auxiliary logic. The data access primitive in FIG. 2 is referred to herein as the "HALFSEL" data access primitive. The HALFSEL data access primitive is a fully addressable data access primitive because it can be used to connect a byte-, halfword-, or word-addressable 2-byte entity to the data bus.

The write-select (WRSEL) port 205 has two lines, one for each byte of the halfword. During a halfword or word write transaction, both lines of the WRSEL port 205 go high when there is an address match. During a byte write transaction, at most one of the lines of the WRSEL port 205 goes high when there is an address match. Similarly, the read-select (RDSEL) port 208 has two lines and goes active during a read transaction when the addresses and lanes match. The HALFSEL data access primitive includes a data write port (DW) 210 and a data read port (DR) 215. The data read port provides data from the device to the bus. The data write port receives data from the bus. For read-only data, the WRSEL port 205 and the DW port 210 are not connected. For write-only data, the DR port 215 is tied low.

The physical port 220 has the address constant indicating the starting address of the memory-mapped device. The autowait (AWAIT) port 225 is a constant flag. When the AWAIT flag is high, one wait state is automatically generated to indicate to a device reading this address that it is going to take an additional bus clock cycle to get the data out of the memory-mapped device. When the flag is low, there is no wait state. In another embodiment, the AWAIT port 225 can be configured to be multiple bits wide to enable encoding of additional wait states. The BUSCLK port receives clock signals from a bus coupled to the halfword selector data access primitive. The SIM port is a bi-directional port connected to the system bus functional model during simulation, allowing the socket primitives to be simulated in the context of various bus transactions. The SYMBOLIC port receives a symbolic address that is part of an address space. One skilled in the art will recognize that wait states beyond those asserted by the data access primitive may be asserted by other logic in the design, depending on the needs of the design.
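As a rough illustration only, a tool might represent one HALFSEL instance with a record like the following C structure. This is an assumption about internal bookkeeping, not anything prescribed by the primitive itself; all field names are invented.

    #include <stdint.h>

    /* Hypothetical record for one HALFSEL data access primitive instance,
     * mirroring the ports described above. */
    struct halfsel {
        uint32_t physical;  /* PHYSICAL: constant starting address          */
        uint8_t  await;     /* AWAIT: nonzero = one automatic wait state    */
        uint8_t  wrsel;     /* WRSEL: two write-select lines in bits 0-1    */
        uint8_t  rdsel;     /* RDSEL: two read-select lines in bits 0-1     */
        uint16_t dw;        /* DW: halfword of write data from the bus      */
        uint16_t dr;        /* DR: halfword of read data to the bus         */
        uint8_t  busclk;    /* BUSCLK: bus clock input, modeled as a level  */
        /* SIM and SYMBOLIC are omitted; they matter only in simulation. */
    };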
FIG. 3 is an exemplary diagram of another halfword selector data access primitive. The data access primitive in FIG. 3 is referred to herein as the "HALFSELH" data access primitive. The ports in the HALFSELH data access primitive have the same functions as the HALFSEL data access primitive ports described above with respect to FIG. 2. The HALFSELH data access primitive is a restricted data access primitive because it can be used to connect only a halfword- or word-addressable halfword entity to the data bus. The HALFSELH cannot be used to address a single byte of the halfword entity, which distinguishes it from the fully addressable HALFSEL data access primitive. The write-select (WRSEL) port 305 has one line which is shared by both bytes of the halfword. During a halfword or word write transaction, the line goes high when there is an address match. The read-select (RDSEL) port 308 has one line and goes high during a read transaction when the address matches. The HALFSELH data access primitive includes a data write port (DW) 310 and a data read port (DR) 315.

FIG. 4 is an exemplary diagram of a byte selector data access primitive. The data access primitive in FIG. 4 is referred to herein as the "BYTESEL" data access primitive. The ports in the BYTESEL data access primitive have the same functions as the HALFSEL data access primitive ports described above with respect to FIG. 2. The BYTESEL data access primitive is a fully addressable data access primitive because it can be used to connect a one-byte entity to the data bus. The BYTESEL data access primitive is very similar to the HALFSEL data access primitive, except that the write-select (WRSEL) port 405 and the read-select (RDSEL) port 408 each have one line (bit) instead of two. The BYTESEL data access primitive also differs from the HALFSEL data access primitive in that it matches only a single byte rather than two bytes.

The data access primitives do not instantiate the logic (e.g., registers or RAMs) that stores the data being accessed. The data access primitives provide the addressability and data bus connections for the logic. Using a data access primitive such as, for example, the HALFSEL or the HALFSELH, the logic designer does not have to deal with the complications of lane matching or addressability at design time or at subsequent changes. One skilled in the art will recognize that other data access primitives, such as, for example, WORDSEL and WORDSELW, can also be implemented using the approach described above.

Although a data access primitive may require the logic designer to specify an explicit starting address for the physical port, the logic designer may leave the starting address of a data access primitive unspecified. The logic designer may choose to allow the starting address to be automatically assigned by an address allocator.
The logic designer may choose to specify or restrict the starting addresses to be assigned to data access primitives using one or more address constraints. For example, the address constraints may be a block of addresses to be excluded or a specific starting address. In one embodiment, the address allocator is a software program that ensures that all of the data access primitives have fully specified addressability information. The address allocator reconciles the addressability specified in the logic design with the address constraints specified by the logic designer.

In one embodiment, a mapper program, referred to herein as a data access technology mapper, converts the data access primitives into the low-level logic components necessary to implement the address-matching function, lane-matching function, bus connections, and auxiliary logic described above in FIG. 1. The data access technology mapper replaces the data access primitives with low-level logic components whose type and interconnection depend on both the type of the data access primitive and the starting address. The starting address may be either allocated by the address allocator or specified by the user. The data access technology mapper uses the starting address allocated by the address allocator to decide how lane matching should be performed among the components and which data should be read from the register. The exact mapping from data access primitives to low-level logic components depends on the implementation technology targeted by the data access technology mapper. For example, in the case of a Configurable System-on-Chip (CSoC), the data access technology mapper converts each data access primitive into one or more address selectors and multiple socket primitives for connecting to the system bus signals.

In one embodiment, the data access technology mapper combines the addressability implied by the data access primitives and the output of the address allocator program. By incorporating a data access primitive into a design, the logic designer can specify a complex assembly of address- and lane-matching logic and associated data bus connections easily and without risk of specifying inconsistent information. At a later time, the logic designer can change the address for the data access primitive just by changing the address constraints. The logic designer does not have to change the logic design.

The data access primitives in the present invention are not implemented as traditional logic macros. Although the data access primitive simplifies and abstracts the specification of the logic design, the data access primitive is not a fixed composition of lower-level logic components. The data access technology mapper decides how to decompose the data access primitive, and that decomposition depends directly on the address assigned to the data access primitive. For example, depending on the address specified by the logic designer, the HALFSEL data access primitive may be converted by the data access technology mapper into different implementations.
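The dependence of the decomposition on the assigned address can be sketched in a few lines of C. The fragment below is a simplified assumption about one step such a mapper might perform for a HALFSEL primitive on a 32-bit bus; map_halfsel_lanes is a hypothetical name, and alignment checking is omitted.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical lane assignment for a HALFSEL primitive on a 32-bit bus:
     * the two bytes of the halfword land in the byte lanes selected by the
     * low two bits of the allocated starting address. */
    static void map_halfsel_lanes(uint32_t start_addr)
    {
        unsigned lane0 = start_addr & 3u;        /* lane of the first byte  */
        unsigned lane1 = (start_addr + 1) & 3u;  /* lane of the second byte */

        printf("byte 0: lane match %u, data bus bits %u-%u\n",
               lane0, lane0 * 8, lane0 * 8 + 7);
        printf("byte 1: lane match %u, data bus bits %u-%u\n",
               lane1, lane1 * 8, lane1 * 8 + 7);
    }

For example, map_halfsel_lanes(0x00000004) selects lanes 0 and 1 (data bus bits 0-7 and 8-15), while map_halfsel_lanes(0x00000006) selects lanes 2 and 3 (bits 16-23 and 24-31), consistent with the earlier example of moving a register from 0x00000004 to 0x00000006.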
FIG. 5 is an exemplary logic diagram showing a HALFSEL data access primitive in logic equivalent to that of FIG. 1. The HALFSEL data access primitive 505 includes the data read (DR) port, the data write (DW) port and the data write select (DWSEL) port. The DWSEL port has two lines. Although not shown, the HALFSEL 505 also includes the ports shown in FIG. 2. The HALFSEL 505 is connected to a two-byte register with individually addressable bytes 510 and 520. Each of the two write-select lines of the HALFSEL data access primitive 505 is connected to the write-enable input of one of the 8-bit registers 510 and 520. Each of the two bytes of the DW port of the HALFSEL data access primitive 505 is connected to the data input of one of the 8-bit registers 510 and 520. The data outputs of the registers 510 and 520 are combined to form the two bytes of data input to the HALFSEL data access primitive 505 and are connected to the DR port.

Assume that the address allocator program assigns the address 0x00000004 to the HALFSEL data access primitive 505. The data access technology mapper program converts the HALFSEL data access primitive 505 into a single address-matching function and two lane-matching functions. The address-matching function matches either address 0x00000004 or 0x00000005 (the two bytes in the halfword). The first lane-matching function matches only transactions that include the byte at 0x00000004. The second lane-matching function matches only transactions that include the byte at 0x00000005. The data access technology mapper program also produces sixteen connections to the data-write (DW) port and sixteen connections to the data-read (DR) port. It also produces other auxiliary logic and connections. For example, in a 32-bit system bus, there are four transactions that pass the address-matching function (i.e., match):

1. A word-wide transaction at 0x00000004
2. A halfword-wide transaction at 0x00000004
3. A byte-wide transaction at 0x00000004
4. A byte-wide transaction at 0x00000005

Transactions 1, 2, and 3 match the first lane-matching function (transactions that contain 0x00000004). Transactions 1, 2, and 4 match the second lane-matching function (transactions that contain 0x00000005).
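This matching table can be verified mechanically. The following self-contained C program is an illustrative check, not part of the design flow itself; it reduces lane matching to the question of whether a transaction includes a given byte address.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Does a transaction of 'size' bytes at 'addr' include byte 'byte_addr'? */
    static bool touches(uint32_t addr, uint32_t size, uint32_t byte_addr)
    {
        return addr <= byte_addr && byte_addr < addr + size;
    }

    int main(void)
    {
        /* The four transactions that pass the address-matching function. */
        struct { uint32_t addr, size; } tx[] = {
            { 0x00000004u, 4u },  /* 1: word-wide     */
            { 0x00000004u, 2u },  /* 2: halfword-wide */
            { 0x00000004u, 1u },  /* 3: byte-wide     */
            { 0x00000005u, 1u },  /* 4: byte-wide     */
        };
        for (unsigned i = 0; i < 4; i++)
            printf("transaction %u: lane0=%d lane1=%d\n", i + 1,
                   (int)touches(tx[i].addr, tx[i].size, 0x00000004u),
                   (int)touches(tx[i].addr, tx[i].size, 0x00000005u));
        return 0;
    }

It prints lane0=1 for transactions 1, 2, and 3 and lane1=1 for transactions 1, 2, and 4, reproducing the two lists above.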
FIG. 6 illustrates an embodiment of a computer system that can be used with the present invention. The various components shown in FIG. 6 are provided by way of example. Certain components of the computer in FIG. 6 can be deleted from the addressing system for a particular implementation of the invention. The computer shown in FIG. 6 may be any type of computer, including a general-purpose computer.

FIG. 6 illustrates a system bus 600 to which various components are coupled. A processor 602 performs the processing tasks required by the computer. Processor 602 may be any type of processing device capable of implementing the method discussed above. An input/output (I/O) device 604 is coupled to bus 600 and provides a mechanism for communicating with other devices coupled to the computer. A read-only memory (ROM) 606 and a random access memory (RAM) 608 are coupled to bus 600 and provide a storage mechanism for various data and information used by the computer. Although ROM 606 and RAM 608 are shown coupled to bus 600, in alternate embodiments, ROM 606 and RAM 608 are coupled directly to processor 602 or coupled to a dedicated memory bus (not shown).

A video display 610 is coupled to bus 600 and displays various information and data to the user of the computer. A disk drive 612 is coupled to bus 600 and provides for the long-term mass storage of information. Disk drive 612 may be used to store various software programs including the data access technology mapper program and the address allocator program. Disk drive 612 may also store the data access primitives and the source HDL programs used by the logic designer to model the circuit. Disk drive 612 may also store a synthesis program. A keyboard 614 and pointing device 616 are also coupled to bus 600 and provide mechanisms for entering information and commands to the computer. A printer 618 is coupled to bus 600 and is capable of creating a hard copy of information generated by or used by the computer.

FIG. 7 illustrates an embodiment of a computer-readable medium 700 containing various sets of instructions, code sequences, configuration information, and other data used by a computer or other processing device. The various information stored on medium 700 is used to perform various data processing operations. Computer-readable medium 700 is also referred to as a processor-readable medium. Computer-readable medium 700 can be any type of magnetic, optical, or electrical storage medium including a diskette, magnetic tape, CD-ROM, memory device, or other storage medium.

Computer-readable medium 700 includes interface code 705 that controls the flow of information between various devices or components in the computer system. Interface code 705 may control the transfer of information within a device (e.g., between the processor and a memory device), or between an input/output port and a storage device. Additionally, interface code 705 may control the transfer of information from one device to another. Computer-readable medium 700 may also include the data access technology mapper program 710, the address allocator program 715, and the data access primitives 720.

Thus, using the method disclosed, the logic designer can leave the address of a memory-mapped device unspecified, allowing the address allocator and data access technology mapper to decide the details of address matching, lane matching, and bus connectivity. The logic designer can change the addresses assigned to memory-mapped logic devices without changing the logic design.

From the above description and drawings, it will be understood by those of ordinary skill in the art that the particular embodiments shown and described are for purposes of illustration only and are not intended to limit the scope of the invention. Those of ordinary skill in the art will recognize that the invention may be embodied in other specific forms without departing from its spirit or essential characteristics. References to details of particular embodiments are not intended to limit the scope of the claims.
A method and system for debugging an executing service on a pipelined CPU architecture are described. In one embodiment, a breakpoint within an executing service is set and a minimum state of the executing service is saved. In addition, a program counter of the executing service is altered. The program counter is restored and the state of the executing service is restored.
CLAIMS

What is claimed is:

1. A method of debugging an executing service on a pipelined CPU architecture, the method comprising: setting a breakpoint within an executing service; saving a minimum state of the executing service; altering a program counter of the executing service; restoring the program counter of the executing service; and restoring the state of the executing service.

2. The method of claim 1 further comprising: executing debug commands within the executing service.

3. The method of claim 1 wherein setting the breakpoint further comprises: locating an original instruction within the executing service to set the breakpoint; inserting a breakpoint instruction at the breakpoint; starting the executing service; waiting for the breakpoint to execute; waiting for memory fetches and configuration loads to complete; and restoring the original instruction at the breakpoint location.

4. The method of claim 1 wherein setting the breakpoint comprises: altering an instruction within the executing service at a breakpoint location; and invalidating a page cache of the executing service.

5. The method of claim 1 wherein setting the breakpoint comprises: setting a breakpoint register to point to a breakpoint location.

6. The method of claim 1 wherein saving a minimum state comprises: saving the executing service registers; and flushing a pipeline of the executing service.

7. The method of claim 6 wherein flushing the pipeline further comprises: determining if registers are unstable; if registers are unstable, saving the value of any registers that change after each pipeline cycle; and if the breakpoint location is set on a location that uses old values of registers, saving the old values of the registers before new values are written to the registers.

8. The method of claim 7 wherein registers are scalar registers or predicate registers.

9. The method of claim 1 wherein altering the program counter further comprises: setting the program counter of the executing service to point to a save stub; starting execution of the executing service; executing the breakpoint; storing configuration registers of the executing service; saving values of scalar and predicate registers; saving pipeline registers; and storing a stack pointer value for a breakpoint location.

10. The method of claim 1 wherein restoring the program counter further comprises: setting the program counter of the executing service to point to a restore stub; and starting the executing service at the breakpoint.

11. The method of claim 1 wherein restoring the state further comprises: if a breakpoint location is on an instruction that does not make use of old values, restoring stable registers; if the breakpoint location is on an instruction that does make use of old values, restoring unstable registers, and reloading the pipeline; altering the program counter of the executing service to point to the breakpoint location; and starting execution of the executing service at the breakpoint location.

12. The method of claim 1 further comprising: fetching a page of memory of the executing service into an instruction cache; checking for a checksum error within the page of memory; and if the executing service is set to reject the checksum error, saving the page of memory, inserting a breakpoint into the saved page of memory, altering an instruction pointer to the saved page of memory, and processing the saved page of memory.
13. A method of debugging an executing service on a pipelined CPU architecture, the method comprising: setting a breakpoint at a last safe point; saving a minimum state of the executing service; simulating instructions of the executing service from the last safe point to the breakpoint; executing debug commands within the executing service; and restoring the state of the executing service.

14. The method of claim 13 wherein restoring further comprises: storing the simulated state of the executing service to the CPU; and restoring an original execution.

15. The method of claim 13 wherein simulating further comprises single stepping through a set of unsafe instructions, the set of unsafe instructions being between the last safe point and a next safe point.

16. A method of debugging an executing service on a pipelined CPU architecture without hardware interlocks, the method comprising: fetching a page of memory of the executing service into an instruction cache; checking for a checksum error within the page of memory; and if the executing service is set to reject the checksum error, saving the page of memory, inserting a breakpoint into the saved page of memory, altering an instruction pointer to the saved page of memory, and processing the saved page of memory.

17. The method of claim 16 wherein processing further comprises: setting a breakpoint within an executing service; saving a minimum state of the executing service; altering a program counter of the executing service; executing debug commands within the executing service; restoring the program counter of the executing service; and restoring the state of the executing service.

18. A system for debugging an executing service on a pipelined CPU architecture without hardware interlocks, the system comprising: a debugger to set a breakpoint within an executing service and execute debug commands within the executing service; a save stub to save a minimum state of the executing service and alter a program counter of the executing service; a processing engine to execute the breakpoint; and a restore stub to restore the state of the executing service.

19. The system of claim 18 wherein the debugger is further operable to locate an original instruction within the executing service to set the breakpoint, insert a breakpoint instruction at the breakpoint, start the executing service, wait for the breakpoint to execute, wait for memory fetches and configuration loads to complete, and restore the original instruction at the breakpoint location.

20. The system of claim 18 wherein the debugger is further operable to alter an instruction within the executing service at a breakpoint location, and invalidate a page cache of the executing service.

21. The system of claim 18 wherein the debugger is further operable to set a breakpoint register to point to a breakpoint location.

22. The system of claim 18 wherein the save stub is further operable to save the executing service registers.

23. The system of claim 18 wherein the processing engine is further operable to flush a pipeline of a set of pipeline instructions of the executing service.

24. The system of claim 22 wherein the debugger is further operable to determine if registers are unstable, save the value of any registers that change after each pipeline cycle if registers are unstable, and, if the breakpoint location is set on a location that uses old values of registers, save the old values of the registers before new values are written to the registers.
25. The system of claim 24 wherein the registers are scalar registers or predicate registers.

26. The system of claim 18 wherein the debugger is further operable to set the program counter of the executing service to point to a save stub, start execution of the executing service, execute the breakpoint, store configuration registers of the executing service, save values of the scalar and predicate registers, and save pipeline registers.

27. The system of claim 18 wherein the debugger is further operable to set the program counter of the executing service to point to a restore stub, and start the executing service at the breakpoint.

28. The system of claim 18 wherein the restore stub is further operable to: if a breakpoint location is on an instruction that does not make use of old values, restore stable registers; if the breakpoint location is on an instruction that does make use of old values, restore unstable registers, and reload the pipeline; alter the program counter of the executing service to point to the breakpoint location; and start execution of the executing service at the breakpoint location.

29. The system of claim 28 wherein the restore stub is further operable to reload the pipeline state directly.

30. The system of claim 28 wherein the restore stub is further operable to re-execute the original instructions within the pipeline to recreate the pipeline at the time of the breakpoint.

31. The system of claim 18 wherein the processing engine is further operable to: fetch a page of memory of the executing service into an instruction cache; and check for a checksum error within the page of memory.

32. The system of claim 18 wherein the debugger is further operable to: if the executing service is set to reject the checksum error, save the page of memory, insert a breakpoint into the saved page of memory, alter an instruction pointer to the saved page of memory, and process the saved page of memory.

33. A system for debugging an executing service on a pipelined CPU architecture, the system comprising: a save stub to save a minimum state of the executing service; a restore stub to restore the state of the executing service; and a debugger to set a breakpoint at a last safe point, simulate instructions of the executing service from the last safe point to the breakpoint, and execute debug commands within the executing service.

34. The system of claim 33 wherein the restore stub further stores the simulated state of the executing service to the CPU, and resumes an original execution.

35. The system of claim 33 wherein the debugger is further operable to single step through a set of unsafe instructions, the set of unsafe instructions being between the last safe point and a next safe point.

36. A system for debugging an executing service on a pipelined CPU architecture, the system comprising: a processing engine to fetch a page of memory of the executing service into an instruction cache, and check for a checksum error within the page of memory; and if the executing service is set to reject the checksum error, a debugger operable to save the page of memory, insert a breakpoint into the saved page of memory, alter an instruction pointer to the saved page of memory, and process the saved page of memory.

37. The system of claim 36 wherein the debugger is further operable to set a breakpoint within an executing service, save a minimum state of the executing service, alter a program counter of the executing service, and execute debug commands within the executing service.
38. The system of claim 37 further comprising: a restore stub operable to restore the program counter of the executing service, and restore the state of the executing service.

39. A system for debugging an executing service on a pipelined CPU architecture, the system comprising: means for setting a breakpoint within an executing service; means for saving a minimum state of the executing service; means for altering a program counter of the executing service; means for restoring the program counter of the executing service; and means for restoring the state of the executing service.

40. A system for debugging an executing service on a pipelined CPU architecture, the system comprising: means for setting a breakpoint at a last safe point; means for saving a minimum state of the executing service; means for simulating instructions of the executing service from the last safe point to the breakpoint; means for executing debug commands within the executing service; and means for restoring the state of the executing service.

41. A system for debugging an executing service on a pipelined CPU architecture, the system comprising: means for fetching a page of memory of the executing service into an instruction cache; means for checking for a checksum error within the page of memory; and if the executing service is set to reject the checksum error, means for saving the page of memory, means for inserting a breakpoint into the saved page of memory, means for altering an instruction pointer to the saved page of memory, and means for processing the saved page of memory.

42. A computer readable medium comprising instructions which, when executed on a processor, perform a method for debugging an executing service on a pipelined CPU architecture, comprising: setting a breakpoint within an executing service; saving a minimum state of the executing service; altering a program counter of the executing service; restoring the program counter of the executing service; and restoring the state of the executing service.

43. A computer readable medium comprising instructions which, when executed on a processor, perform a method for debugging an executing service on a pipelined CPU architecture, comprising: setting a breakpoint at a last safe point; saving a minimum state of the executing service; simulating instructions of the executing service from the last safe point to the breakpoint; executing debug commands within the executing service; and restoring the state of the executing service.

44. A computer readable medium comprising instructions which, when executed on a processor, perform a method for debugging an executing service on a pipelined CPU architecture, comprising: fetching a page of memory of the executing service into an instruction cache; checking for a checksum error within the page of memory; and if the executing service is set to reject the checksum error, saving the page of memory, inserting a breakpoint into the saved page of memory, altering an instruction pointer to the saved page of memory, and processing the saved page of memory.
MULTI-CHANNEL, MULTI-SERVICE DEBUG ON A PIPELINED CPU ARCHITECTURE

FIELD OF THE INVENTION

The present invention relates to interactive debugging and more specifically to interactive debugging in a multi-channel, multi-service environment on a pipelined CPU architecture without hardware interlocking.

BACKGROUND OF THE INVENTION

Traditionally, Digital Signal Processors (DSPs) have been used to run single channels, such as, for example, a single DS0 or time division multiplexed (TDM) slot, that handle single services, such as modem, vocoder, or packet processing. Multiple services or multiple channels require multiple DSPs, each running its own small executive program (small kernel) and application. The executive programs reserve some area in memory for application code. When applications need to be switched, these executive programs overlay this memory with the new application. Channels may take one of the following forms: one channel carried on a physical wire or wireless medium between systems (also referred to as a circuit); time division multiplexed (TDM) channels in which signals from several sources such as telephones and computers are merged into a single stream of data and separated by a time interval; and frequency division multiplexed (FDM) channels in which signals from many sources are transmitted over a single cable by modulating each signal on a carrier at a different frequency.

Recent advances in processing capacity now allow a single chip to run multiple channels. With this increase in capacity has come a desire to run different services simultaneously and to switch between services. A current method to implement multiple services or multiple channels involves writing all control, overlay, and task-switching code for each service or channel. This requirement causes additional engineering overhead for development and debugging of the applications. In addition, not all services may fit into the memory available to the DSP, and the services must be swapped in from the host system. This swapping, or overlaying, adds significant complexity to the implementation of the DSP services. The extra development activity consumes DSP application development time.

The fact that DSPs have a single thread of control creates problems for developing and debugging in the multi-channel, multi-service environment. Typically, debugging an application on a single chip stops all other applications and channels running on the chip. If the chip is running, real-time diagnostics on a channel or service cannot be obtained without interfering with the operation of the other channels and services. In addition, a debugging system typically needs to have direct access to the chip being diagnosed. That is, a conventional debugging system uses a special development board or a physical debug interface (such as a Joint Test Action Group (JTAG) interface) to provide debugging access. This makes debugging in a production environment an inflexible and cumbersome process.

Debugging optimized code developed on pipelined architectures without hardware interlocking is rather difficult, as the pipelines typically have bypass paths that allow instructions to use values before they have flowed through the pipeline. Debuggers rarely have access to these bypass paths, making it difficult for a debugger to save and restore the pipeline. This adds complexity to the debugging process.

SUMMARY OF THE INVENTION

A method and system for debugging an executing service on a pipelined CPU architecture are described.
In one embodiment, a breakpoint within an executing service is set and a minimum state of the executing service is saved. In addition, a program counter of the executing service is altered. The program counter is restored and the state of the executing service is restored.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements.

Figure 1 is a system architecture of one embodiment for a multi-channel, multi-service system;

Figure 2 is a block diagram of one embodiment for a processing chip of Figure 1;

Figure 3 is a block diagram of one embodiment for multiple sockets/services within a processing chip;

Figure 4 is an exemplary diagram of channel sockets within the multi-channel, multi-service system of Figure 1;

Figure 5a is a block diagram of one embodiment for an interactive debugging system;

Figure 5b is a block diagram of one embodiment for an interactive debugging system operating over a network;

Figure 6 is a block diagram of another embodiment for a multi-channel, multi-service system;

Figures 7-9 are exemplary optimized code fragments;

Figure 10 is a block diagram of one embodiment for a minimum buffer basic functional unit state of the system of Figure 1;

Figure 11 is a flow diagram of one embodiment for debugging optimized code;

Figure 12 is a flow diagram of one embodiment for debugging optimized code using safe points; and

Figure 13 is a flow diagram of one embodiment for processing breakpoints in a multi-channel, multi-service environment.

DETAILED DESCRIPTION

A method and system for debugging an executing service on a pipelined CPU architecture without hardware interlocks are described. In one embodiment, a breakpoint within an executing service is set and a minimum state of the executing service is saved. In addition, a program counter of the executing service is altered. The program counter is restored and the state of the executing service is restored.

In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.

Some portions of the detailed descriptions that follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.

Figure 1 is a system architecture of one embodiment for a multi-channel, multi-service system 100. Referring to Figure 1, system element 102 is connected via system bus 104 and bridge 106 to a plurality of processing chips 108, 110, 112, 114. In addition, bridge 106 is connected to buffer memory 116. System element 102 may be another bridge configuration or other suitable component. Bridge 106 is connected via bus 118 to the processing chips 108-114. In one embodiment, processing chips 108-114 are connected via bus 120 to time division multiplexing (TDM) interface 122. In other embodiments, chips 108-114 may be connected to a digital signal 0 (DS0) interface or other applicable interface. In one embodiment, TDM interface 122 is connected to a number of modules and ports installed on the TDM bus 124. In addition, TDM interface 122 may optionally be connected to TDM signaling interface 126.

TDM is a base-band technology in which individual channels of data or voice are interleaved into a single stream of bits (or framed bits) on a communications channel. Each input channel receives an interleave time segment in order that all channels equally share the medium that is used for transmission. If a channel has nothing to send, the slot is still dedicated to the channel and remains empty.
In one embodiment, an operating system running within multi-channel, multi-service system 100 supports telecommunication and data communication applications. These applications involve running multiple channels of protocol stacks built from multiple services. Multi-channel, multi-service system 100 enables the dynamic configuration of services within the embedded telecommunication and data communication environment. In addition, the operating system automatically defines the allocation of resources for the channels within system 100.

Figure 2 is a block diagram of one embodiment for a processing chip 108. Each processing chip 108 contains clusters 202 and main processor 204. Each cluster 202 contains a cluster processor 208 and a number of processing engines (PEs) 210. Main processor 204 is configured to perform all control code and operations, including receiving control messages from host 102 and allocating channels to the various clusters 202. Processing chip 108 also includes a shared static random access memory (shared SRAM) 206. Shared SRAM 206 may be accessed directly by all the cluster processors 208 and main processor 204. An instruction store contained within the PEs 210 can also access shared SRAM 206. Shared SRAM 206 is used for storing operating system and application code as well as hosting the data for code running on main processor 204.

Each cluster 202 contains cluster SRAM 212. Cluster SRAM 212 is responsible for maintaining channel data running on each individual cluster 202. Cluster SRAM 212 includes I/O buffers and programming stacks. The operating system of system 100 uses the hardware to enforce memory protection to prevent a channel from inadvertently corrupting another channel's data or code. External dynamic random access memory (DRAM) 214 may be used for application data too large to fit in the on-chip cluster SRAM 212 or shared SRAM 206 and may be used as a swap area for application code.

Each processing chip 108 includes two line side ports 216 and two bus ports 218. These ports are used for packet side data and control transport. In addition, host port 220 is used to communicate with the host 102 and is accessible only from main processor 204, and serial boot port 222 is used to send the boot stream to the chip.

Figure 3 is a block diagram of another embodiment for a portion of a multi-channel, multi-service system 100. Referring to Figure 3, service 302 is a self-contained set of instructions that has data input/output, control, and a defined interface. Service 302 performs defined processing upon a certain amount and a certain format of data. In addition, service 302 emits a certain amount and a certain format of data. In an alternate embodiment, service 302 may process data in a bidirectional manner. Service stack 304 is a linked set of services 302 that provides a larger processing unit. Service stack 304 is a unique, ordered collection of services 302, such as, for example, echo cancellation services, tone detection services, and voice conferencing services. The services 302 within the service stack 304 are processed in order. Socket 306 is a virtual construct that provides a set of services 302 in the form of a service stack 304. The operating system processes services 302 that are encapsulated in socket 306, including connecting the line and/or packet data flow. Processing within socket 306 is data driven. That is, services 302 are invoked by sockets 306 only after the required data has arrived at socket 306.
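A service stack of this kind can be pictured as an ordered, linked set of processing elements with a uniform interface. The C sketch below is a hypothetical rendering of that idea; the type and function names are invented, and the text does not specify the actual interfaces at this level.

    #include <stddef.h>

    /* Hypothetical service with a uniform data-in/data-out interface. */
    struct service {
        const char *name;
        /* Processes 'len' bytes in 'buf' in place; returns bytes produced. */
        size_t (*process)(void *buf, size_t len);
        struct service *next;  /* next service in the stack */
    };

    /* Run a frame through every service in the stack, in order, mirroring
     * the in-order processing of services within a service stack. */
    static size_t run_service_stack(struct service *stack, void *buf, size_t len)
    {
        for (struct service *s = stack; s != NULL; s = s->next)
            len = s->process(buf, len);
        return len;
    }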
In one embodiment, applications may build protocol stacks by installing a service stack 304 into a socket 306. Services 302, service stacks 304, and sockets 306 are allocated and de-allocated as required by system 100.

Figure 4 is an exemplary diagram of channel sockets (CSs) 430 (422, 424, 426) within system 100. CSs 430 are specialized sockets 306 that direct the flow of information through the system 100 between two or more devices or end points 402, 404, 406, 408. End points may be, for example, physical devices. CS 430 is a socket 306 that accepts a service stack 304 and processes channel data. CS 430 connects any line side slot or bus channel on one end of CS 430 to any other line side slot or bus channel on the opposite end of CS 430. CS 430 is defined by external, physical interface points and provides the ability to process the service stack 304. Information may flow from a physical end point 402 via connection 418 to CS 424. The information is processed by services 302 within CS 424 and is transferred via connection 420 to end point 406. The operating system may dynamically change the flow of information through different CSs 430 depending upon the needs of the end points 402-408. For example, data may initially be set to flow from end point 404 via connection 410 through CS 422 and via connection 412 to end point 408. However, if service stack 304 within CS 422 is incompatible with the data, CS 422 notifies the operating system to break the flow and redirect the information. The operating system then redirects the flow to an existing CS 430 with the proper service stack 304 or creates a new CS 430. Referring to Figure 4, the operating system may redirect the flow from end point 404 to end point 408 through connection 414, CS 426, and connection 416. In addition, the operating system may replace the service stack in CS 422 with another stack compatible with the data. A CS 430 is defined by the external, physical interface end points 402, 404, 406, and 408 and the data flowing through the CS 430. Each end point 402-408 may be a different physical device or the same physical interface or device. CS 422 services may perform a conversion of data. The CS 430 mechanism allows a service stack 304 to be built into the information flow, in which services 302 may direct or process the data as it flows through the system. For example, if a first service outputs a 40 byte data frame and a second service uses an 80 byte frame, in one embodiment, the second service waits until the first service outputs enough data for the second service to process. In an alternate embodiment, the first service delays sending data to the second service until it accumulates enough data. Services 302 are independent modules and are standalone plug-ins. Thus, in one embodiment, services 302 may be dynamically downloaded into shared SRAM 206 in real time to build CSs 430 as required by the data. Applications may be written without regard for particular input/output channels or physical interfaces. The operating system is in charge of dynamically allocating and de-allocating sockets and connecting input/output components. Thus, the CS 430 mechanism provides single channel programming with multiple channel execution. In addition, an application may be written to provide flow of information between end points 402-408 independent of the type of the operating system and independent of the type of data being processed. CSs 430 are independent of both the operating system and the hardware configuration.
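The 40-byte/80-byte example above amounts to a small accumulation buffer between the two services. Below is a minimal C sketch; the frame sizes come from the example, while everything else (names, callback style) is an assumption for illustration.

```c
#include <stdio.h>
#include <string.h>

#define IN_FRAME  40 /* frame size produced by the first service */
#define OUT_FRAME 80 /* frame size consumed by the second service */

static unsigned char accum[OUT_FRAME];
static size_t accum_len;

/* Called with each 40-byte frame; invokes the consumer once 80 bytes exist. */
static void adapt(const unsigned char *in, void (*consume)(const unsigned char *))
{
    memcpy(accum + accum_len, in, IN_FRAME);
    accum_len += IN_FRAME;
    if (accum_len == OUT_FRAME) {
        consume(accum);
        accum_len = 0; /* ready to accumulate the next 80 bytes */
    }
}

static void consume80(const unsigned char *frame)
{
    (void)frame;
    puts("processed one 80-byte frame");
}

int main(void)
{
    unsigned char frame[IN_FRAME] = {0};
    adapt(frame, consume80); /* buffered: not yet enough data */
    adapt(frame, consume80); /* second 40 bytes arrive: consumer runs */
    return 0;
}
```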
The mechanism also relieves applications of the management of channels and places the management into the operating system, thus producing channel-independent applications. In addition, the CS 430 mechanism allows the applications and services 302 to be platform independent. In one embodiment, the CS 430 mechanism is used in debugging of applications and services. Since services may be loaded dynamically, the user may choose not to have the debugger in the system if there is no need for debugging operations.

Figure 5a is a block diagram of one embodiment for an interactive debugging system. Referring to Figure 5a, debugging system 500 includes debug core 520, graphical user interface (GUI) 510, and abstract machine interface (AMI) 530. Debug core 520 is coupled to GUI 510 via a text-based bi-directional interface 505. GUI 510 provides an application developer with a simple and convenient way of debugging an application or a service. The tools provided by GUI 510 may include, for example, top-level menus, context menus, windows, dialog boxes, and setting of user preferences. Text-based interface 505 provides two-way communication between debug core 520 and GUI 510. In one embodiment, GUI 510 may receive a command from the application developer and send it to debug core 520 using text-based interface 505. Debug core 520, in turn, may send data to GUI 510 using text-based interface 505. GUI 510 may then display this data to the application developer in various ways. For example, debug core 520 may pass information about currently running sockets and services to GUI 510. GUI 510 may then display this information, allow the application developer to select a socket or service for debugging, and transfer data identifying the selected socket or service back to debug core 520.

Debug core 520 is coupled to AMI 530 via text-based bi-directional interface 525. AMI 530 directly communicates with chip 550 or simulator 540. Chip 550 represents processing chips 108-114. Simulator 540 may be used to perform diagnostics of an application or a service in a simulated environment. Simulator 540 allows loading and running an application as if it were running on the chip itself. All the features and capabilities inherent in chip 550 are available through simulator 540. In one embodiment, AMI 530 provides an abstract view of multi-channel, multi-service system 100 at the hardware and operating system level. AMI 530 may work with a single target chip or simulator at a time and may view the target chip or simulator as a single entity. AMI 530 allows debug core 520 to provide an isolated debugging environment for each socket or service. For example, debug core 520 may maintain a separate context (e.g., breakpoints, watchpoints, and variable displays) for each socket or service. In one embodiment, debug core 520 uses AMI 530 to provide an application developer with the ability to control all possible debugging and diagnostic activity on a target socket or service. Text-based interface 525 enables two-way communication between debug core 520 and AMI 530. The use of text-based interface 525 simplifies the development process by allowing debug core 520 and AMI 530 to be designed as independent modules. In addition, text-based interface 525 allows running debug core 520 and AMI 530 as standalone applications. Text-based interface 525 may also improve the quality assurance (QA) process by providing a QA user with the ability to enter a command and get the response back in an automated environment.
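The text-based interfaces 505 and 525 suggest a simple one-command-in, one-response-out protocol, which is also what makes the automated QA use possible. The following C sketch is a hypothetical illustration of that style; the command vocabulary is invented for the example and is not the actual interface.

```c
#include <stdio.h>
#include <string.h>

/* Handle a single text command and write a text response (hypothetical
 * commands, chosen only to illustrate the one-command/one-response style). */
static void handle_command(const char *cmd, char *resp, size_t resp_len)
{
    unsigned addr;
    if (sscanf(cmd, "break %x", &addr) == 1)
        snprintf(resp, resp_len, "ok: breakpoint set at 0x%x", addr);
    else if (strcmp(cmd, "list-sockets") == 0)
        snprintf(resp, resp_len, "ok: socket 0 (echo), socket 1 (tone)");
    else
        snprintf(resp, resp_len, "error: unknown command '%s'", cmd);
}

int main(void)
{
    char resp[128];
    handle_command("break 1f4", resp, sizeof resp);
    puts(resp); /* a GUI or a QA script can consume this text directly */
    return 0;
}
```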
In one embodiment, debugging system 500 may operate in various modes. For example, a simulator direct mode (Simulator Direct) allows debug core 520 to communicate with simulator 540 using AMI 530. This mode may provide significant visibility into the PEs 210 and the state of the system 108, but may not be aware of sockets and other high-level operating system constructs. Simulator Direct provides full control over the simulator. Hence, debug core 520 may obtain all performance analysis results that are supported by the simulator. In one embodiment, AMI 530 may analyze the run-time state of system 108 to determine information about sockets and services directly from the data structures of the operating system.

Debugging system 500 may also operate in an in-circuit emulator mode (ICE). ICE allows debug core 520 to communicate with chip 550 through AMI 530 using an access interface of chip 550 such as, for example, the Joint Test Access Group (JTAG) interface. ICE supports debugging of the operating system by controlling the cluster processors 208. ICE does not provide access to PEs 210 and is not capable of controlling or accessing sockets.

Another exemplary mode is an application debug mode (Application Debug). Application Debug may work with either simulator 540 or chip 550. Application Debug relies on the assistance of the operating system to provide access to system resources (e.g., PEs 210 and cluster processors 208). Application Debug is capable of controlling and accessing sockets and allows debug core 520 to maintain information about running sockets and services. Debug core 520 may communicate the information to GUI 510. GUI 510 may then present this information to the application developer for selecting a target construct on which to perform debugging operations. It will be recognized by one skilled in the art that the modes described above are merely exemplary and that a wide variety of modes other than those discussed above may be used by debugging system 500 without loss of generality.

Figure 5b is a block diagram of one embodiment for an interactive debugging system operating over a network. Referring to Figure 5b, host computer system 560 includes a debugger which communicates with computer system 570 over a network connection 565. In one embodiment, host 560 contains debug core 520 and GUI 510. Network connection 565 may include, for example, a local area network and a wide area network. Computer system 570 includes chips 576 which communicate over bus 572 via interface 574 with host 560. In one embodiment, bus 572 is a peripheral component interconnect (PCI) bus with host 560. In alternate embodiments, bus 572 may be an industry standard architecture (ISA) bus, a VESA local bus, or a micro channel architecture (MCA) bus. Interface 574 enables communication between chips 576 and bus 572. In one embodiment, the debugger may operate in the ICE debugging mode. In this embodiment, interface 574 communicates commands from host 560 to cluster processors of chips 576 and then communicates the resulting data from the chips to host 560. Alternatively, the debugger may operate in Application Debug mode. In Application Debug mode, a debugging request from host 560 is sent over network 565 to computer system 570. Interface 574 communicates the request directly to chip 576. The operating system on chip 576 interprets the request into commands
(e.g., set breakpoints or watchpoints, stop the execution, read memory, get status, or display a variable), performs these commands, and generates the appropriate response. The response is then transferred back to host 560 over network connection 565. Network connection 565 may be packet-based (e.g., TCP/IP), cell-based (e.g., ATM), or serial-based (e.g., SpiceBus or Utopia). In one embodiment, in a multi-channel, multi-service environment, the operating system on chip 576 may transfer information about running services to host 560 over network connection 565 and allow the debugger on host 560 to operate on an individual service or on a set of services.

Figure 6 is a block diagram of another embodiment for a multi-channel, multi-service debugging system 600. Referring to Figure 6, system 600 may have a number of processing elements (or constructs) (610, 660) running within a cluster 202. In one embodiment, executing service 610 may run a real-time application and debugger 660 may run a control task or an operating system task. A number of executing services 610 may be running within basic functional unit (PE) 670. PE 670 includes save stub 662 and restore stub 664. Save stub 662 is an executable program written to save the minimum state of construct 610. Restore stub 664 restores the minimum state from memory 620. The minimum PE state (MPES) is the minimum set of executing service 610 state registers which are saved and restored in order to halt service 610 execution and restart it again without altering the functional behavior of service 610. Debugger 660 runs on a processor other than the PE 670. Debugger 660 interacts with save stub 662 and restore stub 664 to read and/or modify service 610 state information and control service 610 execution. In one embodiment, executing service 610 has independent local memory 620 and debugger 660 has independent local memory 640. In one embodiment, executing service 610 and debugger 660 may have shared memory 630, in which separate portions of memory 630 may be assigned to executing service 610 and debugger 660, respectively. Within system 600, executing service 610 has a state 650 which contains the information for running service 610. In one embodiment, debugger 660 may have the capability of accessing data related to the operation of service 610. In addition, save stub 662 and restore stub 664 access, save, and restore certain information from state 650 during a breakpoint operation. Debugger 660 may communicate with host 102, or host 560 over a network, and perform the commands received from host 102 or 560 in order to effectuate a breakpoint or watchpoint. In one embodiment, debugger 660 may access the data related to the operation of executing service 610 without affecting the real-time environment of executing service 610. For example, debugger 660 may be able to look at ("snoop" on) local memory 620, state 650, and the portion of shared memory 630 which is assigned to executing service 610.
In addition, debugger 660 may directly access the following state information of construct 610 without altering the state of construct 610: program counter, next program counter, PC delay slot enable signal, page numbers, tags, valid bit, fetch bit, and LRU information, memory contents, breakpoint and/or watchpoint registers and enable bits, construct 610 status, configuration contents, address unit configuration contents that may not be read by instructions, and two performance registers and their controls. In one embodiment, the debugging process may directly intercede with the real-time environment of executing service 610. Debugger 660 may, for example, modify state 650 to set a breakpoint register or a watchpoint register, request a notification when target construct 610 hits a breakpoint, and stop the operation of executing service 610. Subsequently, debugger 660 may restart the operation of executing service 610 upon receiving a command from host 102 or 560.

Figures 7-9 are exemplary optimized code fragments executed by services 306 within system 100. Referring to Figure 7, instructions 1, 2, and 3 are load-from-main-memory instructions. Within system 100, these load-from-memory instructions require multiple pipeline cycles to complete from the time they are initially executed until the data is available in the register. Thus, line 1 is executed and requires a certain number of pipeline cycles before the value loaded into register 3 is available. In one embodiment, main memory loads require three delay slots (pipeline cycles) between the load instruction and an instruction that uses the returned value. Thus, the load of register 3 in line 1 is not available at line 4 for the add of registers 3 and 4 into register 6. Line 4 uses the old values of registers 3 and 4 (those values that existed as a result of operations executed prior to line 1) to add into register 6. In one embodiment, an instruction at line 5 could use the value returned from memory as a result of the load instruction at line 1. The pipeline may be designed such that the instruction at line 5 receives the "new" value of register 3 via a bypass path before register 3 is actually written in the register file. A debugger may not have visibility of the bypass, thus making it difficult to ascertain the value of register 3 at line 5.

Referring to Figure 8, a typical code fragment of optimized code is shown in which values in memory pointed to by register 1 are loaded into register 3 in lines 1-4. In lines 5-8, the resulting register 3 values are stored back into the memory locations pointed to by register 1. In this code fragment, the value loaded in line 1 is available for the store operation at line 5; the load operation of line 2 is available for the store operation of line 6; the load operation of line 3 is available for the store operation of line 7; and the load operation of line 4 is available for the store operation of line 8. As noted above, these "new" values of register 3 may be available via bypass paths buried within the CPU micro-architecture, thus making external debugging difficult, as the debugger cannot determine the value of register 3 until it is written into the "debugger visible" register file.

Figure 9 is another exemplary optimized code fragment. In the multi-channel, multi-service system 100, if a breakpoint is inserted at line 5, the debugger 660 needs to store the old values of registers 5 and 6 that existed prior to the execution of lines 3 and 4.
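Before turning to the breakpoint mechanism, the delay-slot behavior of Figure 7 can be illustrated with a toy pipeline model in which a load's write-back becomes architecturally visible only three instructions after issue, so an add at "line 4" still reads the old register value. This sketch is illustrative only; it models neither the actual PE pipeline nor its bypass paths.

```c
#include <stdio.h>

#define LOAD_DELAY 3 /* per the text: three delay slots after a load */

static int reg[8];
static int pending_reg = -1; /* register with an in-flight load, if any */
static int pending_val;
static int pending_cycles;

/* Advance the pipeline one cycle; complete the write-back when due. */
static void tick(void)
{
    if (pending_reg >= 0 && --pending_cycles == 0) {
        reg[pending_reg] = pending_val;
        pending_reg = -1;
    }
}

static void load(int r, int val) /* issue a load; the result lands later */
{
    pending_reg = r;
    pending_val = val;
    pending_cycles = LOAD_DELAY;
}

static void nop(void) { tick(); }

static void add(int rd, int ra, int rb) /* reads happen before the cycle ends */
{
    reg[rd] = reg[ra] + reg[rb];
    tick();
}

int main(void)
{
    reg[3] = 100; reg[4] = 5;  /* "old" values from before line 1 */
    load(3, 7);                /* line 1: load r3 from memory */
    nop(); nop();              /* lines 2-3 */
    add(6, 3, 4);              /* line 4: still reads the old r3 (100) */
    printf("r6 = %d (computed with old r3)\n", reg[6]);       /* 105 */
    printf("r3 = %d (new value visible at line 5)\n", reg[3]); /* 7 */
    return 0;
}
```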
After a breakpoint is initiated, PE 670 flushes or clears all information in the pipeline. The old values in registers in transition need to be saved in order to recreate the pipeline after control is returned to PE 670. These old values are needed to reinitialize the pipeline in order for the service executing the code of Figure 9 to properly add the correct values of registers 5 and 6 into register 7. The breakpoint mechanism will be described below. Registers 5 and 6 are termed unstable registers and line 5 is termed an unstable register point. An unstable register point is a point in a code fragment where a service 610 instruction is using a register which is in the process of changing, but the new register value will not be available until one or more cycles later. Debugger 660 reads the scalar registers (registers 5 and 6) and creates a pipeline restore array in the MPES prior to calling save stub 662. When debugger 660 is ready to reinitialize service 610 after debug operations have been executed, debugger 660 swaps the old values of the registers with the new values stored in the MPES. After execution of restore stub 664, the pipeline restore array is filled with the new values of the unstable registers and then a series of stack "pop" operations are executed to refill the pipeline. After execution of the four "pop" operations, service 610 continues normal operation. The pipeline restore array is a 16-byte array in the MPES. In one embodiment, the pipeline restore array contains the three potentially unstable scalar registers followed by the value of the stack pointer at the time of the breakpoint.

Figure 10 is a block diagram of one embodiment for a minimum buffer PE state (MPES) 1000. MPES 1000 is the minimum amount of information saved and restored by debugger 660 to allow service 610 to continue execution following a breakpoint without affecting the functional behavior of the code executing on service 610. Breakpoints are implemented in a manner such that they do not negatively affect program behavior except for real-time timing issues. MPES 1000 is stored in a cluster memory location accessible to the service and the OS. Referring to Figure 10, MPES 1000 includes scalar registers 1002, predicate registers 1004, vector registers 1006, least significant 32 bits of accumulator 0 (1008), most significant eight bits of accumulator 0 (1010), least significant 32 bits of accumulator 1 (1012), most significant eight bits of accumulator 1 (1014), least significant 32 bits of multiplier output register 1016, most significant one bit of multiplier output register 1018, loop count value 1020, vector count value 1022, exponent register 1024, configuration registers 1026, vector unit VREG A and VREG B registers 1028, VA0 through VA3 states 1030, MAU state 1032, and old values of pipeline registers 1034 (potentially unstable scalar registers). Save stub 662 is responsible for saving vector registers 1006, the least and most significant bits of accumulator 0 and accumulator 1 (1008-1014), the least significant and most significant bits of the multiplier output register (1016, 1018), loop count value 1020, vector count value 1022, exponent register 1024, vector unit VREG A and VREG B registers 1028, VA0-VA3 states 1030, and MAU state 1032. Debugger 660 is responsible for saving scalar registers 1002, predicate registers 1004, configuration registers 1026, and pipeline registers 1034. Debugger 660 is also responsible for restoring pipeline registers 1034.
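The Figure 10 layout can be pictured as a C structure. The sketch below follows the field widths given in the text (for example, 32 + 8 bits per accumulator and a 16-byte pipeline restore array of three unstable registers plus the stack pointer); field types and array sizes not stated in the text are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_SCALARS 16 /* the text refers to 16 scalar registers */

/* Hypothetical rendering of the Figure 10 MPES layout. */
typedef struct mpes {
    uint32_t scalar[NUM_SCALARS];      /* 1002: saved/restored by debugger 660 */
    uint32_t predicate;                /* 1004: saved by debugger 660 */
    uint32_t vector[8];                /* 1006: saved by save stub 662 */
    uint32_t acc0_lo; uint8_t acc0_hi; /* 1008/1010: low 32 + high 8 bits */
    uint32_t acc1_lo; uint8_t acc1_hi; /* 1012/1014 */
    uint32_t mul_lo;  uint8_t mul_hi;  /* 1016/1018: low 32 bits + high bit */
    uint32_t loop_count;               /* 1020 */
    uint32_t vector_count;             /* 1022 */
    uint32_t exponent;                 /* 1024 */
    uint32_t config[4];                /* 1026: saved by debugger 660 */
    uint32_t vreg_a, vreg_b;           /* 1028 */
    uint32_t va_state[4];              /* 1030: VA0-VA3 */
    uint32_t mau_state;                /* 1032 */
    /* 1034: the 16-byte pipeline restore array -- the three potentially
     * unstable scalar registers followed by the breakpoint-time SP. */
    uint32_t pipeline_restore[4];
} mpes_t;

int main(void)
{
    printf("sketch MPES frame: %zu bytes\n", sizeof(mpes_t));
    return 0;
}
```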
Restore stub 664 restores MPES 1000, but leaves the stack pointer pointing to the pipeline registers array. Debugger 660 single-steps (executes) four instructions which "pop" the pipeline registers off the stack in four cycles. After these four instructions have been executed, the target 610 stack pointer will point to the desired location. If debugger 660 is not performing a debugger-invoked function call, the stack pointer will be equal to the value that it contained at the time of the breakpoint. Service 610 cannot save the old state of the scalar registers 1002 and the predicate registers 1004 in a single cycle. Thus, debugger 660 must save either (or both) the predicate registers 1004 or the scalar registers 1002. In one embodiment, both predicate registers 1004 and scalar registers 1002 are saved by debugger 660. In one embodiment, debugger 660 saves the configuration registers 1026, as target 610 has no instruction capable of saving its own configuration registers 1026. Debugger 660 saves pipeline registers 1034, as only debugger 660 knows which set of three potentially unstable registers must be saved for a given breakpoint. Target 610 does not know which three potentially unstable registers are in transition. Save stub 662, in one embodiment, may be written to handle all possible permutations of these three unstable register loads.

Figure 11 is a flow diagram of one embodiment for debugging optimized code. Initially, at processing block 1102, debugger 660 sets a breakpoint within executing service 610. Debugger 660 locates an instruction at which to insert the breakpoint and sets the breakpoint at that location. In one embodiment, debugger 660 starts the PE and waits for the PE to halt at the breakpoint location. After the PE reaches a breakpoint, debugger 660 waits for the PE memory fetches and configuration loads to complete. Debugger 660 then removes the breakpoint from executing service 610.

At processing block 1104, debugger 660 saves the state of PE 670. Debugger 660 saves PE 670's scalar registers, predicate registers, and configuration registers. In one embodiment, debugger 660 determines if any of the scalar registers are in transition. When a breakpoint occurs, there may be several scalar register write-backs in the pipeline waiting to be executed or to finish execution. Debugger 660 cannot access the pipeline directly, but must flush the pipeline, reading the scalar registers after each cycle. In one embodiment, there may be up to three unstable scalar registers in the pipeline at any time. In one embodiment, there is a three-cycle load delay for scalar registers. However, in this embodiment, an additional two-cycle latency is also required in order to complete the flush of the pipeline. In one embodiment, debugger 660 single-steps two PE 670 instructions before it may safely read the "old value" of the unstable registers. Debugger 660 saves the values contained within the unstable registers and performs a series of no-op instructions to flush the pipeline. Thus, in this embodiment, a total of five instructions are required to flush the pipeline and store the register values. Debugger 660 may record any predicate changes following the first no-op. Debugger 660 records up to three scalar register changes following the third, fourth, and fifth no-ops.
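The flush-and-record sequence of processing block 1104 amounts to single-stepping no-ops and diffing the scalar register file after each cycle to catch in-flight write-backs. The sketch below is a hypothetical debugger-side rendering; pe_single_step_nop() and pe_read_scalars() are invented primitives, simulated here so the example runs.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define NUM_SCALARS 16
#define FLUSH_STEPS 5 /* per the text: five instructions drain the pipeline */

/* --- hypothetical PE access primitives, simulated so the sketch runs --- */
static uint32_t sim_regs[NUM_SCALARS];
static int sim_cycle;

static void pe_single_step_nop(void) /* stand-in for stepping one no-op */
{
    if (++sim_cycle == 3)
        sim_regs[5] = 0xABCD; /* simulate a write-back landing mid-flush */
}

static void pe_read_scalars(uint32_t r[NUM_SCALARS])
{
    memcpy(r, sim_regs, sizeof sim_regs);
}

/* Flush the pipeline with no-ops, diffing the register file each cycle
 * to record the pre-write-back ("old") value of any unstable register. */
static int record_unstable(uint32_t old_val[NUM_SCALARS], int changed[NUM_SCALARS])
{
    uint32_t prev[NUM_SCALARS], cur[NUM_SCALARS];
    int n = 0;

    pe_read_scalars(prev);
    for (int step = 0; step < FLUSH_STEPS; step++) {
        pe_single_step_nop();
        pe_read_scalars(cur);
        for (int r = 0; r < NUM_SCALARS; r++) {
            if (cur[r] != prev[r] && !changed[r]) {
                old_val[r] = prev[r]; /* value before the in-flight write-back */
                changed[r] = 1;
                n++;
            }
        }
        memcpy(prev, cur, sizeof prev);
    }
    return n; /* at most three unstable scalar registers, per the text */
}

int main(void)
{
    uint32_t old_val[NUM_SCALARS] = {0};
    int changed[NUM_SCALARS] = {0};
    int n = record_unstable(old_val, changed);
    printf("%d unstable register(s); old r5 = %u\n", n, old_val[5]);
    return 0;
}
```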
If the breakpoint occurred on an instruction that uses old values of scalar registers (uses values of registers as they existed before the values were changed by instructions still in the pipeline), debugger 660 executes a sequence of instructions which will record the old values of these registers. In one embodiment, two such registers may be in transition at any point. Debugger 660 places the correct values of these scalar registers into the MPES before calling the restore step below. In one embodiment, debugger 660 executes four instructions to record the values of these scalar registers, which include two no-op instructions. In one embodiment, only two scalar registers that depend upon old values may be within the pipeline at any time. After the four instructions are executed, the original values of the two scalar registers are saved in debugger 660 registers. Debugger 660 uses these values to restore the pipeline after debug operations are completed.

At processing block 1106, debugger 660 alters the program counter of the PE to point to save stub 662. Debugger 660 waits for PE 670 to execute the breakpoint instruction. After PE 670 executes the breakpoint instruction, debugger 660 stores the saved configuration registers 1026 into MPES stack frame 1000. In one embodiment, debugger 660 factors the 33-bit multiplier output value in the MPES frame 1000 into a pair of fields and stores them into multiplier output register fields 1016 and 1018. Debugger 660 determines the correct value of all 16 scalar registers and stores their state into scalar registers 1002. These values are what is expected by the very first service instruction to be executed after control returns to service 610. In most cases, this will be the instruction on which the breakpoint was originally set. Registers to be loaded into the pipeline before returning to the service are stored in the MPES 1000. The values stored are the "new values" of the unstable registers which were retrieved from the MPES and saved in the previous processing block. The three registers must be stored in the correct order to recreate the pipeline properly. Debugger 660 stores the value of the stack pointer at the time of the breakpoint within the MPES. This value will allow the initial stack pointer to be restored properly after refilling the pipeline.

At processing block 1108, debugger 660 optionally executes debug commands and optionally changes one or more items in the MPES. Alternatively, MPES 1000 information may be transferred to host 560 for display. Items changed may be, for example, scalar registers, vector registers, or the like. The debug commands are issued from debugger 660.

At processing block 1110, debugger 660 alters the program counter of PE 670 to point to the restore stub of debugger 660. Debugger 660 begins PE 670 execution and waits for the PE to execute the breakpoint instruction.

At processing block 1112, the debugger restores state 650 to the original state. Debugger 660 processes restore stub 664 to restore the state of PE 670. Restore stub 664 restores everything in the MPES except for the pipeline registers. If the breakpoint originally occurred on an instruction that does not make use of old values of scalar registers, the host debugger must single-step through instructions to restore the values of the three possibly unstable scalar registers.
After these registers have been restored, the original pipeline at the time of the breakpoint for these possibly unstable registers will have been recreated and the stack pointer at the time of the breakpoint will have been restored. If the breakpoint occurred on an instruction that does make use of old values of scalar registers, debugger 660 restores these scalar registers, stores the original values of the remaining scalar registers into the MPES pipeline registers, and loads the pipeline from the stored registers. After the pipeline has been restored, debugger 660 alters the PE's program counter to point to the original breakpoint location and starts PE 670 execution.

Figure 12 is a flow diagram of one embodiment for debugging optimized code using safe points. Initially, at processing block 1202, debugger 660 attempts to set a breakpoint. If the debugger attempts to set the breakpoint at an unsafe location, in one embodiment, the debugger does not allow the breakpoint to be set at the unsafe location, but rather attempts to find the nearest safe location (prior to the desired location) at which to set the breakpoint. Referring to Figure 9, if a breakpoint is attempted to be set at line 5, debugger 660 will search back within the code to a point at which registers are not in transition within the pipeline. In the example of Figure 9, this safe point would be prior to line 3, as line 5 uses the "old values" of registers 5 and 6 in the addition.

At processing block 1204, debugger 660 locates the previous safe point within the instructions and sets the breakpoint at that location. Unsafe breakpoint locations are points in the instruction set where the host debugger must disallow breakpoints. After debugger 660 sets the breakpoint at a safe location, debugger 660 starts PE 670 and waits for PE 670 to execute the breakpoint instruction.

At processing block 1206, the debugger saves the state of the PE to a simulator. In one embodiment, the simulator is on a remote host. In one embodiment, the debugger saves PE registers and other state information in order to restore the PE to its state after debugging has occurred. Values saved are similar to those described in reference to Figure 11 above.

At processing block 1208, debugger 660 simulates the instructions from the safe point found at processing block 1204 to the next safe point past the breakpoint in the instruction code. In addition, the debugger may insert commands to debug the code as described above. Once debugger 660 has executed the code in the simulation, debugger 660 returns control to PE 670.

At processing block 1210, the debugger stores the simulated state to state 650. Operations are similar to those described in reference to Figure 11. After the debugger stores state 650, debugger 660 starts PE 670 execution at the breakpoint instruction.

Figure 13 is a flow diagram of one embodiment for processing breakpoints in a multi-channel, multi-service environment. Initially, at processing block 1302, PE 670 fetches a page of instruction code into memory 620 for execution. In a multi-channel, multi-service environment, multiple PEs may be executing the same set of instruction code for a given service 306. Within system 100, only one program memory exists for a given service 306. Each PE fetches a memory page into its own cache for processing. Thus, any breakpoint inserted into the instruction code would be executed by all PEs. In order to execute breakpoints for only a given PE, each PE computes a checksum for a fetched memory page as it is being fetched.
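The per-PE checksum test can be sketched as follows. The checksum algorithm and page size are assumptions for illustration (the text specifies neither); the point is that a page patched with a breakpoint instruction no longer matches its expected checksum.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_WORDS 256 /* assumed page size, for illustration only */

/* Simple additive checksum, computed while the page is fetched. */
static uint32_t page_checksum(const uint32_t *page, size_t words)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < words; i++)
        sum += page[i];
    return sum;
}

int main(void)
{
    static uint32_t page[PAGE_WORDS]; /* zero-initialized pristine page */
    uint32_t expected = page_checksum(page, PAGE_WORDS);

    page[10] = 0xDEADBEEF; /* a breakpoint instruction patched into the page */

    if (page_checksum(page, PAGE_WORDS) != expected)
        puts("checksum failed: page was modified (possible breakpoint)");
    else
        puts("checksum passed: execute page normally");
    return 0;
}
```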
At processing block 1304, after PE 670 checks the page of memory against its checksum, it is determined whether the checksum has passed or failed. If the checksum test has passed, PE 670 continues execution of the page of memory and eventually returns to processing block 1302 for fetching a next page of memory. However, if the checksum test fails, execution continues at processing block 1306.

At processing block 1306, it is determined whether PE 670 is to accept or reject the checksum error. Host debugger 660 may send commands to individual PEs to ignore checksum errors. If PE 670 has received a command to ignore checksum errors, processing continues at processing block 1302. However, if PE 670 has received a command to reject checksum errors, processing continues at processing block 1308.

At processing block 1308, debugger 660 copies the page of memory from the PE cache into a separate cache area. The separate cache area may be within debugger 660 or within PE 670. At processing block 1310, debugger 660 inserts the breakpoint into the saved memory page. Debugger 660 alters the program counter of PE 670 to point to the saved memory page and initiates execution of the PE within the saved memory at processing block 1312.

At processing block 1314, debugger 660 begins the processing of the saved memory page. The processing of the saved memory page is as in steps 1104 through 1112 of Figure 11. After PE 670 executes the altered page of memory, PE 670 will load a new page of memory at processing block 1302 and continue processing.

Within the processing of the multi-channel, multi-service environment, debugger 660 will process the breakpoints of all services running on a single processor. In an alternate embodiment, only one service will be running on a PE at any particular time.

Several variations in the implementation of the method for interactive debugging have been described. The specific arrangements and methods described here are illustrative of the principles of this invention. Numerous modifications in form and detail may be made by those skilled in the art without departing from the true spirit and scope of the invention. Although this invention has been shown in relation to a particular embodiment, it should not be considered so limited. Rather, it is limited only by the appended claims.
System and method for communicating between graphical programs executing on respective devices, e.g., a programmable hardware element (PHE) and a controller. The system includes a first node representing a first in, first out (FIFO) structure, and a second node providing a controller interface to the FIFO structure. A first portion of the FIFO is implemented on the PHE, and a second portion of the FIFO is implemented in memory of the controller. The first and second nodes are operable to be included respectively in first and second graphical programs, where the first graphical program is deployable to the PHE, where the second graphical program is deployable to the controller, and where the graphical programs communicate via the FIFO in cooperatively performing a specified task. The FIFO may implement a Direct Memory Access (DMA) FIFO, where at least part of a DMA controller is implemented on or coupled to the PHE.
WHAT IS CLAIMED IS: 1. A computer-implemented method for communicating between programs executing respectively on a controller and a programmable hardware element, the method comprising: creating a first graphical program in response to first user input, wherein the first graphical program comprises a first plurality of interconnected nodes that visually indicate functionality of the first graphical program, wherein the first graphical program includes a first node that represents a first in first out (FIFO) structure; creating a second graphical program in response to second user input, wherein the second graphical program comprises a second plurality of interconnected nodes that visually indicate functionality of the second graphical program, wherein the second graphical program includes a second node that provides an interface to the FIFO structure; deploying the first graphical program to the programmable hardware element; configuring at least a portion of the FIFO structure on the programmable hardware element; deploying the second graphical program to the controller; wherein the first graphical program and the second graphical program are executable to communicate via the FIFO structure to cooperatively perform a specified task. 2. The method of claim 1, further comprising: automatically generating at least a portion of data transfer logic in response to said including the first node and in accordance with configuration information for the FIFO structure, wherein the at least a portion of data transfer logic is deployable to the programmable hardware element to implement data transfer functionality for the FIFO structure. 3. The method of claim 2, further comprising: deploying the at least a portion of data transfer logic to the programmable hardware element, wherein said executing the first graphical program on the programmable hardware element further comprises executing the at least a portion of the data transfer logic to facilitate communications between the first and second graphical programs. 4. The method of claim 2, wherein the FIFO structure is a Direct Memory Access (DMA) FIFO; and wherein the data transfer logic comprises DMA logic. 5. The method of claim 4, wherein the DMA logic comprises at least a portion of a DMA controller. 6. The method of claim 1, wherein data are transferred between the first and second graphical programs via program instructions executed by the controller. 7. The method of claim 6, wherein the data are transferred between the first and second graphical programs via program instructions executed by the controller using one or more of: programmed I/O; or interrupt-driven I/O. 8. The method of claim 1, wherein said deploying the first graphical program to the programmable hardware element comprises: generating a hardware configuration program based on the first graphical program; and deploying the hardware configuration program on the programmable hardware element. 9. The method of claim 8, wherein the first node is configurable to specify one or more of: depth of the FIFO structure; direction of the FIFO structure, comprising one of: controller memory to programmable hardware element; and programmable hardware element to controller memory; and data type of the FIFO structure. 10. 
The method of claim 9, wherein the depth of the FIFO structure comprises: a hardware depth, comprising a depth of the first portion of the FIFO structure; a memory depth, comprising a depth of the second portion of the FIFO structure; and wherein the depth comprises the sum of the hardware depth and the memory depth. 11. The method of claim 10, wherein the hardware depth of the FIFO structure is configurable at compile time; and wherein the memory depth of the FIFO structure is configurable at run time. 12. The method of claim 9, wherein the FIFO structure comprises a front, from which data may be read, and a rear, to which data may be written; and wherein, if the direction of the FIFO structure is configured to be memory to programmable hardware element, the first portion of the FIFO structure includes the front of the FIFO structure and the second portion of the FIFO structure includes the rear of the FIFO structure, and if the direction of the FIFO structure is configured to be programmable hardware element to memory, the first portion of the FIFO structure includes the rear of the FIFO structure and the second portion of the FIFO structure includes the front of the FIFO structure. 13. The method of claim 1, wherein the second node is configurable to specify a desired function of the FIFO structure. 14. The method of claim 13, wherein the desired function of the FIFO structure comprises one or more of: read operations; write operations; start operations; stop operations; and configure operations. 15. The method of claim 14, wherein said specifying a desired function of the FIFO structure comprises: providing one or more selectable options for specifying the desired function of the FIFO structure; and receiving input selecting one of the one or more selectable options to specify the desired function of the FIFO structure; wherein, after said selecting, the second node is executable to invoke the desired function of the FIFO structure. 16. The method of claim 15, wherein at least one of the one or more selectable options specifies a first function that requires one or more corollary functions; and wherein, if the second node is configured to invoke the first function, the second node is executable to automatically invoke the one or more corollary functions in addition to the first function. 17. The method of claim 15, wherein providing the one or more selectable options for specifying the desired function of the FIFO structure comprises: determining the FIFO structure's configuration; and providing only options that are in accordance with the FIFO structure's configuration. 18. The method of claim 17, wherein said determining the FIFO structure's configuration comprises analyzing one or more of: edit time source code of the first node; and a compiled bit file generated from the source code of the first node. 19. The method of claim 17, wherein said determining the FIFO structure's configuration is performed by one or more of: edit time code for the second node; program code associated with the second node; and a development environment of the second graphical program. 20. 
The method of claim 13, wherein the second graphical program includes one or more additional second nodes, each operable to provide a respective additional controller interface to the FIFO structure, and wherein each additional second node is configurable to specify a respective desired function of the FIFO structure; wherein, after being configured, each additional second node is executable to invoke the respective desired function of the FIFO structure. 21. The method of claim 1, further comprising: configuring at least a portion of the FIFO structure on the controller. 22. A memory medium that stores program instructions for communicating between programs executing respectively on a controller and a programmable hardware element, wherein the program instructions are computer-executable to perform: creating a first graphical program in response to first user input, wherein the first graphical program comprises a first plurality of interconnected nodes that visually indicate functionality of the first graphical program, wherein the first graphical program includes a first node that represents a first in first out (FIFO) structure; creating a second graphical program in response to second user input, wherein the second graphical program comprises a second plurality of interconnected nodes that visually indicate functionality of the second graphical program, wherein the second graphical program includes a second node that provides an interface to the FIFO structure; deploying the first graphical program to the programmable hardware element; deploying the second graphical program to the controller; wherein the first graphical program and the second graphical program are executable to communicate via the FIFO structure to cooperatively perform a specified task. 23. The memory medium of claim 22, wherein the program instructions are further computer-executable to perform: automatically generating at least a portion of data transfer logic in response to said including the first node and in accordance with configuration information for the FIFO structure; and deploying the at least a portion of data transfer logic to the programmable hardware element to implement data transfer functionality for the FIFO structure. 24. The memory medium of claim 23, wherein the FIFO structure is a Direct Memory Access (DMA) FIFO; and wherein the data transfer logic comprises DMA logic. 25. The memory medium of claim 24, wherein the DMA logic comprises at least a portion of a DMA controller. 26. The memory medium of claim 22, wherein data are transferred between the first and second graphical programs via program instructions executed by the controller. 27. The memory medium of claim 26, wherein the data are transferred between the first and second graphical programs via program instructions executed by the controller using one or more of: programmed I/O; or interrupt-driven I/O. 28. The memory medium of claim 22, wherein said deploying the first graphical program to the programmable hardware element comprises: generating a hardware configuration program based on the first graphical program; and deploying the hardware configuration program on the programmable hardware element. 29. 
A system for communicating between programs executing respectively on a controller and a programmable hardware element, the system comprising: a first node representing a FIFO structure, wherein a first portion of the FIFO structure is operable to be implemented on the programmable hardware element, wherein a second portion of the FIFO structure is operable to be implemented in memory of the controller, wherein the first node is operable to be included in a first graphical program comprising a first plurality of interconnected nodes that visually indicate functionality of the first graphical program; and a second node operable to provide a controller interface to the FIFO structure, wherein the second node is operable to be included in a second graphical program comprising a second plurality of interconnected nodes that visually indicate functionality of the second graphical program; wherein the first graphical program, including the first node, is deployable to the programmable hardware element, wherein the second graphical program, including the second node, is deployable to the controller, and wherein the first and the second graphical program are executable to communicate via the FIFO structure to cooperatively perform a specified task. 30. The system of claim 29, wherein the first node is configurable to specify one or more of: depth of the FIFO structure; direction of the FIFO structure, comprising one of: controller memory to programmable hardware element; and programmable hardware element to controller memory; and data type of the FIFO structure. 31. The system of claim 30, wherein the depth of the FIFO structure comprises: a hardware depth, comprising a depth of the first portion of the FIFO structure; a memory depth, comprising a depth of the second portion of the FIFO structure; and wherein the depth comprises the sum of the hardware depth and the memory depth. 32. The system of claim 30, wherein the FIFO structure comprises a front, from which data may be read, and a rear, to which data may be written; and wherein, if the direction of the FIFO structure is configured to be memory to programmable hardware element, the first portion of the FIFO structure includes the front of the FIFO structure and the second portion of the FIFO structure includes the rear of the FIFO structure, and if the direction of the FIFO structure is configured to be programmable hardware element to memory, the first portion of the FIFO structure includes the rear of the FIFO structure and the second portion of the FIFO structure includes the front of the FIFO structure. 33. The system of claim 29, wherein the second node is configurable to specify a desired function of the FIFO structure, wherein the desired function of the FIFO structure comprises one or more of: read operations; write operations; start operations; stop operations; and configure operations. 34. The system of claim 33, further comprising: a computer system, comprising: a processor; and memory, coupled to the processor, wherein the memory stores program instructions; wherein, to specify a desired function of the FIFO structure, the program instructions are executable by the processor to: provide one or more selectable options for specifying the desired function of the FIFO structure; and receive input selecting one of the one or more selectable options to specify the desired function of the FIFO structure; wherein, after said selecting, the second node is executable to invoke the desired function of the FIFO structure. 35. 
The system of claim 34, further comprising: the controller, comprising: a processor; and the memory, coupled to the processor; and the programmable hardware element, coupled to the controller. 36. The system of claim 35, wherein the FIFO structure is a Direct Memory Access (DMA) FIFO, the system further comprising: a DMA controller comprised on or coupled to the programmable hardware element, wherein the DMA controller is operable to receive instructions from the first node and the second node and directly transfer data between the programmable hardware element and the memory of the controller in accordance with the received instructions. 37. The system of claim 36, wherein the DMA controller comprises: first DMA logic, coupled to or comprised on the programmable hardware element, wherein the first DMA logic implements DMA functionality; and second DMA logic, comprised on the programmable hardware element, wherein the second DMA logic implements structure functionality for the first DMA logic. 38. The system of claim 37, wherein the program instructions stored in the memory of the computer system are further executable by the processor of the computer system to: automatically generate the second DMA logic in response to inclusion of the first node in the first graphical program and in accordance with configuration information for the DMA structure, wherein the second DMA logic is deployable with the first graphical program to the programmable hardware element. 39. The system of claim 38, wherein the program instructions stored in the memory of the computer system are further executable by the processor of the computer system to: deploy the first graphical program and the second DMA logic onto the programmable hardware element; and deploy the second graphical program to the controller. 40. The system of claim 39, wherein the programmable hardware element is operable to execute the first graphical program and the second DMA logic; and wherein the controller is operable to execute the second graphical program concurrently with execution of the first graphical program on the programmable hardware element to cooperatively perform the specified task. 41. The system of claim 35, wherein the computer system comprises the controller. 42. The system of claim 29, wherein data are transferred between the first and second graphical programs via program instructions executed by the processor of the computer system. 43. The system of claim 42, wherein the data are transferred between the first and second graphical programs via program instructions executed by the processor of the computer system using one or more of: programmed I/O; or interrupt-driven I/O.
GRAPHICAL PROGRAMS WITH FIFO STRUCTURE FOR CONTROLLER/FPGA COMMUNICATIONS

BACKGROUND

Field of the Invention

[0001] The present invention relates to the field of graphical programming, and more particularly to a system and method for enabling a graphical program executing on a controller to communicate with a graphical program executing on a programmable hardware element, e.g., a field programmable gate array (FPGA).

Description of the Related Art

[0002] Traditionally, high level text-based programming languages have been used by programmers in writing application programs. Many different high level text-based programming languages exist, including BASIC, C, C++, Java, FORTRAN, Pascal, COBOL, ADA, APL, etc. Programs written in these high level text-based languages are translated to the machine language level by translators known as compilers or interpreters. The high level text-based programming languages in this level, as well as the assembly language level, are referred to herein as text-based programming environments.

[0003] Increasingly, computers are required to be used and programmed by those who are not highly trained in computer programming techniques. When traditional text-based programming environments are used, the user's programming skills and ability to interact with the computer system often become a limiting factor in the achievement of optimal utilization of the computer system.

[0004] There are numerous subtle complexities which a user must master before he can efficiently program a computer system in a text-based environment. The task of programming a computer system to model or implement a process often is further complicated by the fact that a sequence of mathematical formulas, steps or other procedures customarily used to conceptually model a process often does not closely correspond to the traditional text-based programming techniques used to program a computer system to model such a process. In other words, the requirement that a user program in a text-based programming environment places a level of abstraction between the user's conceptualization of the solution and the implementation of a method that accomplishes this solution in a computer program. Thus, a user often must substantially master different skills in order to both conceptualize a problem or process and then to program a computer to implement a solution to the problem or process. Since a user often is not fully proficient in techniques for programming a computer system in a text-based environment to implement his solution, the efficiency with which the computer system can be utilized often is reduced.

[0005] To overcome the above shortcomings, various graphical programming environments now exist which allow a user to construct a graphical program or graphical diagram, also referred to as a block diagram. U.S. Patent Nos. 4,901,221; 4,914,568; 5,291,587; 5,301,301; and 5,301,336, among others, to Kodosky et al. disclose a graphical programming environment which enables a user to easily and intuitively create a graphical program. Graphical programming environments such as that disclosed in Kodosky et al. can be considered a higher and more intuitive way in which to interact with a computer. A graphically based programming environment can be represented at a level above text-based high level programming languages such as C, Basic, Java, etc.
[0006] A user may assemble a graphical program by selecting various icons or nodes which represent desired functionality, and then connecting the nodes together to create the program. The nodes or icons may be connected by lines representing data flow between the nodes, control flow, or execution flow. Thus the block diagram may include a plurality of interconnected icons such that the diagram created graphically displays a procedure or method for accomplishing a certain result, such as manipulating one or more input variables and/or producing one or more output variables. In response to the user constructing a diagram or graphical program using the block diagram editor, data structures and/or program instructions may be automatically constructed which characterize an execution procedure that corresponds to the displayed procedure. The graphical program may be compiled or interpreted by a computer.

[0007] A graphical program may have a graphical user interface. For example, in creating a graphical program, a user may create a front panel or user interface panel. The front panel may include various graphical user interface elements or front panel objects, such as user interface controls and/or indicators, that represent or display the respective input and output that will be used by the graphical program, and may include other icons which represent devices being controlled.

[0008] Thus, graphical programming has become a powerful tool available to programmers. Graphical programming environments such as the National Instruments LabVIEW product have become very popular. Tools such as LabVIEW have greatly increased the productivity of programmers, and increasing numbers of programmers are using graphical programming environments to develop their software applications. In particular, graphical programming tools are being used for test and measurement, data acquisition, process control, man machine interface (MMI), supervisory control and data acquisition (SCADA) applications, modeling, simulation, image processing/machine vision applications, and motion control, among others.

[0009] In parallel with the development of the graphical programming model, programmable hardware elements have increasingly been included in devices, such as simulation, measurement, and control devices, where the programmable hardware element is configurable to perform a function, such as simulation or modeling of a device, a measurement and/or control function, or any other type of function. Typically, a software program, e.g., a text based program or a graphical program, such as may be developed in National Instruments Corporation's LabVIEW graphical development environment, is developed either manually or programmatically, and converted into a hardware configuration program, e.g., a netlist or bit file, which is then deployed onto the programmable hardware element, thereby configuring the programmable hardware element to perform the function. For example, the programmable hardware element may be a field programmable gate array (FPGA). Similarly, the program may be an FPGA VI, operable to be deployed to the FPGA.

[0010] In many applications, a task, such as a measurement task, may be performed conjunctively by programs executing respectively on a computer system and a programmable hardware element coupled to the computer system, and thus may require communication between the programs during performance of the task.
For example, LabVIEW FPGA is an add-on module for the LabVIEW development environment that allows LabVIEW users to run graphical programs on FPGA hardware. The FPGAs that LabVIEW can run on are computing nodes that are distinct from other computing nodes in the system, such as Windows or LabVIEW RT (LabVIEW "Real Time") nodes. One specific example of this is the NI PXI-7831R FPGA board, which is a PXI board that includes an FPGA that is targetable by LabVIEW FPGA. The PXI-7831R itself is typically installed in a PXI chassis with a controller (i.e., an embedded computer) that runs either Windows or LabVIEW RT. Therefore, there are two computing nodes in the system, the FPGA (that runs LabVIEW FPGA) and the controller (that runs LabVIEW or LabVIEW RT). These two nodes are distinct, and yet may need to work together and communicate with each other.

[0011] In prior art systems, such communication has generally been performed via either interrupts or register accesses. For example, interrupts may be used to allow an FPGA node to send an event to the controller node, which may then respond to the event and perform an action. Interrupts have the drawback of not being able to send data with the interrupt. Register accesses are often used to send data to and from the FPGA device. However, register accesses have the drawback of being slow, especially in the case of very large amounts of data. For example, if 1,000,000 samples are to be transferred between the FPGA and the controller, 1,000,000 individual register accesses must typically be performed.

[0012] Thus, improved systems and methods are desired for communicating between programs executing respectively on a computer system and a programmable hardware element.

SUMMARY OF THE INVENTION

[0013] One embodiment of the present invention comprises a system and method for communicating between programs executing respectively on a controller and a programmable hardware element (or alternatively, on respective programmable hardware elements).

[0014] A first node representing a first in, first out data structure (FIFO) may be included in a first graphical program in response to user input. In other words, the first node may comprise a graphical representation of the FIFO. The first graphical program may comprise a first plurality of interconnected nodes that visually indicate functionality of the first graphical program. The first graphical program is intended for deployment and execution on a programmable hardware element, e.g., on a reconfigurable device. The reconfigurable device may be coupled via a bus to a computer system (controller). Note that the bus may be any type of transmission medium desired, including for example a transmission cable, a local area network (LAN), a wide area network (WAN), e.g., the Internet, etc., including wired or wireless transmission means, as desired. For example, in preferred embodiments, at least a first portion of the FIFO is operable to be implemented on a programmable hardware element. For example, at least a first portion of the FIFO data storage elements may be operable to be implemented on a programmable hardware element, e.g., an FPGA, of the reconfigurable device.

[0015] In one embodiment, the FIFO may be implemented as a DMA FIFO, although it should be noted that this is but one of numerous ways to implement the FIFO. In this embodiment, the reconfigurable device may also include (e.g., be configured to include) a DMA controller, described in more detail below.
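The motivation in [0011], one register access per sample versus one DMA block transfer, can be made concrete with a short sketch. The device-access helper below is a hypothetical placeholder, not a real driver API, and the DMA engine is modeled with a single block copy.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define N 16 /* small demo size; the text's example is 1,000,000 samples */

static uint32_t device_reg;    /* stands in for a device data register */
static uint32_t device_buf[N]; /* stands in for device-side memory */

static uint32_t reg_read32(void) { return device_reg; } /* hypothetical PIO read */

/* Register-access transfer: one device access per sample, so moving
 * 1,000,000 samples costs 1,000,000 individual accesses. */
static void copy_via_registers(uint32_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = reg_read32();
}

/* DMA-style transfer: program the engine once; the whole block then
 * moves without per-sample CPU involvement (modeled with one memcpy). */
static void copy_via_dma(uint32_t *dst, size_t n)
{
    memcpy(dst, device_buf, n * sizeof *dst);
}

int main(void)
{
    uint32_t host[N];
    copy_via_registers(host, N);
    copy_via_dma(host, N);
    printf("transferred %d samples both ways\n", N);
    return 0;
}
```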
[0012] Thus, improved systems and methods are desired for communicating between programs executing respectively on a computer system and a programmable hardware element.
SUMMARY OF THE INVENTION
[0013] One embodiment of the present invention comprises a system and method for communicating between programs executing respectively on a controller and a programmable hardware element (or alternatively, on respective programmable hardware elements).
[0014] A first node representing a first in, first out data structure (FIFO) may be included in a first graphical program in response to user input. In other words, the first node may comprise a graphical representation of the FIFO. The first graphical program may comprise a first plurality of interconnected nodes that visually indicate functionality of the first graphical program. The first graphical program is intended for deployment and execution on a programmable hardware element, e.g., on a reconfigurable device. The reconfigurable device may be coupled via a bus to a computer system (controller). Note that the bus may be any type of transmission medium desired, including for example a transmission cable, a local area network (LAN), a wide area network (WAN), e.g., the Internet, etc., including wired or wireless transmission means, as desired. For example, in preferred embodiments, at least a first portion of the FIFO is operable to be implemented on a programmable hardware element. For example, at least a first portion of the FIFO data storage elements may be operable to be implemented on a programmable hardware element, e.g., an FPGA, of the reconfigurable device.
[0015] In one embodiment, the FIFO may be implemented as a DMA FIFO, although it should be noted that this is but one of numerous ways to implement the FIFO. In this embodiment, the reconfigurable device may also include (e.g., be configured to include) a DMA controller, described in more detail below. As described further below, in other embodiments various data transfer logic may be implemented on the FIFO to facilitate use of the FIFO by the first and second graphical programs. Alternatively, or in addition, a portion or all of the FIFO interface may be implemented in software.
[0016] In some embodiments, other techniques of data transfer may be used to interface to the FIFO. For example, in some embodiments, the reconfigurable device may not include data transfer logic, e.g., DMA logic, coupled to or included in the programmable hardware element, and such logic may not be used to transfer the data. There are two other common methods of doing device I/O other than DMA, known as programmed I/O and interrupt-driven I/O. Programmed I/O is completely under the control of the host processor (CPU) and the program that is running on it. The processor (CPU) may move data to and from the device by performing reads and writes to the device, e.g., via messages and/or registers. The processor may retrieve status information from the device (such as whether the data are ready) by also performing reads to the device, where reads and writes to the device may occur one after the other. Note that this is a relatively slow method of moving data. For example, in waiting for a block of data on the device, the device may have to be continuously polled to check the status until the data are ready, and then the processor must move the data point by point by reading the device to put the data in host memory.
[0017] Interrupt-driven I/O is similar to programmed I/O in that the processor or CPU still moves data to and from the device by reading and writing to the device. However, in this approach status information may be received from the device by having the device send interrupts to the processor. This can be much more efficient than programmed I/O. Using the same example as for programmed I/O, in waiting for a block of data on the device, the device does not have to be continuously polled to check the status until the data are ready; rather, a process would simply register to receive an interrupt from the device and put the process thread to sleep until the interrupt was received, with no polling required. Data is still moved point by point by the processor by reading the device to put the data into host memory.
[0018] Thus, in some embodiments, the programmable hardware element may not be configured to control the data transfers. For example, in some embodiments, the data transfers may be performed via the controller's processor, e.g., the processor of the computer system, or that of a different controller. In other words, instead of using DMA to transfer data to and from the FIFO, the controller's processor executes software instructions to perform the data transfers, referred to as programmed I/O.
[0019] Note that in some embodiments, the first node, e.g., the FIFO node, may be configurable to specify some attributes of the FIFO, e.g., may be configurable to specify one or more of: depth of the FIFO structure (described in more detail below), direction of the FIFO structure, i.e., controller memory to programmable hardware element, or programmable hardware element to controller memory, and the data type of the FIFO structure, among others. The FIFO structure node may also be operable to provide status information for the FIFO structure, such as whether the FIFO (or the portion implemented on the programmable hardware element) is full, and so forth.
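By way of illustration of the attributes listed in paragraph [0019], the following is a minimal C sketch of the kind of configuration and status records such a node might maintain; all type and field names are hypothetical, not part of any actual product interface.

    #include <stdbool.h>
    #include <stddef.h>

    enum fifo_direction {
        FIFO_HOST_TO_TARGET,          /* controller memory to programmable hardware element */
        FIFO_TARGET_TO_HOST           /* programmable hardware element to controller memory */
    };

    enum fifo_data_type { FIFO_U8, FIFO_U16, FIFO_U32, FIFO_I32, FIFO_F32 };

    struct fifo_config {
        size_t              depth;    /* number of FIFO storage elements */
        enum fifo_direction direction;
        enum fifo_data_type data_type;
    };

    /* Status the node may report, e.g., whether the portion of the FIFO
     * implemented on the programmable hardware element is full. */
    struct fifo_status {
        bool   full;
        bool   empty;
        size_t elements_used;
    };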
[0020] A second node may be included in a second graphical program in response to second user input, where the second node is operable to provide a controller interface to the FIFO structure. Like the first graphical program, the second graphical program may comprise a second plurality of interconnected nodes that visually indicate functionality of the second graphical program. The second graphical program is intended for deployment and execution on a controller, such as the computer system (or another computer system) or another controller. The second graphical program (i.e., block diagram) may include a loop structure or other graphical program construct(s) as desired, and the second node, which may be referred to as a FIFO manager node, may be contained therein.
[0021] In some embodiments, a second portion of the FIFO structure is operable to be implemented in memory of the controller, e.g., the computer system. For example, a second portion of the FIFO, e.g., a second portion of the FIFO's data storage elements, may be operable to be implemented in the memory of the controller (or computer system or another computer system). Thus, the FIFO structure may be comprised on both the programmable hardware element and the controller, and thus may comprise a distributed FIFO.
[0022] In some embodiments, the second node, e.g., the FIFO manager node, may be configurable to specify a desired function of the FIFO structure. For example, the second node may be operable to receive input specifying FIFO read operations, FIFO write operations, FIFO start operations, FIFO stop operations, and FIFO configure operations, among other FIFO methods or functionality. For example, in one embodiment, to specify a desired function of the FIFO structure, one or more selectable options for specifying the desired function of the FIFO structure may be provided, and input, e.g., user input, may be received selecting one of the one or more selectable options to specify the desired function of the FIFO structure, after which, the second node may be executable to invoke or perform the desired function of the FIFO structure.
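As a rough illustration of the operation set described in paragraph [0022] (read, write, start, stop, configure), the following C declarations sketch one plausible controller-side interface. The names are hypothetical; the fifo_config record is the one sketched after paragraph [0019] above.

    #include <stddef.h>

    struct fifo;                      /* opaque handle to the distributed FIFO */
    struct fifo_config;               /* as sketched after paragraph [0019] */

    /* The desired function of the FIFO structure, selected via the node. */
    enum fifo_op { FIFO_OP_READ, FIFO_OP_WRITE, FIFO_OP_START,
                   FIFO_OP_STOP, FIFO_OP_CONFIGURE };

    int fifo_read     (struct fifo *f, void *dst, size_t n, int timeout_ms);
    int fifo_write    (struct fifo *f, const void *src, size_t n, int timeout_ms);
    int fifo_start    (struct fifo *f);
    int fifo_stop     (struct fifo *f);
    int fifo_configure(struct fifo *f, const struct fifo_config *cfg);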
[0023] In various embodiments, the selectable options may be provided by program code, e.g., program instructions, stored in the memory of the computer system, e.g., comprised in the development environment in which the graphical program is being written and/or by the second node or program code associated with the second node. For example, in preferred embodiments, e.g., where the second node functions as a user interface node (i.e., is capable of displaying information and/or receiving input), the node may include both edit time and runtime program code, where the edit time code implements functionality that operates at edit time and the runtime code operates at runtime; the edit time code of the node may execute to provide the options. In preferred embodiments, such edit time code of the second node may operate in conjunction with other program code, e.g., program code comprised in the development environment, e.g., the graphical program editor, to manage the presentation and selection of the options.
[0024] In some embodiments, various attributes or fields of the FIFO structure may be displayed by the node, e.g., "FIFO Read", "Number of Elements", "Timeout", "Data", and "Elements Remaining", although other fields or attributes may be used as desired. Note that provision of the selectable options may be invoked in any of a variety of ways. For example, in one embodiment, the user may click (e.g., left-click, right-click, double click, etc., of a mouse or other pointing device) on the node to invoke display of the options, e.g., in a drop-down display of the node. The user may then select one of the options to specify the desired functionality of the FIFO structure, e.g., by clicking on the desired option. Of course, any other means for providing, displaying, and/or selecting the selectable options are also contemplated, the above being but an exemplary manner of doing so.
[0025] Once the selection has been made, i.e., once the node/FIFO structure has been configured to provide the desired functionality, the second node may represent the specified functionality of the FIFO structure in the second graphical program. For example, if FIFO read functionality were selected, the second node may then function as a FIFO read node in the second graphical program. In one embodiment, the appearance of the second node may be automatically modified to reflect or indicate the specified functionality, e.g., the node's icon, color, shape, or label, may be modified in accordance with the selected option.
[0026] In some embodiments, to provide the one or more selectable options for specifying the desired function of the FIFO structure, program code, e.g., comprised in the development environment and/or the second node, and/or associated with the second node, may be operable to determine the FIFO structure's configuration, and only provide or present options that are in accordance with the FIFO structure's configuration. In other words, the options provided by or for the second node may be based on the FIFO structure's configuration. For example, in one embodiment, the development environment (e.g., editor), the second node, and/or program code associated with the second node, may access and analyze configuration information included in, or associated with, the FIFO structure node, i.e., the first node, described above. Based on this configuration information, only those options that are consonant with the configuration information, i.e., with the configured capabilities of the FIFO structure, may be presented.
[0027] In some embodiments, determining the FIFO structure's configuration may include accessing edit time source code of the first node, and/or a compiled bit file generated from the source code of the first node. For example, in one embodiment, the editor (of the development environment) may access the first graphical program source code, e.g., via a project that includes the source code for both the first and second graphical programs. As another example, the editor (or node or associated code) may access the compiled bit file generated from the source code of the first node, and thus this access may be performed after compilation.
[0028] In some embodiments, at least one of the one or more selectable options may specify a first function that requires one or more corollary functions. For example, in one embodiment, FIFO read functionality may always require prior performance of a FIFO start function, for example, or a validate state function; thus, a selected option specifying FIFO read operations may automatically specify inclusion of the FIFO start or validate functionality in the graphical program, along with the FIFO read functionality, this being but one simple example. In preferred embodiments, this automatic inclusion of corollary functionality based upon selected FIFO function options is transparent to the user. For example, in some embodiments, the graphical program may not contain any visible graphical program elements specifically indicating or representing the corollary functionality. Thus, if the second node is configured to invoke the first function, the second node may be executable to automatically invoke the one or more corollary functions in addition to the first function. Alternatively, in other embodiments, in response to the selection of the option, one or more additional graphical program elements, e.g., nodes, indicating or representing the corollary functionality associated with the selected option may automatically be included and displayed in the graphical program.
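To make the corollary-function behavior of paragraph [0028] concrete, here is a minimal C sketch in which a read transparently ensures the required start has occurred first; fifo_is_started() and the other helpers are hypothetical stand-ins, not an actual API.

    #include <stdbool.h>
    #include <stddef.h>

    struct fifo;
    bool fifo_is_started(struct fifo *f);
    int  fifo_start(struct fifo *f);
    int  fifo_read(struct fifo *f, void *dst, size_t n, int timeout_ms);

    /* A read configured on the node implicitly invokes its corollary
     * function (start) first; nothing in the block diagram represents
     * this step, keeping it transparent to the user. */
    int fifo_read_with_corollary(struct fifo *f, void *dst,
                                 size_t n, int timeout_ms)
    {
        if (!fifo_is_started(f)) {
            int rc = fifo_start(f);
            if (rc != 0)
                return rc;
        }
        return fifo_read(f, dst, n, timeout_ms);
    }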
[0029] It should be noted that the first graphical program, including the first node, is preferably deployable to the programmable hardware element, while the second graphical program, including the second node, is preferably deployable to the controller, or computer system, where the first and second graphical programs are executable to communicate via the FIFO structure to cooperatively perform a specified task.
[0030] In various embodiments, the FIFO structure may be implemented in any of a variety of ways. For example, in some embodiments, the FIFO structure may require data transfer logic for transferring data between portions of the FIFO structure. In different embodiments, the data transfer logic may be implemented in software and/or hardware, and may be comprised in one or both of the controller and the reconfigurable device.
[0031] For example, as noted above, in one embodiment, the FIFO structure may be implemented as a Direct Memory Access (DMA) FIFO, where DMA is used to transfer data between the two portions of the FIFO. As is well known in the art of memory access and management, a DMA controller is generally used to facilitate direct access to memory in place of a processor. Thus, in embodiments of the present system where the FIFO structure is implemented as a DMA FIFO, the reconfigurable device may require a DMA controller, i.e., DMA logic, e.g., either coupled to and/or implemented on the programmable hardware element. For example, in one embodiment, the DMA controller may be included on the same circuit board as the programmable hardware element, and may be communicatively coupled thereto to facilitate direct memory access by the DMA FIFO, e.g., by the programmable hardware element, of the portion of the DMA FIFO comprised in the memory of the controller (or computer system). However, in some embodiments, the DMA controller may not inherently support or provide FIFO functionality, and so custom logic may need to be generated, as described below.
[0032] Thus, in embodiments where data transfer logic, e.g., a memory controller, is required to transfer data between portions of the FIFO, at least a portion of this data transfer logic may be automatically generated in response to including the first node in the first graphical program, and may be generated in accordance with configuration information for the FIFO. For example, in embodiments where the FIFO is a DMA FIFO, at least a portion of the DMA controller, i.e., additional DMA logic, may be automatically generated in response to including the first node in the first graphical program, and may be generated in accordance with configuration information for the DMA FIFO. The at least a portion of DMA logic may be deployable to the programmable hardware element to implement FIFO functionality for the DMA controller, e.g., to implement the DMA FIFO functionality.
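For illustration of the kind of data transfer logic paragraphs [0031]-[0032] describe, the following C sketch shows a generic host-side DMA descriptor under assumed conventions; the field names and dma_submit() are hypothetical and merely stand in for whatever logic is actually generated from the FIFO's configuration.

    #include <stdint.h>

    /* A generic descriptor a DMA controller might consume when moving a
     * block of FIFO data directly between the programmable hardware
     * element and controller memory, without per-sample CPU involvement. */
    struct dma_descriptor {
        uint64_t host_phys_addr;      /* FIFO storage in controller memory */
        uint32_t byte_count;          /* size of the block to transfer */
        uint8_t  to_host;             /* 1: device to controller memory */
    };

    /* Submit the descriptor; the processor is then free to do other work
     * until the DMA controller signals completion. */
    int dma_submit(uintptr_t dma_base, const struct dma_descriptor *d);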
[0033] The first graphical program, and optionally the at least a portion of data transfer logic, e.g., of DMA logic, may be deployed to the programmable hardware element. The second graphical program may be deployed to the controller (or computer system). Note that deploying the second graphical program to the computer system may simply mean compiling the program for execution by the processor, placing the program in a particular directory, or otherwise making sure that the second graphical program is properly executable by the computer system, since in preferred embodiments, the second graphical program is developed on the computer system, and thus may already be present.
[0034] In some embodiments where the FIFO structure is a DMA FIFO, the first DMA controller portion may be coupled to the programmable hardware element, but may not actually be implemented on the programmable hardware element. In other embodiments, the first DMA controller portion may be deployed to and comprised on the programmable hardware element. Note that in various other DMA FIFO embodiments, the DMA controller may be comprised entirely on the programmable hardware element, or, alternatively, may not be comprised on the programmable hardware element at all, i.e., may simply be coupled to the programmable hardware element.
[0035] Thus, in some embodiments, the system may include the computer system, where the computer system includes a processor and memory, the programmable hardware element, coupled to the computer system, and data transfer logic, in the form of a DMA controller comprised on and/or coupled to the programmable hardware element. In one embodiment, the DMA controller may include first DMA logic, coupled to or comprised on the programmable hardware element, where the first DMA logic implements DMA functionality, and second DMA logic, comprised on the programmable hardware element, where the second DMA logic implements FIFO functionality for the first DMA logic. Once the first and second graphical programs (and possibly some or all of the DMA controller logic) have been deployed, the DMA controller may be operable to receive instructions from the first node and the second node and directly transfer data between the programmable hardware element and the memory of the computer system in accordance with the received instructions.
[0036] Finally, the first graphical program may be executed on the programmable hardware element, and the second graphical program may be executed on the controller concurrently with the execution of the first graphical program to cooperatively perform the specified task. Note that in embodiments where the FIFO structure is implemented as a DMA FIFO, the FIFO (possibly in conjunction with the DMA controller) preferably facilitates direct memory access of the controller memory, specifically, FIFO storage elements comprised in the memory of the controller, by the first graphical program, during execution. As noted above, in other embodiments, the controller transfers the data between the controller and the reconfigurable device, and so no special data transfer logic, e.g., such as a DMA controller, is needed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] A better understanding of the present invention can be obtained when the following detailed description of the preferred embodiment is considered in conjunction with the following drawings, in which:
[0038] Figure 1A illustrates a computer system that may be suitable for implementing an embodiment of the present invention;
[0039] Figure 1B illustrates a network system comprising the computer system of Figure 1A and a device suitable for implementing some embodiments of the present invention;
[0040] Figure 2A illustrates an instrumentation control system according to one embodiment of the invention;
[0041] Figure 2B illustrates an industrial automation system according to one embodiment of the invention;
[0042] Figure 3A is a high-level block diagram of an exemplary system that may execute or utilize graphical programs;
[0043] Figure 3B illustrates an exemplary system that may perform control and/or simulation functions utilizing graphical programs;
[0044] Figure 4 is an exemplary block diagram of the computer systems of Figures 1A, 1B, 2A, 2B, and 3B;
[0045] Figure 5 is a flowchart diagram illustrating one embodiment of a method for enabling a graphical program executing on a controller to communicate with a graphical program executing on a programmable hardware element;
[0046] Figures 6A and 6B illustrate exemplary graphical programs implementing a FIFO structure, respectively executable on a programmable hardware element and a controller, according to one embodiment;
[0047] Figures 7A and 7B illustrate systems implementing various embodiments of the present invention;
[0048] Figures 8A and 8B illustrate simplified block diagrams of a FIFO structure, according to one embodiment; and
[0049] Figure 9 illustrates an embodiment of a FIFO implemented on two reconfigurable devices including respective programmable hardware elements, according to one embodiment.
[0050] While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
DETAILED DESCRIPTION
Terms
[0051] The following is a glossary of terms used in the present application:
[0052] Memory Medium - Any of various types of memory devices or storage devices. The term "memory medium" is intended to include an installation medium, e.g., a CD-ROM, floppy disks 104, or tape device; a computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; or a non-volatile memory such as a magnetic media, e.g., a hard drive, or optical storage. The memory medium may comprise other types of memory as well, or combinations thereof. In addition, the memory medium may be located in a first computer in which the programs are executed, or may be located in a second different computer which connects to the first computer over a network, such as the Internet. In the latter instance, the second computer may provide program instructions to the first computer for execution.
The term "memory medium" may include two or more memory mediums which may reside in different locations, e.g., in different computers that are connected over a network.[0053] Carrier Medium - a memory medium as described above, as well as signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a bus, network and/or a wireless link. [0054] Programmable Hardware Element - includes various types of programmable hardware, reconfigurable hardware, programmable logic, or field-programmable devices (FPDs), such as one or more FPGAs (Field Programmable Gate Arrays), or one or more PLDs (Programmable Logic Devices), such as one or more Simple PLDs (SPLDs) or one or more Complex PLDs (CPLDs), or other types of programmable hardware. A programmable hardware element may also be referred to as "reconfigurable logic". [0055] Medium - includes one or more of a memory medium, carrier medium, and/or programmable hardware element; encompasses various types of mediums that can either store program instructions / data structures or can be configured with a hardware configuration program. For example, a medium that is "configured to perform a function or implement a software object" may be 1) a memory medium or carrier medium that stores program instructions, such that the program instructions are executable by a processor to perform the function or implement the software object; 2) a medium carrying signals that are involved with performing the function or implementing the software object; and/or 3) a programmable hardware element configured with a hardware configuration program to perform the function or implement the software object.[0056] Program - the term "program" is intended to have the full breadth of its ordinary meaning. The term "program" includes 1 ) a software program which may be stored in a memory and is executable by a processor or 2) a hardware configuration program useable for configuring a programmable hardware element.[0057] Software Program - the term "software program" is intended to have the full breadth of its ordinary meaning, and includes any type of program instructions, code, script and/or data, or combinations thereof, that may be stored in a memory medium and executed by a processor. Exemplary software programs include programs written in text-based programming languages, such as C, C++, Pascal, Fortran, Cobol, Java, assembly language, etc.; graphical programs (programs written in graphical programming languages); assembly language programs; programs that have been compiled to machine language; scripts; and other types of executable software. A software program may comprise two or more software programs that interoperate in some manner.[0058] Hardware Configuration Program - a program, e.g., a netlist or bit file, that can be used to program or configure a programmable hardware element. [0059] Graphical Program - A program comprising a plurality of interconnected nodes or icons, wherein the plurality of interconnected nodes or icons visually indicate functionality of the program.[0060] The following provides examples of various aspects of graphical programs. The following examples and discussion are not intended to limit the above definition of graphical program, but rather provide examples of what the term "graphical program" encompasses: [0061] The nodes in a graphical program may be connected in one or more of a data flow, control flow, and/or execution flow format. 
The nodes may also be connected in a "signal flow" format, which is a subset of data flow.
[0062] Exemplary graphical program development environments which may be used to create graphical programs include LabVIEW, DasyLab, DiaDem and Matrixx/SystemBuild from National Instruments, Simulink from the MathWorks, VEE from Agilent, WiT from Coreco, Vision Program Manager from PPT Vision, SoftWIRE from Measurement Computing, Sanscript from Northwoods Software, Khoros from Khoral Research, SnapMaster from HEM Data, VisSim from Visual Solutions, ObjectBench by SES (Scientific and Engineering Software), and VisiDAQ from Advantech, among others.
[0063] The term "graphical program" includes models or block diagrams created in graphical modeling environments, wherein the model or block diagram comprises interconnected nodes or icons that visually indicate operation of the model or block diagram; exemplary graphical modeling environments include Simulink, SystemBuild, VisSim, Hypersignal Block Diagram, etc.
[0064] A graphical program may be represented in the memory of the computer system as data structures and/or program instructions. The graphical program, e.g., these data structures and/or program instructions, may be compiled or interpreted to produce machine language that accomplishes the desired method or process as shown in the graphical program.
[0065] Input data to a graphical program may be received from any of various sources, such as from a device, unit under test, a process being measured or controlled, another computer program, a database, or from a file. Also, a user may input data to a graphical program or virtual instrument using a graphical user interface, e.g., a front panel.
[0066] A graphical program may optionally have a GUI associated with the graphical program. In this case, the plurality of interconnected nodes are often referred to as the block diagram portion of the graphical program.
[0067] Node - In the context of a graphical program, an element that may be included in a graphical program. A node may have an associated icon that represents the node in the graphical program, as well as underlying code or data that implements functionality of the node. Exemplary nodes include function nodes, terminal nodes, structure nodes, etc. Nodes may be connected together in a graphical program by connection icons or wires.
[0068] Data Flow Graphical Program (or Data Flow Diagram) - A graphical program or diagram comprising a plurality of interconnected nodes, wherein the connections between the nodes indicate that data produced by one node is used by another node.
[0069] Graphical User Interface - this term is intended to have the full breadth of its ordinary meaning. The term "Graphical User Interface" is often abbreviated to "GUI". A GUI may comprise only one or more input GUI elements, only one or more output GUI elements, or both input and output GUI elements.
[0070] The following provides examples of various aspects of GUIs. The following examples and discussion are not intended to limit the ordinary meaning of GUI, but rather provide examples of what the term "graphical user interface" encompasses:
[0071] A GUI may comprise a single window having one or more GUI Elements, or may comprise a plurality of individual GUI Elements (or individual windows each having one or more GUI Elements), wherein the individual GUI Elements or windows may optionally be tiled together.
[0072] A GUI may be associated with a graphical program.
In this instance, various mechanisms may be used to connect GUI Elements in the GUI with nodes in the graphical program. For example, when Input Controls and Output Indicators are created in the GUI, corresponding nodes (e.g., terminals) may be automatically created in the graphical program or block diagram. Alternatively, the user can place terminal nodes in the block diagram which may cause the display of corresponding GUI Elements (front panel objects) in the GUI, either at edit time or later at run time. As another example, the GUI may comprise GUI Elements embedded in the block diagram portion of the graphical program.
[0073] Front Panel - A Graphical User Interface that includes input controls and output indicators, and which enables a user to interactively control or manipulate the input being provided to a program, and view output of the program, while the program is executing.
[0074] A front panel is a type of GUI. A front panel may be associated with a graphical program as described above.
[0075] In an instrumentation application, the front panel can be analogized to the front panel of an instrument. In an industrial automation application the front panel can be analogized to the MMI (Man Machine Interface) of a device. The user may adjust the controls on the front panel to affect the input and view the output on the respective indicators.
[0076] Graphical User Interface Element - an element of a graphical user interface, such as for providing input or displaying output. Exemplary graphical user interface elements comprise input controls and output indicators.
[0077] Input Control - a graphical user interface element for providing user input to a program. Exemplary input controls comprise dials, knobs, sliders, input text boxes, etc.
[0078] Output Indicator - a graphical user interface element for displaying output from a program. Exemplary output indicators include charts, graphs, gauges, output text boxes, numeric displays, etc. An output indicator is sometimes referred to as an "output control".
[0079] Computer System - any of various types of computing or processing systems, including a personal computer system (PC), mainframe computer system, workstation, network appliance, Internet appliance, personal digital assistant (PDA), television system, grid computing system, or other device or combinations of devices. In general, the term "computer system" can be broadly defined to encompass any device (or combination of devices) having at least one processor that executes instructions from a memory medium, including, for example, a controller or embedded computer.
[0080] Measurement Device - includes instruments, data acquisition devices, smart sensors, and any of various types of devices that are operable to acquire and/or store data. A measurement device may also optionally be further operable to analyze or process the acquired or stored data. Examples of a measurement device include an instrument, such as a traditional stand-alone "box" instrument, a computer-based instrument (instrument on a card) or external instrument, a data acquisition card, a device external to a computer that operates similarly to a data acquisition card, a smart sensor, one or more DAQ or measurement cards or modules in a chassis, an image acquisition device, such as an image acquisition (or machine vision) card (also called a video capture board) or smart camera, a motion control device, a robot having machine vision, and other similar types of devices.
Exemplary "stand-alone" instruments include oscilloscopes, multimeters, signal analyzers, arbitrary waveform generators, spectroscopes, and similar measurement, test, or automation instruments.[0081] A measurement device may be further operable to perform control functions, e.g., in response to analysis of the acquired or stored data. For example, the measurement device may send a control signal to an external system, such as a motion control system or to a sensor, in response to particular data. A measurement device may also be operable to perform automation functions, i.e., may receive and analyze data, and issue automation control signals in response, [0082] Controller - generally refers to a computer system as defined above, and in some embodiments specifically refers to an embedded computer. An embedded computer may be considered a computer system without its own user interface / display capability. Figure IA - Computer System[0083] Figure IA illustrates a computer system 82 that may be suitable for implementing various embodiments of the present invention. More specifically, the computer system 82 may be operable to store, deploy, and/or execute graphical programs according to embodiments of the invention, and may also be useable to create such graphical programs. As shown in Figure IA, the computer system 82 may include a display device operable to display the graphical program as the graphical program is created and/or executed. The display device may also be operable to display a graphical user interface or front panel of the graphical program during execution of the graphical program. The graphical user interface may comprise any type of graphical user interface, e.g., depending on the computing platform. [0084] The computer system 82 may include a memory medium(s) on which one or more computer programs or software components according to one embodiment of the present invention may be stored. For example, the memory medium may store graphical programs implementing a FIFO structure for communicating between a programmable hardware element and a controller, as well as one or more graphical programs that are executable to perform embodiments of the methods described herein. Also, the memory medium may store a graphical programming development environment application used to create and/or execute such graphical programs. The memory medium may also store operating system software, as well as other software for operation of the computer system. Various embodiments further include receiving or storing instructions and/or data implemented in accordance with the foregoing description upon a computer accessible physical storage medium. [0085] It should be noted that the computer system 82, i.e., executing a development environment, may function as a development platform for creating the various graphical programs described herein, and may optionally also serve as a controller, executing various of the graphical programs in a cooperative manner with or more additional devices coupled to the computer, e.g., a reconfigurable device that includes a programmable hardware element. In some embodiments, graphical programs developed on the computer system 82 may be deployed to other devices for execution. 
For example, a first graphical program may be deployed to a controller (or other computer system) for execution by a processor, and a second graphical program may be deployed to a reconfigurable device coupled to the controller, where the device includes a programmable hardware element, e.g., an FPGA, and where the controller and the reconfigurable device execute their respective graphical programs to cooperatively perform a specified task, e.g., a measurement task.
[0086] It should be noted that while the computer system 82 may operate as a controller, in some applications, a controller may not include such standard computer peripherals as a display or a hard drive.
Figure 1B - Computer Network
[0087] Figure 1B illustrates a system including a first computer system 82 that is coupled to a second system or device 90, e.g., via a network 84 (or a computer bus). The computer system 82 and the system 90 may each be any of various types, as desired. The network 84 can also be any of various types, including a LAN (local area network), WAN (wide area network), the Internet, or an Intranet, among others. The computer system 82 and the device 90 may execute one or more graphical programs in a distributed fashion. For example, computer 82 may execute a first graphical program and device 90 may execute a second graphical program, wherein the first and second graphical programs share a FIFO structure used for communication or data transfer between the first and second graphical programs.
[0088] In preferred embodiments, described below in detail, a first graphical program may be executed on the computer system 82 (and/or computer system 90) or on a controller, and a second graphical program may be deployed to and executed on a reconfigurable device, e.g., device 90, wherein the device 90 includes a programmable hardware element, e.g., an FPGA, that is coupled to the computer system or controller.
Exemplary Systems
[0100] Embodiments of the present invention may be involved with performing test and/or measurement functions; controlling and/or modeling instrumentation or industrial automation hardware; modeling and simulation functions, e.g., modeling or simulating a device or product being developed or tested, etc. Exemplary test applications where the graphical program may be used include hardware-in-the-loop testing and rapid control prototyping, among others.
[0101] However, it is noted that the present invention can be used for a plethora of applications and is not limited to the above applications. In other words, applications discussed in the present description are exemplary only, and the present invention may be used in any of various types of systems. Thus, the system and method of the present invention are operable to be used in any of various types of applications, including the control of other types of devices such as multimedia devices, video devices, audio devices, telephony devices, Internet devices, etc., as well as general purpose software applications such as word processing, spreadsheets, network control, network monitoring, financial applications, games, etc.
[0102] Figure 2A illustrates an exemplary instrumentation control system 100 which may implement embodiments of the invention. The system 100 comprises a host computer 82 that connects to one or more instruments. The host computer 82 may comprise a CPU, a display screen, memory, and one or more input devices such as a mouse or keyboard as shown.
The computer 82 may operate with the one or more instruments to analyze, measure or control a unit under test (UUT) or process 150. One or more of the instruments may include a programmable hardware element which may be configured with a graphical program. As discussed below, a first graphical program executing on the computer 82 may interact with a second graphical program executing on the programmable hardware element of the instrument using a FIFO structure.
[0103] The one or more instruments may include a GPIB instrument 112 and associated GPIB interface card 122, a data acquisition board 114 and associated signal conditioning circuitry 124, a VXI instrument 116, a PXI instrument 118, a video device or camera 132 and associated image acquisition (or machine vision) card 134, a motion control device 136 and associated motion control interface card 138, and/or one or more computer based instrument cards 142, among other types of devices, where, for example, at least one of the instruments includes a programmable hardware element, e.g., an FPGA, as described below in more detail. The computer system may couple to and operate with one or more of these instruments. The instruments may be coupled to a unit under test (UUT) or process 150, or may be coupled to receive field signals, typically generated by transducers. The system 100 may be used in a data acquisition and control application, in a test and measurement application, an image processing or machine vision application, a process control application, a man-machine interface application, a simulation application, or a hardware-in-the-loop validation application, among others.
[0104] Figure 2B illustrates an exemplary industrial automation system 160 that may implement embodiments of the invention. The industrial automation system 160 is similar to the instrumentation or test and measurement system 100 shown in Figure 2A. Elements which are similar or identical to elements in Figure 2A have the same reference numerals for convenience. The system 160 may comprise a computer 82 which connects to one or more devices or instruments, where, for example, at least one of the devices or instruments includes a programmable hardware element, e.g., an FPGA, as described below in more detail. The computer 82 may comprise a CPU, a display screen, memory, and one or more input devices such as a mouse or keyboard as shown. The computer 82 may operate with the one or more devices to control a process or device 150 to perform an automation function, such as MMI (Man Machine Interface), SCADA (Supervisory Control and Data Acquisition), portable or distributed data acquisition, process control, advanced analysis, or other control, among others.
[0105] The one or more devices may include a data acquisition board 114 and associated signal conditioning circuitry 124, a PXI instrument 118, a video device 132 and associated image acquisition card 134, a motion control device 136 and associated motion control interface card 138, a fieldbus device 170 and associated fieldbus interface card 172, a PLC (Programmable Logic Controller) 176, a serial instrument 182 and associated serial interface card 184, or a distributed data acquisition system, such as the Fieldpoint system available from National Instruments, among other types of devices.
[0106] Figure 3A is a high-level block diagram of an exemplary system that may execute or utilize graphical programs.
Figure 3A illustrates a general high-level block diagram of a generic control and/or simulation system that comprises a controller 92 and a plant 94. The controller 92 represents a control system/algorithm the user may be trying to develop. The plant 94 represents the system the user may be trying to control. For example, if the user is designing an ECU for a car, the controller 92 is the ECU and the plant 94 is the car's engine (and possibly other components such as transmission, brakes, and so on). As shown, a user may create a graphical program that specifies or implements the functionality of one or both of the controller 92 and the plant 94. For example, a control engineer may use a modeling and simulation tool to create a model (graphical program) of the plant 94 and/or to create the algorithm (graphical program) for the controller 92. In some embodiments, the controller 92 may also be coupled to a reconfigurable device that includes a programmable hardware element, as described in detail below.
[0107] Figure 3B illustrates an exemplary system that may perform control and/or simulation functions. As shown, the controller 92 may be implemented by a computer system 82 or other device (e.g., including a processor and memory medium and/or including a programmable hardware element) that executes or implements a graphical program. In a similar manner, the plant 94 may be implemented by a computer system or other device 144 (e.g., including a processor and memory medium and/or including a programmable hardware element) that executes or implements a graphical program, or may be implemented in or as a real physical system, e.g., a car engine.
[0108] In one embodiment of the invention, one or more graphical programs may be created which are used in performing rapid control prototyping. Rapid Control Prototyping (RCP) generally refers to the process by which a user develops a control algorithm and quickly executes that algorithm on a target controller connected to a real system. The user may develop the control algorithm using a graphical program, and the graphical program may execute on the controller 92, e.g., on a computer system or other device. The computer system 82 may be a platform that supports real time execution, e.g., a device including a processor that executes a real time operating system (RTOS), or a device including a programmable hardware element.
[0109] In one embodiment of the invention, one or more graphical programs may be created which are used in performing Hardware in the Loop (HIL) simulation. Hardware in the Loop (HIL) refers to the execution of the plant model 94 in real time to test operation of a real controller 92. For example, once the controller 92 has been designed, it may be expensive and complicated to actually test the controller 92 thoroughly in a real plant, e.g., a real car. Thus, the plant model (implemented by a graphical program) is executed in real time to make the real controller 92 "believe" or operate as if it is connected to a real plant, e.g., a real engine.
[0110] In the embodiments of Figures 2A, 2B, and 3B above, one or more of the various devices may couple to each other over a network, such as the Internet. In one embodiment, the user operates to select a target device from a plurality of possible target devices for programming or configuration using a graphical program.
Thus the user may create a graphical program on a computer and use (execute) the graphical program on that computer or deploy the graphical program to a target device (for remote execution on the target device) that is remotely located from the computer and coupled to the computer through a network. In preferred embodiments, described below, the user may create two (or more) graphical computer programs, one of which may be deployed to a reconfigurable device, and another that may execute on the computer or be deployed for execution on a controller. Note that, as used herein, the terms "computer system" and "controller" may both be used to refer to the execution platform for the first graphical program, where the execution platform is coupled to the reconfigurable device for cooperative execution of the two programs. As noted earlier, the computer system 82 (or another computer system) may be used to develop the graphical programs described herein.
[0111] Graphical software programs which perform data acquisition, analysis and/or presentation, e.g., for measurement, instrumentation control, industrial automation, modeling, or simulation, such as in the applications shown in Figures 2A and 2B, may be referred to as virtual instruments.
Figure 4 - Computer System Block Diagram
[0112] Figure 4 is a block diagram representing one embodiment of the computer system 82 and/or 90 illustrated in Figures 1A and 1B, or computer system 82 shown in Figures 2A or 2B. It is noted that any type of computer system configuration or architecture can be used as desired, and Figure 4 illustrates a representative PC embodiment. It is also noted that the computer system may be a general-purpose computer system, a computer implemented on a card installed in a chassis, or other types of embodiments. Elements of a computer not necessary to understand the present description have been omitted for simplicity. As noted above, the computer system may serve as a development platform, and/or a controller, as desired.
[0113] The computer may include at least one central processing unit or CPU (processor) 160 which is coupled to a processor or host bus 162. The CPU 160 may be any of various types, including an x86 processor, e.g., a Pentium class, a PowerPC processor, a CPU from the SPARC family of RISC processors, as well as others. A memory medium, typically comprising RAM and referred to as main memory, 166 is coupled to the host bus 162 by means of memory controller 164. The main memory 166 may store graphical programs that implement embodiments of the present invention. The main memory may also store operating system software, as well as other software for operation of the computer system.
[0114] The host bus 162 may be coupled to an expansion or input/output bus 170 by means of a bus controller 168 or bus bridge logic. The expansion bus 170 may be the PCI (Peripheral Component Interconnect) expansion bus, although other bus types can be used. The expansion bus 170 includes slots for various devices such as described above. The computer 82 further comprises a video display subsystem 180 and hard drive 182 coupled to the expansion bus 170. In some embodiments, the computer 82 may also include or be coupled to other buses and devices, such as, for example, GPIB card 122 with GPIB bus 112, an MXI device 186 and VXI chassis 116, etc., as desired.
[0115] As shown, a device 190 may also be connected to the computer. The device 190 preferably includes a programmable hardware element.
The device 190 may also or instead comprise a processor and memory that may execute a real time operating system. The computer system may be operable to deploy a graphical program to the device 190 for execution of the graphical program on the device 190. The deployed graphical program may take the form of graphical program instructions or data structures that directly represent the graphical program.
[0116] Exemplary embodiments of the invention are described below with reference to Figures 7A and 7B, where computer system 82 is used as a controller, although it should be noted that in other embodiments, the controller may be separate and distinct from the computer system 82.
Figure 5 - Flowchart of Method for Communicating Between Graphical Programs Executing Respectively on a Computer System and a Programmable Hardware Element
[0117] Figure 5 illustrates a method for communicating between programs executing respectively on a controller and a programmable hardware element, according to one embodiment. The method shown in Figure 5 may be used in conjunction with any of the computer systems or devices shown in the above Figures, among other devices. In various embodiments, some of the method elements shown may be performed concurrently, in a different order than shown, or may be omitted. Additional method elements may also be performed as desired. In the exemplary embodiment shown in Figure 5, the method may operate as follows.
[0118] First, in 502, a first node representing a FIFO structure (or simply "FIFO") may be included in a first graphical program in response to user input. In some embodiments, the first node may comprise a graphical representation of the FIFO. The first graphical program may comprise a first plurality of interconnected nodes that visually indicate functionality of the first graphical program. One simplified example of the first graphical program according to one exemplary embodiment is illustrated in Figure 6A.
[0119] As may be seen, in the embodiment of Figure 6A, the graphical program (i.e., block diagram) includes a loop structure 603, and the first node 602, which may be referred to as a FIFO node 602, has been included in the graphical program inside the loop structure 603, although it should be noted that in other embodiments, the node may be included otherwise, and the first graphical program may include various other graphical program elements as desired. Note that the graphical program of Figure 6A also includes a stop node, so labeled, whereby the program execution may be stopped.
[0120] The first graphical program is intended for deployment and execution on a programmable hardware element, e.g., on reconfigurable device 720, shown in Figures 7A and 7B (or device 190, shown in Figure 4, among other devices). As Figures 7A and 7B indicate, the reconfigurable device 720 (i.e., 720A or 720B) is coupled via bus 710 to computer system 82. Note that the bus 710 may be any type of transmission medium desired, including for example a transmission cable, a local area network (LAN), a wide area network (WAN), e.g., the Internet, etc., including wired or wireless transmission means, as desired. The embodiments of Figures 7A and 7B represent the system after deployment of the various components, e.g., the first graphical program 602, of the present invention to their respective execution platforms.
[0121] For example, in preferred embodiments, at least a first portion of the FIFO structure 710A is operable to be implemented on a programmable hardware element.
For example, as indicated in Figures 7A and 7B, at least a first portion of the FIFO data storage elements may be operable to be implemented on a programmable hardware element 716A, e.g., an FPGA, of the reconfigurable device 720. As also indicated in Figure 7A, in some embodiments, the FIFO structure may be implemented as a direct memory access (DMA) FIFO structure. In these embodiments, the reconfigurable device 720A may include a DMA controller 712, described in more detail below, although it should be noted that this is but one of numerous possible implementations of the FIFO structure contemplated. Note that as used herein, similar components distinguished from one another by use of label suffixes such as "A" and "B", e.g., reconfigurable devices 720A and 720B, may be referred to collectively or generically by the numeric label alone, e.g., reconfigurable device(s) 720.
[0122] In other embodiments, other techniques of data transfer may be used to interface to the FIFO. For example, in the embodiment shown in Figure 7B, the reconfigurable device 720B does not include DMA logic coupled to or included in the programmable hardware element 716B, and DMA logic is not used to transfer the data.
[0123] There are two other common methods of doing device I/O other than DMA, known as programmed I/O and interrupt-driven I/O. Programmed I/O is completely under the control of the host processor (CPU) and the program that is running on it. The processor (CPU) may move data to and from the device by performing reads and writes to the device, e.g., via messages and/or registers. The processor may retrieve status information from the device (such as whether the data are ready) by also performing reads to the device, where reads and writes to the device may occur one after the other. Note that this is a relatively slow method of moving data. For example, in waiting for a block of data on the device, the device may have to be continuously polled to check the status until the data are ready, and then the processor must move the data point by point by reading the device to put the data in host memory.
[0124] Interrupt-driven I/O is similar to programmed I/O in that the processor or CPU still moves data to and from the device by reading and writing to the device. However, in this approach status information may be received from the device by having the device send interrupts to the processor. This can be much more efficient than programmed I/O. Using the same example as for programmed I/O, in waiting for a block of data on the device, the device does not have to be continuously polled to check the status until the data are ready; rather, a process would simply register to receive an interrupt from the device and put the process thread to sleep until the interrupt was received, with no polling required. Data is still moved point by point by the processor by reading the device to put the data into host memory.
[0125] Thus, in some embodiments, the programmable hardware element may not be configured to control the data transfers. For example, in the embodiment of Figure 7B, the data transfers are performed via the controller's processor, e.g., processor 160 of the computer system 82, or that of a different controller. In other words, instead of using DMA to transfer data to and from the FIFO, the controller's processor executes software instructions to perform the data transfers, referred to as programmed I/O.
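For illustration only, here is a minimal C sketch of the interrupt-driven alternative described in paragraphs [0123]-[0124]. The wait_for_interrupt() and mmio_read32() helpers are hypothetical stand-ins for a driver's actual interrupt and register APIs; note that the data are still moved point by point by the processor, as the text states.

    #include <stdint.h>
    #include <stddef.h>

    #define DEV_DATA_REG 0x04u        /* hypothetical data register */

    extern uint32_t mmio_read32(uintptr_t dev_base, uint32_t reg_offset);
    extern void     wait_for_interrupt(uintptr_t dev_base);  /* sleeps the calling thread */

    /* No polling: the thread sleeps until the device raises an interrupt
     * indicating the block is ready, then the CPU copies it point by point. */
    void read_samples_interrupt_driven(uintptr_t dev_base,
                                       uint32_t *dst, size_t n)
    {
        wait_for_interrupt(dev_base); /* block until the data are ready */
        for (size_t i = 0; i < n; i++)
            dst[i] = mmio_read32(dev_base, DEV_DATA_REG);
    }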
[0126] Note that in some embodiments, the first node, e.g., the FIFO structure node 602, may be configurable to specify some attributes of the FIFO structure, e.g., may be configurable to specify one or more of: depth of the FIFO structure (described in more detail below), direction of the FIFO structure, i.e., controller memory to programmable hardware element, or programmable hardware element to controller memory, and the data type of the FIFO structure, among others. The FIFO structure node 602 may also be operable to provide status information for the FIFO structure, such as whether the FIFO (or the portion implemented on the programmable hardware element) is full, and so forth.
[0127] In 504, a second node may be included in a second graphical program in response to second user input, where the second node is operable to provide a controller interface to the FIFO structure. Like the first, the second graphical program may comprise a second plurality of interconnected nodes that visually indicate functionality of the second graphical program. The second graphical program is intended for deployment and execution on a controller, such as computer system 82 (or another computer system) or another controller. A simplified example of the second graphical program according to one exemplary embodiment is illustrated in Figure 6B.
[0128] As may be seen, in this embodiment, the second graphical program (i.e., block diagram) includes a loop structure 605, and the second node, which may be referred to as a FIFO manager node, is contained therein. As with the graphical program of Figure 6A, a stop node is provided for terminating execution of the program. Additionally, as shown, data from the FIFO manager node is provided to a waveform graph node for graphical display of the data. At the far left of the block diagram (outside the loop structure) is an FPGA target node, labeled "FPGA Target", that operates to open a communication session between the second graphical program and the programmable hardware element 716.
[0129] In preferred embodiments, a second portion of the FIFO structure is operable to be implemented in memory of a controller 722, e.g., computer system 82. For example, as illustrated in Figures 7A and 7B, a second portion of the FIFO 710B, e.g., a second portion of the FIFO's data storage elements, may be operable to be implemented in the memory of the controller 722 (or computer system 82 or another computer system). Thus, the FIFO structure 710 may be comprised on both the programmable hardware element 716 (716A or 716B) and the controller 722, and thus may comprise a distributed FIFO.
[0130] Note that in some ways, the first and second nodes are functionally equivalent, except that the second node (on the controller side) can read and write multiple points from the structure, e.g., FIFO, at a time. At a high level, both nodes operate to read and write data from the structure. However, at a deeper level, the first node (on the hardware element side) is responsible for instantiating the hardware part of the structure, and in some embodiments (e.g., see Figure 7A), for creating data transfer logic, e.g., custom DMA logic, while the first node interacts with the hardware to signal when data is ready, or is ready to receive more data.
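As a rough illustration of the controller-side, multi-element access noted in paragraph [0130], the following C declaration sketches a read whose parameters mirror the Figure 6B fields ("Number of Elements", "Timeout", "Data", "Elements Remaining"). All names are hypothetical, and a uint32_t element type is assumed for concreteness; the actual type would follow the FIFO's configured data type.

    #include <stdint.h>
    #include <stddef.h>

    struct fifo;                      /* opaque handle to the distributed FIFO */

    /* Read n_elements from the FIFO into data[], waiting at most
     * timeout_ms. On return, *elements_remaining reports how many
     * elements are still buffered, mirroring the node's output terminal. */
    int fifo_manager_read(struct fifo *f,
                          size_t    n_elements,
                          int       timeout_ms,
                          uint32_t *data,
                          size_t   *elements_remaining);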
[0131] In alternate embodiments, the structure used for such communication may be completely implemented in only the controller 722, or the structure may be completely implemented only in the programmable hardware element. However, it should be noted that in these cases, data transfer logic, e.g., some or all of the DMA logic, may not be needed since the structure is not distributed over the two devices. As noted above, in some embodiments (e.g., see Figure 7B), the data transfer may be performed by the processor of the controller. [0132] In further embodiments, the first and second nodes may be capable of the same functionality. For example, each of the first and second nodes may represent the FIFO structure, and each node may also be capable of providing an interface to the FIFO structure, e.g., for configuring the FIFO structure. Each node may only utilize the functionality required by the specific use, e.g., may be context sensitive, such that the appropriate functionality may be provided automatically, e.g., in response to the configuration, deployment, etc. In some embodiments, the two nodes may have the same appearance, while in other embodiments, the appearances may differ, e.g., based on the configuration, use, context, etc.[0133] As noted above, the embodiments shown in Figures 7A and 7B illustrate the system after deployment of various components of the present invention to their respective execution platforms. For example, the second graphical program 704 is shown deployed to the controller 722 (which in some embodiments may be computer system 82).[0134] In preferred embodiments, the second node, e.g., the FIFO manager node 604, may be configurable to specify a desired function of the FIFO structure. For example, the second node may be operable to receive input specifying FIFO read operations, FIFO write operations, FIFO start operations, FIFO stop operations, and FIFO configure operations, among other FIFO methods or functionality. For example, in one embodiment, to specify a desired function of the FIFO structure, one or more selectable options for specifying the desired function of the FIFO structure may be provided, and input, e.g., user input, may be received selecting one of the one or more selectable options to specify the desired function of the FIFO structure, after which the second node may be executable to invoke or perform the desired function of the FIFO structure. [0135] In various embodiments, the selectable options may be provided by program code, e.g., program instructions, stored in the memory of the computer system 82, e.g., comprised in the development environment in which the graphical program is being written, and/or by the second node or program code associated with the second node. For example, in preferred embodiments, e.g., where the second node functions as a user interface node (i.e., is capable of displaying information and/or receiving input), the node may include both edit time and runtime program code, where the edit time code implements functionality that may operate at edit time, and where the runtime code operates at runtime; the edit time code of the node may execute to provide the options. In preferred embodiments, such edit time code of the second node may operate in conjunction with other program code, e.g., program code comprised in the development environment, e.g., the graphical program editor, to manage the presentation and selection of the options.
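The operation set exposed by the FIFO manager node in [0134] may be summarized as a simple dispatch over the selected function. The enumeration and function names in this C sketch are illustrative assumptions only; the placeholder bodies would be replaced by calls into an actual FIFO runtime.

#include <stdio.h>

/* Hypothetical operation set of a controller-side FIFO manager node. */
typedef enum {
    FIFO_READ, FIFO_WRITE, FIFO_START, FIFO_STOP, FIFO_CONFIGURE
} FifoOp;

/* Invoke the operation that was selected when the node was configured. */
static void fifo_invoke(FifoOp op)
{
    switch (op) {
    case FIFO_READ:      puts("read elements (with timeout)");         break;
    case FIFO_WRITE:     puts("write elements");                       break;
    case FIFO_START:     puts("start the FIFO");                       break;
    case FIFO_STOP:      puts("stop the FIFO");                        break;
    case FIFO_CONFIGURE: puts("configure depth, direction, and type"); break;
    }
}

int main(void)
{
    fifo_invoke(FIFO_START);   /* a prerequisite operation, invoked first */
    fifo_invoke(FIFO_READ);    /* the function selected at the node */
    return 0;
}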
[0136] In the example of Figure 6B, various attributes or fields of the FIFO structure are displayed by the node, e.g., "FIFO Read", "Number of Elements", "Timeout", "Data", and "Elements Remaining", although other fields or attributes may be used as desired. Note that provision of the selectable options may be invoked in any of a variety of ways. For example, in one embodiment, the user may click (e.g., left-click, right-click, double click, etc., of a mouse or other pointing device) on the node to invoke display of the options, e.g., in a drop-down display of the node. The user may then select one of the options to specify the desired functionality of the FIFO structure, e.g., by clicking on the desired option. Of course, any other means for providing, displaying, and/or selecting the selectable options are also contemplated, the above being but an exemplary manner of doing so.[0137] Once the selection has been made, i.e., once the node/FIFO structure has been configured to provide the desired functionality, the second node may represent the specified functionality of the FIFO structure in the second graphical program. For example, if FIFO read functionality were selected, the second node may then function as a FIFO read node in the second graphical program. In one embodiment, the appearance of the second node may be automatically modified to reflect or indicate the specified functionality, e.g., the node's icon, color, shape, or label may be modified in accordance with the selected option. [0138] In some embodiments, to provide the one or more selectable options for specifying the desired function of the FIFO structure, program code, e.g., comprised in the development environment and/or the second node, and/or associated with the second node, may be operable to determine the FIFO structure's configuration, and only provide or present options that are in accordance with the FIFO structure's configuration. In other words, the options provided by or for the second node may be based on the FIFO structure's configuration. For example, in one embodiment, the development environment (e.g., editor), the second node, and/or program code associated with the second node, may access and analyze configuration information included in, or associated with, the FIFO structure node, i.e., the first node, described above. Based on this configuration information, only those options that are consonant with the configuration information, i.e., with the configured capabilities of the FIFO structure, may be presented. [0139] In some embodiments, determining the FIFO structure's configuration may include accessing edit time source code of the first node, and/or a compiled bit file generated from the source code of the first node. For example, in one embodiment, the editor (of the development environment) may access the first graphical program source code, e.g., via a project that includes the source code for both the first and second graphical programs. As another example, the editor (or node or associated code) may access the compiled bit file generated from the source code of the first node, and thus this access may be performed after compilation.[0140] In some embodiments, at least one of the one or more selectable options may specify a first function that requires one or more corollary functions.
For example, in one embodiment, FIFO read functionality may always require prior performance of a FIFO start function, for example, or a validate state function; thus, a selected option specifying FIFO read operations may automatically specify inclusion of the FIFO start or validate functionality in the graphical program, along with the FIFO read functionality, this being but one simple example. In preferred embodiments, this automatic inclusion of corollary functionality based upon selected FIFO function options is transparent to the user. For example, in some embodiments, the graphical program may not contain any visible graphical program elements specifically indicating or representing the corollary functionality. Thus, if the second node is configured to invoke the first function, the second node may be executable to automatically invoke the one or more corollary functions in addition to the first function.[0141] Alternatively, in other embodiments, in response to the selection of the option, the one or more additional graphical program elements, e.g., nodes, indicating or representing the corollary functionality associated with the selected option may automatically be included and displayed in the graphical program. [0142] It should be noted that the first graphical program, including the first node, is preferably deployable to the programmable hardware element, while the second graphical program, including the second node, is preferably deployable to the controller 722, or computer system 82, where the first and the second graphical program are executable to communicate via the FIFO structure to cooperatively perform a specified task. [0143] The first and second graphical programs may be created on the computer system 82, or on a different computer system. For each of the graphical programs, the graphical program may be created or assembled by the user arranging on a display a plurality of nodes or icons and then interconnecting the nodes to create the graphical program. In response to the user assembling the graphical program, data structures may be created and stored which represent the graphical program. The nodes may be interconnected in one or more of a data flow, control flow, or execution flow format. The graphical program may thus comprise a plurality of interconnected nodes or icons that visually indicates the functionality of the program. As noted above, the graphical program may comprise a block diagram and may also include a user interface portion or front panel portion. Where the graphical program includes a user interface portion, the user may optionally assemble the user interface on the display. As one example, the user may use the LabVIEW graphical programming development environment to create the graphical program. [0144] In an alternate embodiment, at least one of the graphical programs may be created by the user creating or specifying a prototype, followed by automatic or programmatic creation of the graphical program from the prototype. This functionality is described in U.S. Patent Application Serial No. 09/587,682 titled "System and Method for Automatically Generating a Graphical Program to Perform an Image Processing Algorithm", which is hereby incorporated by reference in its entirety as though fully and completely set forth herein. The graphical program may be created in other manners, either by the user or programmatically, as desired. [0145] As noted above, in various embodiments, the FIFO structure may be implemented in any of a variety of ways.
For example, in some embodiments, the FIFO structure may utilize data transfer logic for transferring data between portions of the FIFO structure. In different embodiments, the data transfer logic may be implemented in software, and/or hardware, and may be comprised in one or both of the controller and the reconfigurable device. [0146] For example, in one embodiment, the FIFO structure may be implemented as a Direct Memory Access (DMA) FIFO, where DMA is used to transfer data between the two portions of the FIFO. As is well known in the art of memory access and management, a DMA controller is generally used to facilitate direct access to memory. Thus, in embodiments of the present system where the FIFO structure is a DMA FIFO (see, e.g., Figure 7A), the reconfigurable device 720A may require data transfer logic in the form of a DMA controller, i.e., DMA logic, e.g., which may be either coupled to or implemented on the programmable hardware element 716A. For example, in one embodiment, the DMA controller may be included on the same circuit board as the programmable hardware element, and may be communicatively coupled thereto to facilitate direct memory access by the DMA FIFO, e.g., by the programmable hardware element, of the portion of the DMA FIFO comprised in the memory of the controller (or computer system 82). However, in some embodiments, the DMA controller may not inherently support or provide FIFO functionality, and so custom logic may need to be generated, as described below. [0147] As indicated in 506, in embodiments where data transfer logic, e.g., a memory controller, is required to transfer data between portions of the FIFO, at least a portion of this data transfer logic may be automatically generated in response to including the first node in the first graphical program, and may be generated in accordance with configuration information for the FIFO. For example, in embodiments where the FIFO is a DMA FIFO, at least a portion of the DMA controller, i.e., additional DMA logic, may be automatically generated in response to including the first node in the first graphical program, and may be generated in accordance with configuration information for the DMA FIFO. The at least a portion of DMA logic may be deployable to the programmable hardware element to implement FIFO functionality for the DMA controller, e.g., to implement the DMA FIFO functionality. [0148] In 508, the first graphical program, and optionally the at least a portion of data transfer logic, e.g., of DMA logic, may be deployed to the programmable hardware element. For further information regarding deployment of a graphical program to a programmable hardware element, please see U.S. Patent Application Serial No. 08/912,427 titled "System and Method for Converting Graphical Programs Into Hardware Implementations" filed on August 18, 1997, which was incorporated by reference above. The second graphical program may be deployed to the controller (or computer system 82). Note that deploying the second graphical program to the computer system 82 may simply mean compiling the program for execution by the processor, placing the program in a particular directory, or otherwise making sure that the second graphical program is properly executable by the computer system 82, since in preferred embodiments, the second graphical program is developed on the computer system 82, and thus may already be present. 
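One way to picture the two-layer split described in [0146] and [0147], namely raw DMA logic that moves blocks plus generated logic that layers FIFO semantics on top, is the sizing decision for the next transfer. The C sketch below is a simplified model under that assumption; the structure and field names are hypothetical and not part of any actual generated logic.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical occupancy view that generated FIFO logic might maintain. */
typedef struct {
    size_t available;   /* points waiting in the source portion */
    size_t space;       /* free elements in the destination portion */
    size_t max_burst;   /* largest block the DMA engine moves at once */
} FifoState;

/* FIFO logic: size the next DMA request so that it neither overruns the
 * destination portion nor reads past the data available at the source. */
static size_t next_dma_count(const FifoState *s)
{
    size_t n = s->available < s->space ? s->available : s->space;
    return n < s->max_burst ? n : s->max_burst;
}

int main(void)
{
    FifoState s = { .available = 300, .space = 128, .max_burst = 64 };
    printf("next DMA transfer: %zu elements\n", next_dma_count(&s)); /* 64 */
    return 0;
}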
[0149] Referring again to Figure 7A, an exemplary system is shown after the deployments of 508, where in this embodiment, the FIFO structure is a DMA FIFO, and data transfer logic in the form of a DMA controller is included in and/or coupled to the programmable hardware element. In this embodiment, the DMA controller 712 is shown comprising first and second portions, 712A and 712B. As indicated in Figure 7A, in some embodiments, the first DMA controller portion 712A may be coupled to the programmable hardware element 716A, but may not actually be implemented on the programmable hardware element. This aspect is illustrated in Figure 7A by situating the first DMA controller portion 712A outside the drawn solid boundaries of the programmable hardware element 716A. In another embodiment, also represented in Figure 7A, the first DMA controller portion 712A may be deployed to and comprised on the programmable hardware element 716A. This aspect is illustrated in Figure 7A by enclosing the first DMA controller portion 712A within the dashed line boundary of the programmable hardware element. [0150] The second DMA controller portion 712B is shown comprised on the programmable hardware element 716A. In other words, in the embodiment shown in Figure 7A, the second DMA controller portion 712B has been deployed for execution on the programmable hardware element 716A.[0151] Note that in various other embodiments, the DMA controller 712 may be comprised entirely on the programmable hardware element 716A, or, alternatively, may not be comprised on the programmable hardware element 716A at all, i.e., may simply be coupled to the programmable hardware element.[0152] Thus, in some embodiments, the system may include the computer system 82, where the computer system includes a processor and memory, the programmable hardware element 716A, coupled to the computer system, and data transfer logic, in the form of a DMA controller comprised on and/or coupled to the programmable hardware element. In one embodiment, the DMA controller may include first DMA logic, coupled to or comprised on the programmable hardware element, where the first DMA logic implements DMA functionality, and second DMA logic, comprised on the programmable hardware element, where the second DMA logic implements FIFO functionality for the first DMA logic. Once the first and second graphical programs (and possibly some or all of the DMA controller logic) have been deployed, the DMA controller may be operable to receive instructions from the first node and the second node and directly transfer data between the programmable hardware element and the memory of the computer system in accordance with the received instructions.[0153] In 510, the first graphical program may be executed on the programmable hardware element, and the second graphical program may be executed on the controller concurrently with the execution of the first graphical program to cooperatively perform the specified task. During the execution, the first and the second graphical programs may communicate via the FIFO to cooperatively perform the specified task. Note that in embodiments where the FIFO structure is implemented as a DMA FIFO, the FIFO (possibly in conjunction with the DMA controller) preferably facilitates direct memory access of the controller memory, specifically, FIFO storage elements comprised in the memory of the controller, by the first graphical program, during execution.
In other embodiments, the FIFO may rely on the processor of the controller to manage the data transfers, e.g., via messages, registers, and/or interrupts. Figures 8A and 8B - FIFO structure[0154] Figures 8A and 8B are high-level block diagrams of a FIFO structure, according to one embodiment of the invention. Note that the FIFO structures shown in Figures 8A and 8B are intended to be exemplary only, and are not intended to limit the form or function of the FIFO structure to any particular implementation. [0155] As Figure 8A shows, and as described above, the first portion of the FIFO structure 712A may be comprised on the programmable hardware element 716, while the second portion of the FIFO structure 712B may be comprised in the memory 822 of the controller 722 (or of computer system 82), where the programmable hardware element 716 and the memory 822 of the controller 722 are coupled via transmission medium 710. [0156] As noted above, the FIFO structure has various attributes that determine at least part of the physical implementation and operation of the FIFO structure, including, for example, depth, direction, and data type of the FIFO structure, each configurable by one or more of the first and second nodes described above. [0157] In one embodiment, the depth of the FIFO structure may include a hardware depth 802, comprising a depth (number of storage elements) of the first portion of the FIFO structure, and a memory depth 804, comprising a depth (number of storage elements) of the second portion of the FIFO structure, where the depth comprises the sum of the hardware depth and the memory depth. The memory depth 804 may have a default configuration of twice the hardware depth 802, although any other values may be used as desired.[0158] Note that in preferred embodiments, the hardware depth of the FIFO structure may be configurable at compile time, while the memory depth of the FIFO structure may be configurable at run time. One reason for this asymmetry is that the program code implementing the first portion of the FIFO structure, i.e., that portion deployed to the programmable hardware element, must be compiled and otherwise processed to generate a hardware configuration program that is then deployed to the programmable hardware element, and thus the hardware depth must be specified and configured at or before compile time. In contrast, the second portion of the FIFO structure, i.e., that portion deployed to the controller memory, is implemented in memory, e.g., in random access memory (RAM), which is suitable for dynamic configuration, and so the memory depth may be configured at run time. [0159] As is well known in the art of data structures, the FIFO structure preferably includes a front, from which data may be read, and a rear, to which data may be written. Because the FIFO structure is intended to facilitate communications between the programmable hardware element (e.g., the first graphical program implemented thereon) and the controller (e.g., the second graphical program implemented thereon), the front of the FIFO structure may be comprised on one of the devices, while the rear of the FIFO structure may be comprised on the other.
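The compile time/run time asymmetry just described for the FIFO's depth ([0157] and [0158]) may be made concrete in a short C sketch. The constant, the default factor of two, and the reconfigured value are illustrative assumptions only.

#include <stdio.h>

/* The hardware depth is fixed when the hardware configuration program is
 * compiled, so it is modeled here as a compile-time constant. */
#define HW_DEPTH 1024u

int main(void)
{
    /* Default memory depth: twice the hardware depth, per the text above. */
    unsigned mem_depth = 2u * HW_DEPTH;
    printf("default total depth: %u\n", HW_DEPTH + mem_depth);

    /* Unlike HW_DEPTH, mem_depth resides in controller RAM and may be
     * changed while the program runs, without recompiling anything. */
    mem_depth = 4096u;
    printf("reconfigured total depth: %u\n", HW_DEPTH + mem_depth);
    return 0;
}

With the depth attributes in place, the remaining attribute, the direction, determines where the front and the rear of the FIFO reside, as the following text describes.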
The specific placement of the front and rear depends upon the direction of the FIFO, which is determined by the direction of the communication between the devices.[0160] Note that the direction dependence of the placement of the front and rear of the FIFO structure may be at least in part due to the nature of data transfer logic (e.g., hardware or software) that may be used (in some embodiments), e.g., in embodiments where, for example, the DMA controller that actually performs the data transfers between the two portions of the DMA FIFO operates in a "greedy" manner. More specifically, the DMA controller (and DMA FIFO) may operate in such a way as to maximize the locality of the data to be retrieved, i.e., placing the front of the DMA FIFO from which data are retrieved on the device where the retrieved data will be used. One benefit of this is that if the bus 710 becomes inoperable for any reason, the user of the data (i.e., the first or second graphical program) may continue to retrieve data for a time, i.e., whatever data are stored in the local portion of the FIFO may be retrieved, even though no data are being transmitted across the bus 710. Similarly, when the bus 710 is inoperable, the entity inserting data into the FIFO may continue to do so, since the rear of the FIFO is located on the same device as that entity.[0161] Figure 8B illustrates this aspect of the FIFO, according to one embodiment. As indicated in Figure 8B, if the direction of the FIFO is configured to be memory to hardware, i.e., controller memory to programmable hardware element, the first (hardware) portion of the FIFO includes the front of the FIFO and the second portion of the FIFO includes the rear of the FIFO, as indicated by FIFO 800A. Alternatively, if the direction of the FIFO is configured to be hardware to memory, i.e., programmable hardware element to controller memory, the first portion of the FIFO includes the rear of the FIFO and the second portion of the FIFO includes the front of the FIFO, as indicated by FIFO 800B.[0162] Thus, if communication from the controller to the programmable hardware element is desired, the FIFO may be configured with the controller memory to programmable hardware element direction (800A). In this case, the controller (e.g., the second graphical program) may insert data at the rear of the FIFO (which is preferably comprised in controller memory), and the programmable hardware element (e.g., the first graphical program) may retrieve that data at the front of the FIFO (which is preferably comprised on the programmable hardware element). [0163] Conversely, if communication from the programmable hardware element to the controller is desired, the FIFO may be configured with the programmable hardware element to controller memory direction (800B). In this case, the programmable hardware element (e.g., the first graphical program) may insert data at the rear of the FIFO (which is preferably comprised on the programmable hardware element), and the controller (e.g., the second graphical program) may retrieve that data at the front of the FIFO (which is preferably comprised in controller memory).Figure 9 - FIFO Distributed Among Multiple Programmable Hardware Elements[0164] In some embodiments, the FIFO may be used for communication among reconfigurable devices (e.g., that each include respective programmable hardware elements), instead of between a reconfigurable device and a controller. Figure 9 illustrates such an alternative embodiment.
As shown, reconfigurable device 720 (a first reconfigurable device), described above with reference to Figure 7, may be coupled to another reconfigurable device 721 (a second reconfigurable device) instead of controller 722. As described above with reference to Figure 7, in some embodiments, the first reconfigurable device 720 includes programmable hardware element 716, configured with data transfer logic, such as DMA controller 712 (optionally as first and second portions 712A and 712B), a first graphical program 704, and a first portion of FIFO 710A.[0165] The second reconfigurable device 721 shown is substantially similar to reconfigurable device 720, where similar but possibly variant elements are labeled with a "prime" indicator. For example, in the embodiment shown, the reconfigurable device 721 includes programmable hardware element 716', whereupon are configured respective data transfer logic, such as DMA controller 712' (optionally as first and second portions 712A' and 712B'), a second graphical program 704', and a second portion of the FIFO 710B'.[0166] Thus, the second reconfigurable device replaces the controller (722) in Figure 7. Instead of data moving between a programmable hardware element and a controller, data moves between two programmable hardware elements without the need for a controller. Note that in the embodiment of Figure 9, the FIFO is still distributed, but now both portions are implemented in reconfigurable devices 720 and 721, instead of one portion being implemented in a reconfigurable device and one being implemented in the memory of the controller 722. [0167] Note also that since both portions of the FIFO are implemented in reconfigurable devices, the depths of both portions of the FIFO must be set at compile time, in contrast to the implementation of Figure 7, in which the controller memory portion of the FIFO may be specified at runtime. [0168] Note further that this embodiment still facilitates communication between two different graphical programs. However, in this embodiment, both graphical programs are preferably comprised of nodes suitable for implementation on programmable hardware elements, such as those used in the graphical program shown in Figure 6A, since these nodes are representative of programming constructs that run on reconfigurable hardware. Thus, nodes such as those used in the program of Figure 6B should not be used, since these nodes are representative of programming constructs that execute on controllers.[0169] It should be noted that while in the embodiment shown in Figure 9, each of the reconfigurable devices includes DMA logic (712 and 712'), in other embodiments, one of the reconfigurable devices may not include data transfer logic, e.g., DMA controller 712 or 712'. In other words, in some embodiments, the data transfer logic of one of the reconfigurable devices may operate to transfer data with respect to both reconfigurable devices. [0170] Thus, in some embodiments, the FIFO may be implemented and used for communication between two graphical programs running on two different reconfigurable hardware elements.[0171] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
A contact element may be formed on the basis of a hard mask (233), which may be patterned on the basis of a first resist mask (210) and on the basis of a second resist mask (211), so as to define an appropriate intersection area (234) which may represent the final design dimensions of the contact element. Consequently, each of the resist masks may be formed on the basis of a photolithography process with less restrictive constraints since at least one of the lateral dimensions may be selected as a non-critical dimension in each of the two resist masks.
CLAIMS 1. A method comprising: forming a first resist mask above a hardmask layer formed on a material layer of a semiconductor device; forming a first opening in said hardmask layer on the basis of said first resist mask, said first opening having a first dimension along a first lateral direction and a second dimension along a second lateral direction that is different from said first lateral direction, said first dimension being less than said second dimension; forming a second resist mask above said hardmask layer, said second resist mask having a second opening that defines an intersection area with said first opening; and forming a contact opening in said material layer on the basis of said intersection area. 2. The method of claim 1, wherein said second opening has at least one lateral dimension that is greater than said first lateral dimension. 3. The method of claim 2, wherein each lateral dimension of said second opening is greater than said first lateral dimension of said first opening. 4. The method of claim 1, wherein forming said first opening in said hardmask layer comprises performing a selective etch process to remove material of said hardmask layer selectively to said material layer. 5. The method of claim 4, wherein forming said contact opening on the basis of said intersection area comprises performing a second selective etch process to remove material of said material layer and using said second resist mask and said hardmask layer as etch masks. 6. The method of claim 5, wherein forming said contact opening further comprises performing said second etch process and using a contact etch stop layer to control said second etch process and performing a third etch process to open said contact etch stop layer and using at least one of said material layer, said hardmask layer and said second resist mask as etch masks in said third etch process. 7. The method of claim 1, wherein forming said first opening comprises forming a first portion of said first opening so as to extend through a first sub-layer of said hardmask layer and forming a second portion of said first opening on the basis of said intersection area so as to extend through at least a second sub-layer of said hardmask layer. 8. The method of claim 7, wherein said second portion extends through a third sub-layer of said hardmask layer. 9. The method of claim 8, wherein at least two of said first, second and third sub-layers are comprised of different material compositions. 10. The method of claim 1, wherein said contact opening connects to a contact region of a transistor element formed in and above a semiconductor layer. 11. The method of claim 1, wherein said contact opening extends to a metal region formed in a metallization layer of said semiconductor device. 12. The method of claim 1, further comprising filling said contact opening with a metal containing material and removing excess material of said metal containing material and residues of said hardmask layer in a common removal process. 13.
A method comprising: forming a hardmask layer above an interlayer dielectric material of a semiconductor device; forming an opening in said hardmask layer using a first resist mask, said opening having a rectangular portion; forming a mask opening in said rectangular portion using a second resist mask, said mask opening extending through said hardmask layer; and forming a contact opening in said interlayer dielectric material by using said mask opening, said contact opening extending through said interlayer dielectric material. 14. The method of claim 13, wherein said opening is formed so as to extend through said hardmask layer and said mask opening is formed by an intersection area formed by said portion and said second resist mask. 15. The method of claim 13, wherein said opening extends to a first sub-layer of said hardmask layer. 16. The method of claim 13, wherein a smaller one of lateral dimensions of said rectangular portion corresponds to a critical dimension associated with said contact opening. 17. The method of claim 16, wherein said mask opening has a substantially rectangular top surface. 18. The method of claim 16, wherein said second resist mask is formed so as to have lateral dimensions that are greater than said critical dimension. 19. The method of claim 13, further comprising filling said contact opening with a metal containing material and removing said hardmask layer and excess material of said metal containing material in a common removal process. 20. A semiconductor device comprising: a plurality of circuit elements formed in and above a semiconductor layer; a contact region connecting to at least one of said plurality of circuit elements; an interlayer dielectric material enclosing said plurality of circuit elements; and a contact element extending through said interlayer dielectric material and connecting to said contact region, said contact element forming a rectangular elongated interface with said contact region. 21. The semiconductor device of claim 20, wherein a shorter one of lateral dimensions of said rectangular interface is approximately 100 nanometers or less.
CONTACTS AND VIAS OF A SEMICONDUCTOR DEVICE FORMED BY A HARDMASK AND DOUBLE EXPOSURE FIELD OF THE PRESENT DISCLOSURE Generally, the subject matter disclosed herein relates to integrated circuits and more particularly to contact features for connecting contact areas or metal regions of semiconductor devices with conductive lines or regions, such as metal lines, in a higher wiring level of the semiconductor device, wherein the contact features are formed on the basis of advanced photolithography techniques. DESCRIPTION OF THE PRIOR ART The fabrication of microstructures, such as integrated circuits, requires tiny regions of precisely controlled size to be formed in one or more material layers of an appropriate substrate, such as a silicon substrate, an SOI (silicon on insulator) substrate, or other suitable carrier materials. These tiny regions of precisely controlled size are typically defined by patterning the material layer(s) by applying lithography, etch, implantation, deposition processes and the like, wherein typically at least in a certain stage of the patterning process a mask layer may be formed over the material layer(s) to be treated to define these tiny regions. Generally, a mask layer may consist of or may be formed by means of a layer of photoresist that is patterned by a lithographic process, typically a photolithography process. During the photolithographic process, the resist may be spin-coated onto the substrate surface and then selectively exposed to ultraviolet radiation through a corresponding lithography mask, such as a reticle, thereby imaging the reticle pattern into the resist layer to form a latent image therein. After developing the photoresist, depending on the type of resist, positive resist or negative resist, the exposed portions or the non-exposed portions are removed to form the required pattern in the layer of photoresist. Based on this resist pattern, actual device patterns may be formed by further manufacturing processes, such as etch, implantation, anneal processes, and the like. Since the dimensions of the patterns in sophisticated integrated microstructure devices are steadily decreasing, the equipment used for patterning device features has to meet very stringent requirements with regard to resolution and overlay accuracy of the involved fabrication processes. In this respect, resolution is considered as a measure for specifying the consistent ability to print minimum size images under conditions of predefined manufacturing variations. One important factor in improving the resolution is the lithographic process, in which patterns contained in the photo mask or reticle are optically transferred to the substrate via an optical imaging system. Therefore, great efforts are made to steadily improve optical properties of the lithographic system, such as numerical aperture, depth of focus and wavelength of the light source used. The resolution of the optical patterning process may therefore significantly depend on the imaging capability of the equipment used, the photoresist materials for the specified exposure wavelength and the target critical dimensions of the device features to be formed in the device level under consideration. For example, gate electrodes of field effect transistors, which represent an important component of modern logic devices, may have a length of 50 nanometers and less for currently produced devices, with significantly reduced dimensions for device generations that are currently under development.
Similarly, the line width of metal lines provided in the plurality of wiring levels or metallization layers may also have to be adapted to the reduced feature sizes in the device layer in order to account for the increased packing density. Consequently, the actual feature dimensions may be well below the wavelength of currently used light sources provided in current lithography systems. For example, currently in critical lithography steps an exposure wavelength of 193 nm may be used, which therefore may require complex techniques for finally obtaining resist features having dimensions well below the exposure wavelength. Thus, highly non-linear processes are typically used to obtain dimensions below the optical resolution. For example, extremely non-linear photoresist materials may be used, in which a desired photochemical reaction may be initiated on the basis of a well-defined threshold so that weakly exposed areas may substantially not change at all, while areas having exceeded the threshold may exhibit a significant variation of their chemical stability with respect to a subsequent development process. The usage of highly non-linear imaging processes may significantly extend the capability for enhancing the resolution for available lithography tools and resist materials. Due to the complex interaction between the imaging system, the resist material and the corresponding pattern provided on the reticle, even in highly sophisticated imaging techniques, which may possibly include optical proximity corrections (OPC) and the like, the consistent printing of latent images, that is, of exposed resist portions which may reliably be removed or maintained, depending on the type of resist used, may also significantly depend on the specific characteristics of the respective features to be imaged. For instance, it has been observed that line-like features having a specific design width and a design length may require specific exposure recipes for otherwise predefined conditions, such as a specified lithography tool in combination with a specific reticle and resist material, in order to reliably obtain the desired critical width dimension, while the length dimension is less critical, except for respective end portions, so-called end caps of the respective lines, which may also typically require respective corrections. Consequently, for other features having critical dimensions in two lateral directions, such as substantially square-like features, the same exposure recipe as used for line-like features may not be appropriate and may therefore require elaborated process parameters, for instance with respect to exposure dose and OPC, and the like. Furthermore, the respective process parameters in such a highly critical exposure process may have to be controlled to remain within extremely tight process tolerances compared to a respective exposure process based on line-like features, which may contribute to an increasing number of non-acceptable substrates, especially as highly scaled semiconductor devices are considered. Due to the nature of the lithography process, the corresponding process output may be monitored by respective inspection techniques in order to identify non-acceptable substrates, which may then be marked for reworking, that is, for removing the exposed resist layer and preparing the respective substrates for a further lithography cycle.
However, lithography processes for complex integrated circuits may represent one of the most dominant cost factors of the entire process sequence, thereby requiring a highly efficient lithography strategy so as to maintain the number of substrates to be reworked as low as possible. Consequently, the situation during the formation of sophisticated integrated circuits may increasingly become critical with respect to throughput. With reference to Figs 1a - 1c, a typical process sequence for forming vias or contacts and line-like features may be described in order to more clearly demonstrate the problems involved in the manufacturing process for forming advanced semiconductor devices. Fig 1a schematically illustrates a top view of a semiconductor device 100 in a manufacturing stage after a respective lithography process including a respective development step. The semiconductor device 100 may comprise a resist layer 110, which may be formed above a respective material layer as will be described later on with reference to Fig 1b. The resist layer 110 has formed therein respective resist openings 110a having lateral dimensions in a length direction L and a width direction W, indicated as 110L, 110W. The respective lateral dimensions 110L, 110W may be similar, if for instance a substantially square-like feature is to be formed on the basis of the resist openings 110a. As previously explained, for highly sophisticated applications, the corresponding lateral dimensions 110L, 110W may represent critical dimensions for the device layer under consideration, i.e., these lateral dimensions may represent the minimum dimensions to be printed in the corresponding device level. The respective resist openings 110a are to be used as etch masks for patterning the underlying material layer in order to form respective openings therein that, in turn, may be used for forming appropriate device features, such as contacts, vias, and the like, which may provide contact to overlying and underlying device features, such as metal regions, metal lines, and the like. For example, it may be assumed that a connection to a respective line feature is to be provided in a subsequent device level, wherein it may be assumed that the corresponding line features, indicated by dashed lines 120a, may have substantially the same critical dimension in the width direction W. Fig 1b schematically illustrates the semiconductor device 100 in a cross-sectional view taken along the line Ib-Ib from Fig 1a. The semiconductor device 100 in this manufacturing stage comprises a substrate 101, which may represent an appropriate carrier material including the respective material layers (not shown) which may comprise device features, such as transistors, capacitors, and the like. Furthermore, a dielectric layer 102 comprised of any appropriate dielectric material, such as silicon dioxide, silicon nitride, combinations thereof, and the like, is formed above the substrate 101 and comprises a respective opening 102a having similar lateral dimensions as the respective resist opening 110a. Furthermore, a further dielectric layer 103, for instance an ARC layer and the like, may be formed on the dielectric layer 102 in order to assist the respective exposure process for patterning the resist layer 110. The layer 103 may be formed of any appropriate material, such as silicon oxynitride, silicon nitride, and the like. The semiconductor device 100 as shown in Fig 1b may be formed on the basis of the following processes.
After providing respective device features in and above the substrate 101, the dielectric layer 102 may be deposited on the basis of well-established manufacturing techniques, which may comprise CVD (chemical vapour deposition) processes, and the like. For instance, sophisticated CVD techniques for forming silicon nitride, silicon dioxide, and the like, are well-established in the art, for instance for providing a reliable encapsulation of respective device features, such as transistors and the like. After the deposition of the layer 102, a respective planarization process may be performed, if required, so as to enhance the surface topography prior to forming the layer 103 and the resist layer 110. In other cases, the respective surface topography may be maintained and may be taken into account by appropriately forming the resist layer 110. The resist layer 110 may be prepared for a subsequent exposure process on the basis of established treatments, such as pre-exposure bake and the like, to enhance process uniformity. Thereafter, the resist layer 110 may be exposed on the basis of a respective photomask or reticle, which may comprise corresponding mask features that may possibly be designed on the basis of appropriate correction techniques in order to take into account the respective non-linearity of the corresponding exposure process, as previously described. In other cases, any other appropriate techniques, such as phase shift masks and the like, may be used. During the exposure process, typically a well-defined exposure field may be illuminated by an optical beam that is modulated by the pattern included in the reticle so as to transfer the reticle pattern into the resist layer 110 in order to define a respective latent image. That is, the latent image may be understood as a respective portion of the resist layer 110 receiving a significant amount of radiation energy in order to modify the photo-chemical behaviour of the corresponding resist material. In the present case, it may be assumed that a positive resist may be used which may become soluble upon exposure during a subsequent development step. Consequently, during the respective exposure process, the substrate 101 is appropriately aligned and thereafter a certain exposure dose is transferred into the respective exposure field under consideration in order to create the respective latent images, wherein the mask features and/or the imaging techniques may be selected such that a certain threshold of energy for generating a required photochemical modification may be accomplished within specified areas according to the desired design dimensions of the respective features. That is, in the above-described case, the exposure process is designed in combination with respective mask features so as to deposit sufficient energy within an area corresponding to the openings 110a having the lateral dimensions 110L, 110W in order to obtain a substantially complete removal of the exposed resist material during the subsequent development step.
Due to the minimum dimensions in both lateral directions, respective process parameters of the exposure process, such as exposure dose and the like, as well as of any pre-exposure and post-exposure processes, may have to be maintained within tightly set process margins in order to obtain the resist openings 110a, since even some incompletely opened areas within the resist opening 110a may result in corresponding irregularities during the subsequent etch process for forming the openings 102a in the dielectric layer 102. Hence, after developing the exposed resist layer 110, i.e., after removing exposed portions of the resist material, an inspection of the substrate 101 may be performed in order to identify exposure fields outside the respective specifications. Due to the very tight process margins for forming the critical openings 110a, a corresponding high number of non-acceptable exposure fields, each of which may be exposed on the basis of an individually adjusted exposure dose, may occur, in particular if highly scaled devices are considered, in which the respective lateral dimensions 110L, 110W may be approximately 100 nanometers and less. Fig 1c schematically illustrates the device 100 in a cross-sectional view according to the section Ic-Ic in Fig 1a in an advanced manufacturing stage. Here, the opening 102a may be filled with an appropriate material, such as a metal, and a further dielectric layer 104, which comprises a further line-like feature 104a, may be formed above the layer 102. Furthermore, a resist layer 120, possibly in combination with a respective ARC layer 113, may be formed above the dielectric layer 104, including respective trench-like openings 120a having the lateral dimension 110W. In this case, it is assumed that the width of the resist opening 120a may substantially correspond to the critical dimensions of the resist openings 110a. A respective process flow for forming and patterning the layers 104, 113 and 120 may comprise substantially the same process steps as described with reference to Fig 1b. However, as previously explained, during the corresponding lithography sequence including any pre- and post-exposure processes, it has been observed that corresponding process tolerances may be less critical compared to the exposure process for forming the openings 110a, which is believed to be caused by the lack of respective boundary conditions in the lateral length direction L. For example, the respective resist opening 120a may be formed with a reduced exposure dose compared to the openings 110a, while also other process parameters may be less critical, thereby providing a moderately wider process window for the corresponding lithography process for forming the line-like features 120a. Since respective resist openings 110a for contacts and vias may have to be provided at various manufacturing stages, the very tight process tolerances to be met may thus significantly contribute to a reduced overall throughput of the per se very cost intensive lithography module, which may therefore significantly contribute to overall production costs. Furthermore, the respective exposure processes may be restricted to highly advanced lithography tools only, thereby even more increasing the overall production costs.
Furthermore, the fabrication of contacts on the basis of substantially circular cross-sections may contribute to significant yield losses due to patterning related process fluctuations as described above, while also the contact resistance, for instance for connecting the very first metallization layer with the active semiconductor regions, is moderately high. In view of the situation described above, the present disclosure relates to semiconductor devices and techniques for forming critical contact elements while avoiding or at least reducing the effects of one or more of the problems identified above. SUMMARY OF THE DISCLOSURE Generally, the subject matter disclosed herein relates to process techniques and semiconductor devices in which a critical exposure process, for instance during the formation of contact elements connecting to contact areas of transistors and the like, may be replaced by two less critical exposure processes using two successively formed resist masks obtained by the two less critical exposure processes in order to appropriately pattern a hard mask layer, which may then be used for transferring the actual contact opening into the lower lying dielectric material. To this end, each of the resist masks used for patterning the hard mask layer may exhibit at least one lateral dimension that may be obtained with less restrictive constraints in view of the photolithography process, as previously described, thereby contributing to overall increased process flexibility, since less sophisticated lithography tools may be used or, for given lithography tools, the error rate of the entire exposure process and the related patterning sequence may be reduced. For example, the mask layer may be patterned in a first step on the basis of a resist mask which may have an elongated shape, thereby relaxing overall exposure related constraints, while the desired lateral dimension along the length direction of the initial opening in the hard mask layer may then be determined on the basis of a second resist mask, which may be provided by a separate exposure step, wherein also at least one or even both lateral dimensions may be selected as "non-critical" dimensions depending on the desired size of the final contact opening. Consequently, any process related constraints with respect to the critical contact patterning sequence may significantly be relaxed while also providing the possibility of appropriately adjusting the size of the corresponding contact elements, at least in one lateral dimension, according to device requirements, for instance in view of reducing the overall contact resistivity. Similarly, respective "contacts" or vias may be formed in the metallization levels of sophisticated semiconductor devices, in which also more or less critical exposure and patterning process sequences may be required. One illustrative method disclosed herein comprises forming a first resist mask above a hard mask layer formed on a material layer of a semiconductor device. The method further comprises forming a first opening in the hard mask layer on the basis of the first resist mask, wherein the first opening has a first dimension along a first lateral direction and a second dimension along a second lateral direction that is different from the first lateral direction, and wherein the first dimension is less than the second dimension.
Additionally, the method comprises forming a second resist mask above the hard mask layer, wherein the second resist mask has a second opening that defines an intersection area with the first opening. Finally, the method comprises forming a contact opening in the material layer on the basis of the intersection area. A further illustrative method disclosed herein comprises forming a hard mask layer above an interlayer dielectric material of a semiconductor device. Moreover, an opening is formed in the hard mask layer by using a first resist mask, wherein the opening has a rectangular portion. The method further comprises forming a mask opening in the rectangular portion using a second resist mask, wherein the mask opening extends through the hard mask layer. Additionally, the method comprises forming a contact opening in the interlayer dielectric material by using the mask opening, wherein the contact opening extends through the interlayer dielectric material. One illustrative semiconductor device disclosed herein comprises a plurality of circuit elements formed in and above a semiconductor layer. Furthermore, a contact region is provided and connects to at least one of the plurality of circuit elements, and an interlayer dielectric material encloses the plurality of circuit elements. Furthermore, the semiconductor device comprises a contact element extending through the interlayer dielectric material and connecting to the contact region, wherein the contact element forms a rectangular elongated interface with the contact region. BRIEF DESCRIPTION OF THE DRAWINGS Further embodiments of the present disclosure are defined in the appended claims and will become more apparent with the following detailed description when taken into account with the accompanying drawings, in which: Fig 1a schematically illustrates a top view of a semiconductor device including resist openings having critical dimensions in two lateral directions formed in accordance with conventional exposure strategies; Figs 1b and 1c schematically illustrate cross-sectional views of the semiconductor device shown in Fig 1a; Fig 2a schematically illustrates a cross-sectional view of a semiconductor device in which an interlayer dielectric material is to be patterned on the basis of a hard mask and two less critical lithography steps according to illustrative embodiments; Fig 2b schematically illustrates a top view indicating positions of corresponding contacts to be formed; Fig 2c schematically illustrates the semiconductor device in a further advanced manufacturing stage, in which the position and the size of a contact opening are defined on the basis of a second resist mask according to illustrative embodiments; Fig 2d schematically illustrates a top view of the device of Fig 2c, indicating an intersection area for defining the size and position of respective contact elements; Figs 2e and 2f schematically illustrate cross-sectional views of the semiconductor device during various manufacturing stages in forming the contact opening on the basis of a hard mask layer and the resist mask according to illustrative embodiments; Fig 2g schematically illustrates a top view after forming the contact openings so as to extend to a contact area; Fig 2h schematically illustrates the semiconductor device in a further advanced manufacturing stage in which residues of the hard mask layer may be removed according to illustrative embodiments; Figs 2i and 2j schematically illustrate a cross-sectional view and a top view, respectively, of the
semiconductor device in various manufacturing stages in patterning a hard mask layer comprising two sub-layers according to still further illustrative embodiments; and Figs 2k - 2t schematically illustrate cross-sectional views and top views, respectively, of a semiconductor device during various manufacturing stages in forming a contact opening on the basis of a hard mask including a plurality of sub-layers and using two separately formed resist masks according to still further illustrative embodiments.DETAILED DESCRIPTION Although the present disclosure is described with reference to the embodiments as illustrated in the following detailed description and in the drawings, the detailed description and the drawings are not intended to limit the present disclosure to the particular embodiments disclosed therein, but rather the described embodiments merely exemplify the various aspects of the present disclosure, the scope of which is defined by the appended claims. Generally, the present disclosure provides process techniques and semiconductor devices for enhancing the patterning of critical contact elements, for instance contact elements connecting to contact areas of circuit elements, such as transistors, or contacts in the form of vias connecting to a lower lying metal region in the metallization system of sophisticated semiconductor devices. Typically, contacts and vias may have similar dimensions in the respective lateral directions according to conventional strategies, thereby requiring tight process parameter control and sophisticated exposure tools during the corresponding process for forming the respective resist mask, as previously explained. In order to significantly relax the respective constraints, i.e., to provide less restrictive process windows for the overall process sequence, advantage is taken of the fact that critical dimensions in one specific lateral dimension may be obtained on the basis of less critical lithography requirements, as long as the corresponding orthogonal lateral dimension is significantly greater. Consequently, by using two separately formed resist masks based on less critical mask openings, a corresponding intersection area may be formed in the hard mask layer, which may have the desired design dimensions in both lateral directions without requiring highly complex and critical exposure process techniques. That is, at the respective intersection area formed by two independently provided resist masks in combination with the hard mask, the desired overall lateral dimensions of the contact opening to be formed may be defined as may be required by design rules, without having to perform one highly critical lithography step. For instance, if critical dimensions in both lateral directions may be required, each of the corresponding resist masks may still be provided on the basis of less critical lithography parameters, while on the other hand increased flexibility may be provided in appropriately adapting at least one lateral dimension of the finally obtained contact opening, wherein at least one of the resist masks may be formed on the basis of a non-critical lithography process since both lateral dimensions of the corresponding mask opening may be selected to be well above any critical dimension. Respective contact failures may significantly be reduced and also increased process flexibility may be obtained, for instance in terms of the possibility of using less advanced lithography tools and the like.
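The geometric core of this approach, i.e., that the final contact dimensions are given by the intersection of two individually non-critical mask openings, amounts to a rectangle intersection. The following C sketch only illustrates the principle; the dimensions used are hypothetical examples, not design values from this disclosure.

#include <stdio.h>

/* Axis-aligned rectangle in nanometres: [x0, x1) x [y0, y1). */
typedef struct { int x0, y0, x1, y1; } Rect;

/* Intersection of two mask openings; an empty result means the two
 * resist masks do not overlap and no contact opening is defined. */
static Rect intersect(Rect a, Rect b)
{
    Rect r = {
        a.x0 > b.x0 ? a.x0 : b.x0,  a.y0 > b.y0 ? a.y0 : b.y0,
        a.x1 < b.x1 ? a.x1 : b.x1,  a.y1 < b.y1 ? a.y1 : b.y1
    };
    if (r.x1 <= r.x0 || r.y1 <= r.y0)
        r = (Rect){0, 0, 0, 0};
    return r;
}

int main(void)
{
    /* First opening: critical 100 nm width, relaxed 1000 nm length. */
    Rect first  = {0, 0, 100, 1000};
    /* Second opening: both dimensions relaxed, positioned over the
     * desired contact location. */
    Rect second = {-150, 400, 250, 520};
    Rect c = intersect(first, second);
    printf("contact opening: %d nm x %d nm\n",
           c.x1 - c.x0, c.y1 - c.y0);   /* 100 nm x 120 nm */
    return 0;
}

Here the critical 100 nm dimension comes from the narrow direction of the first opening, while the orthogonal dimension is set entirely by the second mask, so neither exposure has to print a feature that is critical in both lateral directions.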
With reference to Figs 2a - 2t, further illustrative embodiments will now be described in more detail, wherein reference may also be made to Figs 1a - 1c, if appropriate. Fig 2a schematically illustrates a cross-sectional view of a semiconductor device 200 comprising a substrate 201 above which may be formed a semiconductor layer 220. The substrate 201 may represent any appropriate carrier material for forming thereabove the semiconductor layer 220, which may be provided in the form of a silicon-based layer, a germanium layer or any other appropriate semiconductor material that may be used for forming corresponding circuit elements 221 therein and thereabove. The circuit elements 221 may represent transistors, capacitors and the like, as may be required in view of the overall circuit configuration of the device 200. In the embodiment shown, the circuit element 221 may represent a field effect transistor, wherein it should be understood that any other circuit elements, such as bipolar transistors and the like, may be used as required for the device 200. In sophisticated applications, the circuit elements 221 may be formed on the basis of critical device dimensions, such as a length 222l of a gate electrode 222, which may be approximately 50 nm and less, depending on the technology standard under consideration. Consequently, critical dimensions of other device levels, such as a contact structure 230, or any metallization level (not shown), may also have to be formed on the basis of respective design dimensions that are adapted to the critical dimensions in the device level 220. The circuit elements 221 may further comprise respective contact areas 223, which may be formed on or in the semiconductor layer 220 and/or in the gate electrode 222 and which may include a metal-containing material, such as a metal silicide and the like. Due to the reduced feature sizes in the device level 220, the corresponding lateral dimensions of the contact areas 223 may also be reduced, thereby requiring highly sophisticated and thus critical patterning regimes for forming corresponding contact elements in the contact level 230. In the embodiment shown, the contact level 230 may comprise an interlayer dielectric material 232, for instance in the form of silicon dioxide and the like, possibly in combination with an etch stop material 231, such as silicon nitride and the like, or any other appropriate etch stop material. It should be appreciated, however, that the material composition of the dielectric components of the contact level 230 may be selected in any other appropriate manner so as to comply with device and process requirements for the device 200. For example, the contact etch stop layer 231 may frequently be provided as a highly stressed dielectric material in order to enhance performance of field effect transistors due to a corresponding strain that may be induced in the semiconductor layer 220 below the gate electrode 222. On the other hand, the material composition of the interlayer dielectric material 232 may be selected so as to provide for the desired chemical and mechanical characteristics for maintaining integrity of the circuit element 221 and to provide an appropriate platform for forming further metallization layers above the contact level 230.
Moreover, in the embodiment shown, the contact level 230 may further comprise a hard mask material 233, for instance in the form of silicon nitride, when well-established dielectric materials of the semiconductor process are to be used in terms of providing a high degree of compatibility with conventional process techniques. In other cases, any other material may be used that provides for a desired high etch selectivity with respect to at least the interlayer dielectric material 232. For example, silicon carbide, silicon oxynitride or certain high-k dielectric materials, for instance hafnium oxide and the like, may be used for this purpose. It should be appreciated that high-k dielectric materials may increasingly be used in advanced semiconductor devices in order to enhance the overall performance of corresponding transistor elements. Some of these high-k dielectric materials may also exhibit a high etch selectivity with respect to a plurality of well-established materials used in the semiconductor production process and may readily be used as a hard mask material. Moreover, a resist mask 210 is formed above the hard mask layer 233 and comprises respective openings 210a, which may have lateral dimensions, at least in one lateral direction, that are to be understood as non-critical dimensions. That is, in some illustrative embodiments, a width 210w of the openings 210a may be selected to be greater than a corresponding lateral dimension of a contact opening to be formed in the interlayer dielectric material 232. Similarly, a length direction (not shown in Fig 2a) may be selected to correspond to a critical dimension or may be selected to be greater than a corresponding critical dimension, i.e., a lateral dimension of contact elements determined by the corresponding design rules. In the embodiment shown in Fig 2a, it may be assumed that the width 210w may substantially correspond to the design width of a corresponding contact element to be formed in the contact level 230, while a corresponding length dimension may be significantly greater than the dimension 210w. Fig 2b schematically illustrates a top view of the semiconductor device 200, in which an illustrative example of a configuration of the openings 210a is shown. In this example, the openings 210a may have a length dimension 210l that is significantly greater than the corresponding width 210w, which may be selected so as to correspond to the overall lateral dimensions of the circuit elements 221, so that corresponding contact elements may be positioned with a required lateral offset to each other. For instance, respective positions 234a, 234b may correspond to the positions and the lateral size of contact elements to be formed so as to connect to the contact areas 223a and 223b, respectively. Thus, the size and position of the contacts 234a, 234b may be defined by the position and the width 210w of the openings 210a while, in the length direction 210l, a corresponding restriction of the positions and sizes of the contact elements 234a, 234b may be accomplished on the basis of a further resist mask, which may be provided in a later manufacturing stage. The semiconductor device 200 as shown in Figs 2a and 2b may be formed on the basis of the following processes. After forming the corresponding circuit elements 221 using well-established process techniques, the contact level 230 may be formed. For this purpose, the materials 231 and 232 may be provided in accordance with well-established process techniques, i.e.,
any plasma assisted deposition processes or thermally activated deposition techniques may be used, possibly followed by a corresponding planarization step for planarizing the resulting surface topography. Thereafter, the hard mask layer 233 may be formed, for instance by plasma assisted CVD (chemical vapour deposition), thermally activated CVD, spin-on techniques, physical vapour deposition and the like, depending on the characteristics of the hard mask material 233. Thereafter, the resist mask 210 may be formed by using an appropriate lithography mask in order to expose the resist material so as to obtain latent images corresponding to the openings 210a. As previously discussed, since at least one of the lateral dimensions 210w, 210l may be significantly greater compared to a corresponding critical design dimension, a corresponding exposure process may be performed on the basis of less critical process constraints. It should be appreciated that, if required, the layer 233 or a portion thereof may act as an ARC (antireflective coating) material. After forming the resist mask 210, in one illustrative embodiment, a selective etch process may be performed in which the corresponding openings 210a may be transferred into the mask material 233 so as to extend substantially completely through the mask layer 233, while in other embodiments the corresponding openings may extend into the mask material 233 without completely extending therethrough, as will be described later on in more detail. Respective anisotropic plasma assisted etch techniques are well-established for a plurality of materials, and corresponding recipes may be used for patterning the hard mask material 233. For example, a plurality of process recipes are available for etching silicon nitride in the presence of a resist material, wherein etch selectivity with respect to the interlayer dielectric material 232 may also be obtained. Hence, in a corresponding etch process, the material 232 may act as an efficient etch stop material. Fig 2c schematically illustrates the semiconductor device 200 in a further advanced manufacturing stage. As illustrated, openings 233a are provided in the hard mask material 233 so as to extend to the interlayer dielectric material 232, wherein the corresponding lateral dimensions may substantially correspond to the dimensions 210w, 210l (cf. Fig 2b). Furthermore, a second resist mask 211 may be formed above the hard mask layer 233 and may comprise a corresponding opening 211a having appropriate lateral dimensions so as to define, in combination with the hard mask 233, an intersection area 234 having lateral dimensions that may substantially correspond to the lateral dimensions of a contact element to be formed so as to connect to the contact area 223a. Fig 2d schematically illustrates a top view of the semiconductor device 200 of Fig 2c. For convenience, the openings 211a defined by the resist mask 211 are indicated as dashed lines, and the corresponding intersection 234 defined by the previously formed openings 233a and the openings 211a is illustrated as a hatched area. As is evident from Fig 2d, corresponding lateral dimensions of the openings 211a may be selected so as to adjust a length dimension 234l of the intersection area 234 in accordance with design requirements for the corresponding contact elements.
For example, if a reduced overall contact resistance is desired, the length dimension 234l may be selected moderately high, as is compatible with the overall device configuration, while in other cases the dimension 234l may substantially correspond to a critical dimension, if a substantially square-like configuration of the corresponding contact elements is desired. On the other hand, the width dimension of the intersection area 234 is defined by the width 210w, while the opening 211a may significantly extend beyond the opening 233a, thereby also providing for moderately relaxed process conditions during a corresponding lithography process for forming the resist mask 211. It should be appreciated that, in the above-described embodiments, the opening 211a may be provided in the form of the resist mask 211, while the openings 233a may have been formed in the mask layer 233 in the preceding manufacturing sequence. In other cases, the openings 233a may be formed so as to correspond to the openings 211a, while the resist mask 211 may be formed such that the corresponding openings formed therein correspond to the lateral dimensions of the openings 233a, as shown in Fig 2d. At any rate, the corresponding lithography processes for defining the openings 233a and 211a may be performed on the basis of less restrictive lithography parameters compared to a process sequence in which both lateral dimensions of a corresponding contact element have to be defined on the basis of a single lithography step. Referring again to Fig 2c, it should be appreciated that the resist mask 211 may be formed, in some illustrative embodiments, on the basis of an additional planarization material (not shown), which may be provided so as to obtain a planarized surface topography, thereby filling the openings 233a previously formed in the hard mask layer 233. For example, any appropriate polymer material may be deposited by spin-on techniques and may be used as a planarization material, and possibly as an ARC material, if required. Thereafter, the resist material may be provided and may be patterned on the basis of a corresponding lithography process, as previously described. If required, the corresponding planarization material may be removed from within the opening 211a, for instance on the basis of a specifically designed etch process, while in other cases the corresponding material may be removed during an etch process 213 that is designed to etch the interlayer dielectric material 232, wherein the corresponding planarization material may initially be removed. The etch process 213 may be performed on the basis of well-established anisotropic etch techniques when, for instance, silicon dioxide is used as the interlayer dielectric material 232 in combination with silicon nitride material for the mask layer 233. As previously explained, any other materials may also be used as long as a pronounced etch selectivity between the material 233 and the interlayer dielectric material 232 is obtained. In the illustrative embodiment shown in Fig 2c, the resist mask 211, possibly in combination with a corresponding planarization material, may provide for a reliable coverage of any portions of the openings 233a outside the opening 211a. Fig 2e schematically illustrates the semiconductor device 200 in a further advanced manufacturing stage, in which a contact opening 235 is formed in the interlayer dielectric material 232, which may have lateral dimensions as defined by the intersection area 234 (cf. Fig 2d).
Moreover, depending on the etch recipe used, a significant portion of the resist mask 211 may also have been consumed during the preceding etch process, while in other cases, as previously explained, a corresponding planarization material, as indicated by 212, may optionally provide for additional etch stop capabilities. In other cases, the hard mask layer 233 may comprise two or more sub layers, as will be described later on in more detail, when a corresponding resist material, possibly in combination with the fill material 212, may not provide for the required etch stop capabilities. Fig 2f schematically illustrates the semiconductor device 200 when exposed to a further etch ambient 214 that is designed to remove material of the etch stop layer 231 selectively to the interlayer dielectric material 232. For example, highly selective anisotropic etch techniques are well-established for etching silicon nitride material selectively to silicon dioxide material. In the embodiments shown, it may be assumed that the hard mask material 233 may also be comprised of silicon nitride, which may thus also be removed, at least within the opening 211a (cf. Fig 2e), and which may also be removed in other portions when the resist mask 211 (cf. Fig 2e) is finally completely consumed. In other illustrative embodiments, prior to performing the etch process 214, the remaining resist mask 211 may be removed by any appropriate resist strip process and the exposed mask layer 233 may be etched along with the etch stop layer 231, wherein a thickness of the mask layer 233 may appropriately be adjusted so as to be substantially completely removed during the etch process 214. Fig 2g schematically illustrates a top view of the device of Fig 2f. As illustrated, the contact areas 223a, 223b are exposed via the corresponding contact openings 235, which may have lateral dimensions that substantially correspond to the dimensions of the intersection area 234 (cf. Fig 2d). Furthermore, in Fig 2g it may be assumed that portions of the initial hard mask layer 233 are still present outside of an area corresponding to the opening 211a (cf. Fig 2e) and outside of the openings 233a (cf. Fig 2d). In other cases, as discussed before, the layer 233 may be substantially completely removed during the etch process 214. Fig 2h schematically illustrates the semiconductor device 200 in a further advanced manufacturing stage. As illustrated, the contact opening 235 may be filled with a metal-containing material 236, such as tungsten, copper, aluminium and the like, possibly in combination with a corresponding barrier material 237, such as titanium nitride, titanium, tantalum, tantalum nitride and the like, depending on the overall device requirements. The materials 237, 236 may be deposited on the basis of well-established process techniques, such as CVD, sputter deposition, electroless deposition, electroplating and the like, depending on the materials used. Furthermore, the semiconductor device 200 may be subjected to a removal process 215, for instance in the form of a CMP (chemical mechanical polishing) process, so as to remove excess material of the layers 237, 236, while in some illustrative embodiments residues of the hard mask layer 233 may also be removed during the process 215. Fig 2i schematically illustrates a cross-sectional view of the semiconductor device 200 after the removal process 215 of Fig 2h.
As illustrated, a contact element 238 may be formed which defines an interface 238s with the contact area 223a, the lateral extension of which may be defined on the basis of the less critical photolithography processes used to define the intersection area 234 (cf. Fig 2d). Fig 2j schematically illustrates a top view of the interface 238s, which may have a substantially rectangular configuration with a width 238w and a length 238l, which may be determined by the corresponding lateral dimensions of the intersection area 234 and the corresponding etch parameters of the process 214 (cf. Fig 2f), since a corresponding inclination of respective sidewalls of the contact opening 235 (cf. Fig 2f) may be obtained. As previously discussed, at least one of the lateral dimensions 238w, 238l may be varied with increased flexibility in order to adapt the overall characteristics of the contact element 238 to the device requirements. For instance, if the lateral dimension 238w is substantially restricted by the design rules, in view of closely spaced neighbouring circuit elements and the like, the length 238l may be selected appropriately large in order to reduce the overall contact resistance of the contact element 238. In this case, well-established "conventional" metal-containing materials, such as tungsten, may be used on the basis of less critical lithography techniques, even for highly scaled semiconductor devices, since an increased overall area of the interface 238s may compensate for the reduced conductivity of tungsten material compared to highly conductive metals, such as copper and the like, while critical dimensions in the width direction may nevertheless be respected. Fig 2k schematically illustrates the semiconductor device 200 according to further illustrative embodiments in which the hard mask layer 233 may comprise at least two different sub layers 233b, 233c. Thus, as previously discussed, if a resist mask is considered inappropriate for withstanding an etch ambient for etching through the interlayer dielectric material 232, the opening 233a, indicated as dashed lines, may be formed in the upper sub layer 233c, which may be accomplished on the basis of a corresponding resist mask, such as the mask 210 (cf. Fig 2a). During the corresponding patterning process, the layer 233b may act as an etch stop layer. Fig 2l schematically illustrates the device 200 in a further advanced manufacturing stage in which the etch mask 211 may define the opening 211a, which may then be used for deepening the opening 233a so as to extend through the layer 233b. During the corresponding patterning process, the layer 233c may act as a mask in combination with the resist mask 211, which may be accomplished by, for instance, providing the material 233c in the form of a silicon dioxide material and the material 233b as a silicon nitride material. Hence, when forming the openings 233a, well-established etch techniques may be used for etching silicon dioxide selectively to silicon nitride, and thereafter a further selective etch process may be used for selectively etching silicon nitride with respect to silicon dioxide material, thereby obtaining the opening 233a so as to extend through the layer 233b within the opening 211a.
Thereafter, the resist mask 211 may be removed and the further processing may be continued as previously described, wherein the layer 233b may efficiently be used as a mask material while the layer 233c may be consumed during the corresponding process for etching through the interlayer dielectric material 232. That is, during the etching of the material 232, which may be comprised of silicon dioxide, the material of the layer 233c may also be removed. Thus, in this case an efficient patterning regime may also be obtained on the basis of less critical lithography steps, while a significant etch resistivity of the resist mask 211 may not be required. With reference to Figs 2m - 2t, further illustrative embodiments will now be described in which the hard mask layer may comprise more than two sub layers. Fig 2m schematically illustrates the semiconductor device 200 in which the hard mask layer 233 comprises the first sub layer 233b, the second sub layer 233c and a third sub layer 233d. For example, the sub layers 233b, 233d may be comprised of silicon nitride while the layer 233c may be comprised of silicon dioxide. It should be appreciated, however, that any other materials may be used as long as the desired etch selectivity of the layer 233d with respect to the layer 233c and to the layer 232 is provided. Fig 2n schematically illustrates the semiconductor device 200 with the resist mask 210 having the openings 210a formed therein, as also previously explained, in order to transfer the openings 210a into the layer 233d on the basis of a corresponding selective etch process 217. Fig 2o schematically illustrates the semiconductor device 200 after the etch process 217 and the removal of the resist mask 210 (cf. Fig 2n). Thus, the openings 233a are formed in the layer 233d in accordance with design requirements, as previously explained. Fig 2p schematically illustrates the semiconductor device 200 with the resist mask 211 formed so as to have the corresponding openings 211a, thereby defining the intersection area 234. Fig 2q schematically illustrates a top view of the semiconductor device 200 of Fig 2p. As illustrated, the openings 233a may expose the layer 233c while the remaining portions of the device 200 may be covered by the layer 233d. Furthermore, the openings 211a, indicated by dashed lines, may define, in combination with the openings 233a, the intersection area 234. Fig 2r schematically illustrates the semiconductor device 200 in a further advanced manufacturing stage in which the openings 233a corresponding to the intersection area extend through the entire hard mask layer 233, while the openings 233a outside of the intersection areas 234 are formed in the layer 233d only. This may be accomplished by performing an appropriate etch process on the basis of the mask 211 (cf. Fig 2p), in which the layer 233c and the layer 233b may be etched through, which may be accomplished on the basis of two different etch chemistries or on the basis of a single etch chemistry that may etch both materials of the layers 233c and 233b with a moderately high etch rate. For instance, when the layer 233c is comprised of silicon dioxide and the layer 233b is comprised of silicon nitride, corresponding selective etch recipes are available and may be used for a corresponding etch sequence. In other cases, an etch recipe without a pronounced selectivity with respect to these materials may be used, wherein a certain degree of material removal may also occur in the layer 232.
Fig 2s schematically illustrates the semiconductor device 200 when exposed to the etch process 213 designed to etch through the interlayer dielectric material 232. During the process 213, at least the layer 233b may provide for integrity of the material 232 within the openings 233a which do not correspond to the intersection area 234. In other cases, the layer 233d may provide for the desired etch stop capabilities, while in the intersection area 234 the contact opening 235 may be formed so as to extend to the etch stop layer 231 formed above the contact area 223a. Fig 2t schematically illustrates the semiconductor device 200 when exposed to the etch process 214 designed to etch through the contact etch stop layer 231. In some illustrative embodiments, the layer 233b may also provide for etch stop capabilities during the process 214, for instance when comprised of a material having an increased etch resistivity with respect to the silicon nitride etching ambient of the process 214. For instance, as previously discussed, high-k dielectric materials may increasingly be used during semiconductor processing, and a corresponding material may also advantageously be used for the layer 233b, thereby providing a pronounced etch selectivity with respect to, for instance, silicon dioxide, silicon nitride and the like. In other cases, the layer 233b may be provided in the form of a silicon carbide material, which may also have a significantly reduced etch rate with respect to an etch chemistry designed to etch through the etch stop layer 231. In still other illustrative embodiments, the etch chemistry 214 may provide for a high degree of etch selectivity with respect to silicon dioxide, and thus a removal of the layer 233b may not be considered disadvantageous since the process may stop on the interlayer dielectric material 232. On the other hand, the layer 233c may provide for etch stop capabilities outside of the openings 233a, thereby providing for enhanced integrity of the material 232. Thereafter, the further processing may be continued, for instance by filling in any metal-containing material and removing excess material thereof by CMP, while also removing the layers 233c, 233b. As a result, the present disclosure provides semiconductor devices and techniques for forming the same, in which contact elements may be formed on the basis of two independent resist masks, the mask openings of which may be formed with at least one non-critical lateral dimension, thereby providing for enhanced conditions during the corresponding photolithography processes. For example, a resist mask may be formed first, which may have a mask opening based on two non-critical lateral dimensions, followed by a corresponding patterning sequence, after which a further resist mask may be formed wherein at least one lateral dimension may have a critical dimension, whereas the other lateral dimension may also be selected non-critical, wherein a commonly defined intersection area may result in the desired overall design dimensions of the contact element under consideration. In other cases, as described above, the first resist mask may comprise one critical dimension, while the second resist mask may be provided with one or no critical lateral dimension at all, depending on the overall device requirements.
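As a loose illustration of this two-mask principle (not part of the disclosure itself), the following C sketch computes the intersection of two rectangular mask openings, each of which is critical in only one lateral direction; all names and the dimension values are hypothetical and chosen for illustration only:

#include <stdio.h>

/* Hypothetical model of a rectangular mask opening (dimensions in nm). */
typedef struct {
    double x, y;          /* lower-left corner of the opening       */
    double width, length; /* lateral dimensions of the mask opening */
} mask_opening;

/* The intersection inherits the critical width from one mask and the
 * critical length from the other; an empty overlap yields all zeros. */
static mask_opening intersect(mask_opening a, mask_opening b)
{
    mask_opening r = { 0.0, 0.0, 0.0, 0.0 };
    double x0 = (a.x > b.x) ? a.x : b.x;
    double y0 = (a.y > b.y) ? a.y : b.y;
    double x1 = (a.x + a.width  < b.x + b.width)  ? a.x + a.width  : b.x + b.width;
    double y1 = (a.y + a.length < b.y + b.length) ? a.y + a.length : b.y + b.length;
    if (x1 > x0 && y1 > y0) {
        r.x = x0; r.y = y0;
        r.width = x1 - x0; r.length = y1 - y0;
    }
    return r;
}

int main(void)
{
    /* First mask: critical width (e.g. 50 nm), non-critical length. */
    mask_opening first  = { 0.0, 0.0, 50.0, 400.0 };
    /* Second mask: non-critical width, critical length (e.g. 60 nm). */
    mask_opening second = { -100.0, 150.0, 250.0, 60.0 };

    mask_opening contact = intersect(first, second);
    printf("contact opening: %.0f nm x %.0f nm\n", contact.width, contact.length);
    return 0;
}

The sketch prints "contact opening: 50 nm x 60 nm": neither exposure has to resolve both critical dimensions at once, yet their overlap carries the critical width of the first mask and the critical length of the second.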
Furthermore, the patterning of the hard mask material may be accomplished on the basis of an additional planarization material, such as a polymer material, if enhanced surface conditions are required during the lithographic patterning of the second resist mask. In other illustrative embodiments, in addition to providing a corresponding planarization material, the hard mask material may be provided in the form of two or more sub layers, at least two of which may have a different material composition so as to enhance the overall patterning sequence, for instance when a resist material may not provide for a sufficient etch resistivity to withstand the etch attack during an anisotropic etch process for patterning the interlayer dielectric material. It should be appreciated that, although the embodiments described above may refer to contact elements connecting to a circuit element, such as a transistor, in other cases any critical contact elements, such as vias connecting different metallization layers, may also be formed on the basis of the principles disclosed above. Further modifications and variations of the present disclosure will be apparent to those skilled in the art in view of this description. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the principles disclosed herein. It is to be understood that the forms shown and described here are to be taken as the presently preferred embodiments.
An integrated circuit such as a SoC may indicate the critical battery status without powering on a substantial portion of the SoC, including the host processing cores. The SoC may include a microcontroller, which may cause the critical battery status data to be stored in a static memory, and the display unit may retrieve such data from the static memory to display a visual symbol on the screen. The other portions of the SoC, such as the dynamic memory, system agent, media processors, and memory controller hubs, may be powered down while the critical battery status is displayed in visual form on the screen.
1. An integrated circuit comprising:
a power control unit that generates a status identifier if the charge on a battery is reduced to a critical battery level;
a controller to:
transmit, in response to detecting the status identifier, a request to power up a first portion of the integrated circuit, wherein the first portion includes a static memory and a display controller,
configure one or more configuration registers with configuration values, and
store critical battery state data into the static memory; and
the display controller to:
determine, based on the configuration values, that the critical battery state data is to be retrieved from the static memory,
retrieve the critical battery state data from the static memory, and
present the critical battery state data in a visual form on a display.
2. The integrated circuit of claim 1, wherein said controller further comprises power indication logic to detect occurrence of said status identifier and to generate said request to energize said first portion, wherein said request comprises an identifier of the blocks in the first portion.
3. The integrated circuit of claim 2, further comprising a memory, wherein said power indication logic causes said critical battery state data stored in said memory to be transferred to said static memory.
4. The integrated circuit of claim 2, said controller further comprising a display driver, wherein said display driver writes said configuration values into said one or more configuration registers in response to receiving a signal from said power indication logic.
5. The integrated circuit of claim 3, the static memory further comprising a control unit and one or more memory blocks, wherein the critical battery state data is stored in the one or more memory blocks.
6. The integrated circuit of claim 4, said one or more configuration registers comprising a first register, wherein said first register comprises a power indicator bit (PIB), a static random access memory identifier (SRAM ID) field, a start address (STRT ADDR) field, and an end address (END ADDR) field, wherein the PIB is configured with a first value, the SRAM ID field is configured with an identifier of the static memory, the STRT ADDR field is configured with a start address of the memory block from which the critical battery state data is to be retrieved, and the END ADDR field is configured with the last address of the memory block storing the critical battery state data.
7. The integrated circuit of claim 6, wherein storing the first value in the PIB indicates that the configuration values stored in the SRAM ID field, the STRT ADDR field, and the END ADDR field are valid.
8. The integrated circuit of claim 6, wherein storing a second value in the PIB indicates that the values in one or more of the SRAM ID field, the STRT ADDR field, and the END ADDR field are
invalid.
9. The integrated circuit of claim 8, wherein said display controller stores the critical battery state data into a frame buffer after said critical battery state data is retrieved from said memory block of said static memory and before said visual form is displayed on said display screen.
10. The integrated circuit of claim 9, wherein said display controller further comprises a control unit, wherein said control unit presents said critical battery state data in a visual form on the display.
11. A method in an integrated circuit, comprising:
generating a status identifier in response to detecting that a critical battery level is reached;
transmitting an identifier of one or more blocks to be powered on in response to the occurrence of the status identifier;
powering the one or more blocks based on the identifier of the one or more blocks, wherein the one or more blocks comprise static memory blocks and a display unit;
storing critical battery state data in the static memory block, wherein the static memory block is one of the one or more blocks that are powered on; and
displaying the battery status in a visual form to indicate that the battery is being charged while a substantial portion including the host processor is powered down.
12. The method of claim 11, further comprising configuring a first register to cause said critical battery state data to be retrieved from said static memory block.
13. The method of claim 12, including configuring a power indicator bit of said first register with a first value to indicate that the configuration values in other fields of said first register are valid.
14. The method of claim 13, including configuring a static memory identifier field with an identifier of said static memory in which said critical battery state data is stored, wherein said other fields comprise said static memory identifier field.
15. The method of claim 13, including configuring a start address field and an end address field using a start address and a last address, respectively, of a memory block in which said critical battery state data is stored.
16. The method of claim 15, including retrieving said critical battery state data based on said start and last addresses and storing said critical battery state data in a frame buffer.
17. The method of claim 16, including configuring a power indicator bit of a second register with a second value to indicate that the configuration values in the other fields of the second register are invalid, wherein the other fields include a field that stores the identifier of the dynamic memory.
18. The method of claim 11, including displaying a battery symbol on said display screen to indicate to the user that said battery is being charged.
19. The method of claim 11, further comprising checking the level of charge on said battery at regular intervals and energizing said substantial portion in response to receiving input from said user.
20. The method of claim 19, including powering at least said host processor, dynamic memory block, and system agent in response to receiving said input from said user.
Indicating Critical Battery Status in a Mobile Device
TECHNICAL FIELD
The present invention is directed to indicating a battery status in a mobile device, and code executed thereon, particularly, but not exclusively, to indicating a critical battery status in the mobile device.
BACKGROUND
Displaying the battery status in the mobile device is an important indication to the user regarding the status of the mobile device. However, current mobile devices follow a normal boot sequence in which the host processor is first powered on. The host processor then comes out of reset and powers up other blocks, such as system agents and dynamic memory (e.g., DRAM), before powering up the display device. The display device can then provide a visual status (such as a symbol of the battery) on the user interface to indicate to the user that the mobile device is being charged. The normal boot sequence may be followed even if the battery is in a critical (no charge or minimum charge) state of charge. The host processor is computationally powerful and consumes considerable power, and the battery may not be able to support the current surges that may occur when the host processor, dynamic memory, system agents, and other such blocks are powered.
However, if the battery is in a critical state of charge (i.e., no or very little power), and if a normal boot sequence is followed, the battery will not be in a condition to support the current surge required to energize the host processor. As a result, the display may not be powered, and without the visual indication of the battery status, the mobile device appears to be dead, even if it is not. In the absence of such an indication, the user may assume that the mobile device is not working or is malfunctioning.
BRIEF DESCRIPTION OF THE DRAWINGS
The various embodiments of the invention described herein are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings. For the sake of simplicity and clarity, the elements shown in the figures are not necessarily drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Moreover, reference labels are reused in different figures, where appropriate, to indicate corresponding or similar elements.
FIG. 1 shows a system on a chip (SoC) 100 that can support techniques for indicating critical battery states in a mobile device, in accordance with one embodiment.
FIG. 2 illustrates a first portion of the SoC 100 that can support techniques for indicating critical battery states in a mobile device while the remaining (or second) portion of the SoC 100 is powered down, in accordance with one embodiment.
FIG. 3 illustrates the signals exchanged between the blocks of the first portion that support techniques for indicating a critical battery state in a mobile device, in accordance with one embodiment.
FIG. 4 is a flow diagram showing the operation of the blocks of the first portion that support indicating critical battery states in a mobile device, in accordance with one embodiment.
FIG. 5 is an example mobile device that can provide a visual indication of critical battery status, in accordance with one embodiment.
FIG. 6 is a computer system that can support techniques for indicating critical battery states in a mobile device, in accordance with one embodiment.
DETAILED DESCRIPTION
Techniques for indicating the status of a critical battery in a mobile device are described below.
In the following, many specific details, such as logic implementations, opcodes, means of specifying operands, resource partitioning or sharing or duplication implementations, types and interrelationships of system components, and logic partitioning or integration choices, are set forth to provide a more thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without such specific details. In other instances, control structures, gate level circuits, and full software instruction sequences have not been shown in detail so as not to obscure the invention. Those skilled in the art, with the included description, will be able to implement suitable functionality without undue experimentation.
References in the specification to "one embodiment", "an embodiment" or "an example embodiment" mean that the described embodiment may include a particular feature, structure or characteristic, but every embodiment may not necessarily include that particular feature, structure or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is contemplated that such a feature, structure, or characteristic may also be effected in connection with other embodiments, whether or not explicitly described.
Embodiments of the invention may be implemented in hardware, software, firmware or any combination thereof. Embodiments of the invention may also be implemented as instructions stored on a machine readable medium that can be read and executed by one or more processors. A machine readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine readable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and electrical, optical, acoustical or other similar forms of signals. Further, firmware, software, routines, and instructions may be described herein as performing certain actions. However, it should be understood that such descriptions are merely for convenience, and that such actions in fact result from computing devices, processors, controllers, and other devices executing the firmware, software, routines, and instructions.
The SoC may include a host processor, a system agent, dynamic memory, static memory, a power management unit, a media processor, a bus controller, an integrated memory controller, and the like. In one embodiment, the SoC may include a microcontroller that causes the critical battery state to be indicated without powering up the host processor. In one embodiment, the microcontroller can determine whether the battery is in a critical state of charge and can initiate a special boot sequence. In one embodiment, the microcontroller can send a power-on signal to the power management unit to energize a much smaller number of blocks than the number of blocks powered in the normal boot sequence. Further, the much smaller number of blocks energized during the special boot sequence can operate with much smaller current surges than those required to operate a host processor and the other blocks that are powered during a normal boot sequence.
In one embodiment, in response to receiving a power-on signal from the microcontroller, the power management unit can power up the following components: for example, the static memory (e.g., SRAM), the display controller, and the bus interfaces provided between the microcontroller and the static memory and the display controller. In one embodiment, the microcontroller can store critical battery status display data in the static memory. In one embodiment, the microcontroller can store configuration values in a configuration register provided in the display controller. In one embodiment, the display controller can retrieve the critical battery state data from the static memory in response to the configuration values stored in the configuration register. In one embodiment, the display controller can present the critical battery status data on the display screen of the mobile device. In one embodiment, the critical battery status data can be displayed in a visual form to indicate the battery status to a user of the mobile device. In one embodiment, the critical battery status data can be displayed as a battery symbol on the display screen of the mobile device. Due to the visual indication provided on the screen, the user of the mobile device can view the battery status without inferring that the mobile device is not working or malfunctioning.
FIG. 1 illustrates an embodiment of a system on chip (SoC) 100 that can support one or more techniques for indicating critical battery states on a screen of a mobile device. In one embodiment, SoC 100 may include: a single or multi-core application processor 110, an interconnect unit 112, an integrated memory controller unit 114, a bus controller unit 116, a media processor 120, an SRAM unit 130, a DRAM unit 132, a controller 135, a system agent 140, a power management unit 150, and a display unit 160.
Processor 110 or 120 may be a general purpose processor, such as a Core™ i3, i5, i7, 2 Duo or Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor provided by Intel Corporation of Santa Clara, California. Alternatively, the processor may be from another company, such as ARM Holdings, Ltd., MIPS, Advanced Micro Devices, and the like. The processor may be a dedicated processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a coprocessor, an embedded processor, and the like. The processor can be implemented on one or more chips. Processor 110 can be part of one or more substrates and/or can be implemented on one or more substrates using any of several processing technologies, such as, for example, BiCMOS, CMOS, or NMOS.
SoC 100 can be used in system designs and configurations known for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, networking equipment, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video gaming devices, set top boxes, microcontrollers, cellular phones, portable media players, handheld devices, and various other electronic devices. In general, various systems or electronic devices capable of including a processor and/or other execution logic as disclosed herein are generally suitable.
In FIG. 1, interconnect unit 112 is coupled to: application processor 110, including a set of one or more cores 102A-N and a shared cache unit 106; system agent unit 140; bus controller unit 116; integrated memory controller unit 114; a set of one or more media processors 120, which may include integrated graphics logic 108, an image processor 124 for providing still and/or video camera functionality, an audio processor 126 for providing hardware audio acceleration, and a video processor 128 for providing video encoding/decoding acceleration; a static random access memory (SRAM) unit 130; a dynamic random access memory (DRAM) unit 132; a display unit 160, which may include one or more display controllers 165 for controlling one or more external displays; and a controller 135. In one embodiment, controller 135 may be a microcontroller that can be designed to consume relatively low power. In one embodiment, even a battery in its critical (or least charged) state of charge can support the power consumption of controller 135.
The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 106, and external memory (not shown) coupled to the set of integrated memory controller units 114. The set of shared cache units 106 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. Although in one embodiment a ring-based interconnect unit 112 interconnects the integrated graphics logic 108, the set of shared cache units 106, and the system agent unit 140, alternative embodiments may use any number of well-known techniques for interconnecting such units. In some embodiments, one or more of the cores 102A-N can be multi-threaded.
Cores 102A-N may be homogeneous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 102A-N may be in-order while others are out-of-order. As another example, two or more of the cores 102A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.
In one embodiment, system agent 140 may include those components that coordinate and operate cores 102A-N. In one embodiment, system agent unit 140 may include, for example, a power control unit (PCU) 150 and display unit 160. PCU 150 may include the logic and components required to manage the power states of cores 102A-N and integrated graphics logic 108. The display unit 160 is for driving one or more externally connected displays. In other embodiments, PCU 150 and display unit 160 may be provided external to system agent 140, as depicted in FIG. 1. In one embodiment, PCU 150 can be coupled to battery 190 and can continuously check the amount of power on battery 190. In one embodiment, PCU 150 may generate a battery indicator to indicate that the amount of power on battery 190 has reached a critical battery level or decreased below the critical battery level. In one embodiment, PCU 150 can power down almost all of the SoC 100. However, in one embodiment, PCU 150 may not de-energize controller 135. In one embodiment, PCU 150 may power up a small portion of SoC 100 (e.g., SRAM unit 130, display unit 160, and interfaces 134 and 136) in response to receiving a request from controller 135.
In one embodiment, PCU 150 can configure display controller 165 or can delegate this task to controller 135.
In one embodiment, the controller 135 can cause the critical battery state to be indicated without energizing most of the SoC 100. In one embodiment, controller 135 can cause the critical battery status to be indicated without powering application processor 110, media processor 120, system agent 140, DRAM unit 132, and other such blocks. In one embodiment, controller 135 can determine whether the battery is in a critical state of charge and can initiate a special boot sequence. In one embodiment, controller 135 can send a power-on signal to power management unit 150 to energize a much smaller number of blocks than the number of blocks powered in the normal boot sequence. In one embodiment, these blocks may include SRAM unit 130, display unit 160, and interfaces such as interfaces 134 and 136. Further, the blocks powered during the special boot sequence can be operated with much smaller current surges than those required to operate the application processor 110, the media processor 120, and the other blocks that are powered during the normal boot sequence.
In one embodiment, the controller 135 can send a request to power management unit 150 to power the SRAM unit 130, the display unit 160, and the bus interfaces 134 and 136 provided between the controller 135, the SRAM unit 130, and the display unit 160. In one embodiment, controller 135 can store the critical battery state display data in SRAM unit 130. In one embodiment, if PCU 150 delegates such tasks to controller 135, controller 135 can then store the configuration values in one or more configuration registers provided in display controller 165. In one embodiment, display controller 165 can retrieve the critical battery state data from SRAM unit 130 in response to the configuration values stored in the configuration registers. In one embodiment, display controller 165 may present the critical battery status data on a display screen of the mobile device. In one embodiment, the critical battery status data can be displayed in a visual form to indicate the battery status to the user of the mobile device. In one embodiment, the critical battery status data can be displayed as a battery symbol on the display screen of the mobile device.
FIG. 2 depicts an embodiment of a block diagram of controller 135, SRAM unit 130, and display unit 160, which can operate together to indicate a critical battery state when the amount of power on the battery is at a minimum level. In one embodiment, controller 135 can include power identifier logic 210 and display driver 215. However, controller 135 may include other units; for the sake of brevity, not all such other units are depicted herein. In one embodiment, the power identifier logic 210 can monitor the status of the battery and can cause an indication to be provided to the user. If the amount of power on the battery drops below a certain level (i.e., the critical battery state level), power control unit 150 can power down a significant number of units within SoC 100.
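As a rough sketch only of the special boot sequence described above (the patent defines no programming interface, so every function name below is a hypothetical placeholder that here merely traces the ordering of the steps):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the SoC blocks; in real firmware these
 * would touch hardware, here they only trace the sequence. */
static bool battery_at_critical_level(void) { return true; }  /* status identifier from PCU 150 */
static void pcu_power_up_first_portion(void)  { puts("power up SRAM 130, display 160, i/f 134/136"); }
static void sram_store_battery_symbol(void)   { puts("copy symbol from memory 216 to blocks 225"); }
static void display_write_config_registers(void) { puts("configure registers 251 and 261"); }
static void display_present_from_sram(void)   { puts("fetch to frame buffer 270 and present"); }

/* Controller 135: indicate the critical battery state without powering
 * the host processor, DRAM, system agent and the other large blocks. */
static void special_boot_sequence(void)
{
    if (!battery_at_critical_level())
        return;                     /* fall back to the normal boot sequence */
    pcu_power_up_first_portion();
    sram_store_battery_symbol();
    display_write_config_registers();
    display_present_from_sram();
}

int main(void)
{
    special_boot_sequence();
    return 0;
}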
In one embodiment, power control unit 150 may use techniques such as voltage and frequency throttling, dynamic voltage and frequency scaling (DVFS), instruction throttling, selective and independent power control of multiple cores, system sleep states, changes in core sleep states, and other such techniques to control the power provided to various portions of the SoC 100.
In one embodiment, a substantial portion of the SoC 100 can be turned off or placed in a sleep state or any other such deep power saving state in response to the amount of power on the battery reaching or falling below a critical amount. However, the controller 135 can remain powered (or may not be powered down) even if the amount of power on the battery reaches or falls below the critical amount. In one embodiment, the power identifier logic 210 can send a request to the PCU 150 to power up the SRAM unit 130, the display unit 160, and the interfaces 134 and 136. In one embodiment, the power identifier logic 210 can receive a response from the PCU 150 after the SRAM unit 130 and the display unit 160 are powered on. In one embodiment, the power identifier logic 210 can transfer battery status data from the memory 216 to the memory blocks 225-A through 225-N provided in the SRAM unit 130. In an alternate embodiment, the power identifier logic 210 can send a first signal to the display driver 215 to perform the transfer of the battery status data. In still another alternate embodiment, the power identifier logic 210 can send a second signal to the SRAM controller 230 to cause the battery status data in the memory 216 to be transferred to the memory blocks 225.
In one embodiment, in response to receiving an indication from the power identifier logic 210, the display driver 215 can cause the battery status data to be transferred from the memory 216 to the memory blocks 225-A through 225-N or a subset of the memory blocks 225. In one embodiment, the battery status data may represent visual data, such as a battery symbol; when presented, the visual data may provide a convenient means for the user to understand the status of the battery. In one embodiment, display driver 215 can configure the configuration registers in display controller 165 of display unit 160. In one embodiment, display driver 215 can configure configuration registers 251 and 261. In one embodiment, display driver 215 can configure the first configuration register 251 using (0, dram_id, strt_addr, end_addr) in the fields PIB 252, DRAM ID 253, STRT ADDR 254, and END ADDR 255, respectively. In addition, the display driver 215 can also configure the second configuration register 261 using (1, sram_id, strt_addr, end_addr) in the fields PIB 262, SRAM ID 263, STRT ADDR 264, and END ADDR 265, respectively. In one embodiment, if PIB 262 is configured with a first value (e.g., 1), the critical battery state data can be read based on the values stored in the SRAM ID 263, STRT ADDR 264, and END ADDR 265 fields. In one embodiment, the configuration values stored in fields 263, 264, and 265 are only valid if PIB 262 is configured with the first value (e.g., 1); if PIB 262 is configured with a second value (e.g., 0), the values in fields 263 to 265 are invalid.
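Purely as an illustration of the field layout just described (the patent names the fields but specifies no bit widths or encodings, so the widths and values below are assumptions), registers 251 and 261 might be modelled as:

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of configuration registers 251 and 261; the bit
 * widths are assumed, since the patent only names the fields. */
typedef struct {
    uint32_t pib       : 1;   /* power indicator bit: 1 = fields below are valid   */
    uint32_t mem_id    : 7;   /* DRAM ID 253 (register 251) or SRAM ID 263 (261)   */
    uint32_t strt_addr : 12;  /* first memory block holding the battery state data */
    uint32_t end_addr  : 12;  /* last memory block holding the battery state data  */
} cfg_reg;

int main(void)
{
    /* (0, dram_id, strt_addr, end_addr): PIB = 0, so the values in the
     * remaining fields of register 251 are treated as invalid. */
    cfg_reg reg251 = { .pib = 0, .mem_id = 0x2, .strt_addr = 0x000, .end_addr = 0x0FF };

    /* (1, sram_id, strt_addr, end_addr): PIB = 1, so the display
     * controller fetches the critical battery state data from SRAM. */
    cfg_reg reg261 = { .pib = 1, .mem_id = 0x1, .strt_addr = 0x010, .end_addr = 0x01F };

    const cfg_reg *r = reg261.pib ? &reg261 : &reg251;
    printf("read blocks 0x%03X..0x%03X from memory %u\n",
           (unsigned)r->strt_addr, (unsigned)r->end_addr, (unsigned)r->mem_id);
    return 0;
}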
In one embodiment, SRAM ID 263 may be configured with an identifier of the static memory, i.e., an identifier of SRAM unit 130; STRT ADDR 264 may be configured with a starting address or identifier (e.g., 225-A) of the memory block from which the critical battery state data may be retrieved; and END ADDR 265 may be configured with an end address or identifier (e.g., 225-Q) of the memory block up to which the critical battery state data is stored. In other embodiments, display driver 215 can provide the configuration values to control unit 250, which in turn can configure the first and second configuration registers 251 and 261. In other embodiments, power control unit 150 may configure configuration registers 251 and 261 in addition to energizing SRAM unit 130 and display unit 160.
In one embodiment, SRAM unit 130 may include one or more memory blocks 225-A through 225-N and SRAM controller 230. In one embodiment, SRAM controller 230 can receive the second signal from power indication logic 210, and in response, SRAM controller 230 can transfer the battery status data to memory blocks 225-A through 225-N or portions thereof. In one embodiment, SRAM controller 230 can send a third signal to control unit 250 to indicate that the battery status data is ready for retrieval.
In one embodiment, display unit 160 can include display controller 165 and frame buffer 270. In one embodiment, display controller 165 can include control unit 250 and the first and second configuration registers 251 and 261. In one embodiment, control unit 250 can transfer the battery status data from memory blocks 225 and store the battery status data in frame buffer 270 in response to receiving a request from SRAM controller 230 or display driver 215. In one embodiment, control unit 250 can receive one or more configuration values from display driver 215, and in response, control unit 250 can configure configuration registers 251 and 261. In one embodiment, control unit 250 can present the battery state data stored in frame buffer 270 on a display device.
FIG. 3 illustrates one embodiment of a line graph 300 showing the signals exchanged between controller 135, PCU 150, SRAM unit 130, and display unit 160. In one embodiment, controller 135 can detect that the amount of power on the battery has reached or fallen below a critical battery level or state; such detection is indicated as event 330. In one embodiment, power control unit 150 may have powered down SoC 100 (or placed it in any other such low power saving state) in response to detecting the critical battery level or state. In one embodiment, controller 135 can send request 335 to power control unit 150. In one embodiment, request 335 may indicate a request to power up only the first portion of SoC 100, while the second portion of SoC 100 (which is relatively large) may remain in the power-down state.
In one embodiment, PCU 150 can power SRAM unit 130 by transmitting a first power-on signal 357. Similarly, PCU 150 can energize display unit 160 by transmitting a second power-on signal 356. In one embodiment, SRAM unit 130 and display unit 160 may transmit acknowledgment signals 375 and 365, respectively, in response to receiving power-on signals 357 and 356. In one embodiment, PCU 150 can send ready signal 355 to controller 135.
Further, in one embodiment, PCU 150 can transmit configuration signal 336-B (dashed line) to display unit 160 to configure configuration registers 251 and 261 provided in display controller 165. In one embodiment, controller 135 (or, more specifically, power identifier logic 210) may store critical battery state data in memory blocks 225-A through 225-N; the transfer of such critical battery state data from memory 216 in controller 135 to memory block 225 is indicated by data transfer signal 337. In other embodiments, the power identifier logic 210 can send a data transfer signal to the SRAM controller 230, which can retrieve critical battery state data from the memory 216 and store such data in the memory block 225. In other embodiments, controller 135 can configure configuration registers 251 and 261 (if PCU 150 delegates the task to controller 135); such configuration activity is illustrated by configuration signal 336-A.

In one embodiment, display controller 165 can send data read signal 367 to SRAM controller 230, which in response can write critical battery data to frame buffer 270. Such data transfer activity is represented by a data write signal 376. In other embodiments, control unit 250 in display controller 165 can retrieve critical battery data and store such data in frame buffer 270. In one embodiment, control unit 250 may display or present such critical battery data on display screen 280, such activity being represented by presentation signal 368.

An embodiment of the operation of the first portion of the SoC 100 (depicted in FIG. 2) indicating the critical battery state on the display screen is shown in the flow chart of FIG. 4. In block 410, the controller 135 can check if the amount of power on the battery 190 has reached a critical battery level. In one embodiment, PCU 150 may generate a status identifier that controller 135 may use to perform other tasks described below. If the charge on battery 190 reaches the critical battery level, then control passes to block 420; otherwise, control passes to block 490.

In block 420, controller 135 can identify a first portion of SoC 100 to be powered (e.g., SRAM unit 130, display unit 160, and interfaces 134 and 136). In block 430, the controller 135 may send the identifiers of the blocks of the first portion of the SoC 100 to the PCU 150 along with a request to power up those blocks. In block 435, controller 135 may check if the blocks of the first portion of SoC 100 are powered; if so, control passes to block 440. In block 440, controller 135 may store the critical battery data in a static memory such as memory block 225 of SRAM unit 130. In block 450, controller 135 can configure configuration registers such as registers 251 and 261 using the configuration values described above.

In block 460, display controller 165 can retrieve critical battery data from SRAM unit 130 and store such data into frame buffer 270. In block 470, display controller 165 can present the critical battery data on the display screen based on the critical battery data retrieved from the static memory. In one embodiment, the visual symbol can indicate a battery state of charge. In one embodiment, the visual symbol can be a battery symbol 550, as displayed on the screen of the mobile device 500 depicted in FIG. 5. In block 475, controller 135 can check if the amount of power on the battery exceeds the critical battery level.
If the amount of power on the battery exceeds the critical battery level, control passes to block 480; otherwise, control returns to block 460. In block 480, power control unit 150 may determine whether a normal boot sequence can be resumed; if a normal boot sequence is to be resumed, control passes to block 490, otherwise, control returns to block 460.

FIG. 6 illustrates a system or platform 600 that implements the methods disclosed herein in accordance with an embodiment of the present invention. System 600 includes, but is not limited to, a desktop computer, tablet, laptop, netbook, personal digital assistant (PDA), server, workstation, cellular telephone, mobile computing device, smart phone, Internet appliance, or any other type of computing device. In another embodiment, system 600 for implementing the methods disclosed herein can be a system on a chip (SOC) system.

Processor 610 has a processing core 612 that executes instructions of system 600. Processing core 612 includes, but is not limited to, fetch logic for fetching instructions, decode logic for decoding instructions, execution logic for executing instructions, and the like. Processor 610 has a cache 616 for caching instructions and/or data of system 600. In another embodiment of the invention, cache 616 includes, but is not limited to, primary, secondary, and tertiary caches or any other configuration of caches within processor 610. In one embodiment of the invention, processor 610 has a central power control unit (PCU) 613.

The memory control hub (MCH) 614 performs the function of enabling the processor 610 to access and communicate with the memory 630, which includes the volatile memory 632 and/or the non-volatile memory 634. Volatile memory 632 includes, but is not limited to, synchronous dynamic random access memory (SDRAM), dynamic random access memory (DRAM), RAMBUS dynamic random access memory (RDRAM), and/or any other type of random access memory. Non-volatile memory 634 includes, but is not limited to, NAND flash, phase change memory (PCM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), or any other type of non-volatile memory device.

Memory 630 stores information and instructions to be executed by processor 610. Memory 630 can also store temporary variables or other intermediate information while processor 610 is executing instructions. Chipset 620 is coupled to processor 610 via point-to-point (PtP) interfaces 617 and 622. Chipset 620 enables processor 610 to connect to other modules in system 600. In another embodiment of the invention, chipset 620 is a platform controller hub (PCH). In one embodiment of the invention, interfaces 617 and 622 operate in accordance with a PtP communication protocol such as the Intel QuickPath Interconnect (QPI) or the like. Chipset 620 is coupled to a GPU or display device 640, including, but not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT) display, or any other form of visual display device. In another embodiment of the invention, GPU 640 is not coupled to chipset 620 and is part of processor 610 (not shown).

In addition, chipset 620 is also coupled to one or more buses 650 and 660 that interconnect various modules 674, 680, 682, 684, and 686. If there is a mismatch in bus speed or communication protocol, buses 650 and 660 can be interconnected by bus bridge 672.
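Returning briefly to the flow chart of FIG. 4, the following hypothetical Python sketch simulates the control loop of blocks 410 through 490. The critical threshold, the printed actions, and the charge readings are illustrative assumptions only, not values from the embodiment:

    CRITICAL_LEVEL = 5  # percent; illustrative threshold

    def critical_battery_flow(charge_readings):
        readings = iter(charge_readings)
        if next(readings, 0) > CRITICAL_LEVEL:                    # block 410
            return "normal operation"                             # block 490
        print("power up SRAM unit, display unit, interfaces")     # blocks 420-435
        print("store critical battery data in SRAM")              # block 440
        print("configure registers 251 and 261")                  # block 450
        for charge in readings:
            print("fetch battery data into frame buffer")         # block 460
            print("present battery symbol")                       # block 470
            if charge > CRITICAL_LEVEL:                           # block 475
                return "resume normal boot"                       # blocks 480-490
        return "still critical"

    # Simulates a battery that is charged back above the critical level.
    print(critical_battery_flow([3, 4, 6]))  # -> resume normal boot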
Chipset 620 is coupled to, but is not limited to, non-volatile memory 680, mass storage device 682, keyboard/mouse 684, and network interface 686. Mass storage device 682 includes, but is not limited to, a solid state drive, a hard drive, a universal serial bus flash drive, or any other form of computer data storage medium. Network interface 686 is implemented using any type of known network interface standard, including, but not limited to, an Ethernet interface, a universal serial bus (USB) interface, a Peripheral Component Interconnect (PCI) Express interface, a wireless interface, and/or any other suitable type of interface. The wireless interface operates in accordance with, but is not limited to, the IEEE 802.11 standard and its associated family, HomePlug AV (HPAV), Ultra-Wideband (UWB), Bluetooth, WiMax, or any form of wireless communication protocol.

Although the modules shown in FIG. 6 are depicted as separate blocks within system 600, the functions performed by some of these blocks may be integrated within a single semiconductor circuit or may be implemented using two or more separate integrated circuits. In another embodiment of the invention, system 600 can include more than one processor/processing core.

The methods disclosed herein can be implemented in hardware, software, firmware, or any combination thereof. Although examples of various embodiments of the disclosed subject matter are described, those of ordinary skill in the relevant art will readily appreciate that many other methods of implementing the disclosed subject matter may alternatively be employed. In the foregoing description, various aspects of the disclosed subject matter are described. For the sake of explanation, specific numbers, systems, and configurations are set forth to provide a comprehensive understanding of the subject matter. However, it will be apparent to those skilled in the art that the subject matter can be practiced without these specific details. In other instances, well-known features, components, or modules are omitted, simplified, combined, or separated so as not to obscure the disclosed subject matter.

The term "operable" as used herein means that a device, system, protocol, etc., is able to operate, or is adapted to operate, for its desired functionality when the device or system is in a powered-down state. Embodiments of the disclosed subject matter can be implemented in hardware, firmware, software, or a combination thereof, and can be described with reference to, or in conjunction with, program code such as instructions, functions, procedures, data structures, logic, application programs, or design representations or formats for simulation, emulation, and fabrication of a design, which, when accessed by a machine, causes the machine to perform tasks, define abstract data types or low-level hardware contexts, or produce results.

The techniques illustrated by the figures may be implemented using code and data stored and executed on one or more computing devices, such as a general purpose computer or computing device. Such computing devices store and communicate code and data (internally and with other computing devices over a network) using machine readable media, such as machine readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase change memory) and machine readable communication media (e.g., electrical, optical, acoustical, or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc.).
Although the disclosed subject matter has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting manner. Various modifications of the illustrative embodiments, as well as other embodiments of the subject matter, which are apparent to persons skilled in the art to which the disclosed subject matter pertains, are deemed to lie within the scope of the disclosed subject matter.

Certain features of the invention have been described with reference to example embodiments. However, the description is not intended to be construed in a limiting manner. Various modifications of the example embodiments, as well as other embodiments of the invention, which are apparent to persons skilled in the art to which the invention pertains, are deemed to lie within the spirit and scope of the invention.
Apparatus and methods for managing time sensitive application privileges on a wireless device include a computer platform operable to execute an application having a time sensitivity requirement. A time retrieval service resident on the computer platform is operable to retrieve a date/time result, which may be associated with a confidence factor. A date/time determination module resident on the computer platform is operable to determine whether or not to execute the application based on the date/time result and/or based on the confidence factor. Corresponding methods and computer readable media are also included.
CLAIMS What is claimed is: 1. A method of managing time sensitive application privileges on a wireless device, comprising: receiving a request to execute an application having a time sensitivity requirement; retrieving a date/time result; and determining execution of the application based on the date/time result. 2. The method of claim 1, further comprising associating a corresponding confidence factor with the date/time result. 3. The method of claim 2, where the time sensitivity requirement further comprises a confidence requirement, and further comprising executing the application if the confidence factor achieves the confidence requirement. 4. The method of claim 3, further comprising selecting one time retrieval service from a plurality of time retrieval services that each provide a corresponding one of a plurality of date/time results and a corresponding one of a plurality of confidence factors such that the confidence factor of the selected time retrieval service achieves the confidence requirement. 5. The method of claim 1, where the act of retrieving at least one date/time result further comprises at least one of receiving the date/time result from a time system across a wireless network, and receiving the date/time result from a manual user input into the wireless device. 6. The method of claim 5, where the time system is selected from the group consisting of a global positioning system, an assisted global positioning system, a trusted time server, and a wireless network carrier server. 7. The method of claim 1, where the act of retrieving at least one date/time result further comprises retrieving a plurality of date/time results, where each of the plurality of date/time results has a corresponding one of a plurality of confidence factors, and further comprising selecting one of the plurality of date/time results based on the corresponding one of the plurality of confidence factors. 8. The method of claim 1, where the act of retrieving at least one date/time result further comprises selecting one of a plurality of date/time services to provide the date/time result. 9. The method of claim 8, where the act of selecting one of the plurality of date/time services is based on a corresponding one of a plurality of fetch parameters. 10. The method of claim 9, where the plurality of fetch parameters are selected from the group consisting of a confidence factor, a retrieval duration, a retrieval cost, and a retrieval performance. 11. The method of claim 9, where the act of selecting one of the plurality of date/time services further comprises receiving a user input of a selected date/time service. 12. The method of claim 2, where the act of associating the corresponding confidence factor with the date/time result further comprises determining the confidence factor based on a source of the date/time result. 13. The method of claim 2, where the act of associating the corresponding confidence factor with the date/time result further comprises receiving the confidence factor from a source of the date/time result. 14. The method of claim 1, further comprising generating a user prompt asking a user to input a selection of a time retrieval service, the user prompt comprising a plurality of time retrieval services and at least one fetch parameter corresponding to each of the plurality of time retrieval services, the fetch parameter selected from the group consisting of a confidence factor, a retrieval duration, a retrieval cost, and a retrieval performance. 15. The method of claim
1, where the application has a plurality of functional modes each corresponding to one of a plurality of confidence requirements, and further comprising executing the respective one of the plurality of functional modes based on a match between a confidence factor associated with the date/time result and a respective one of the plurality of confidence requirements. 16. The method of claim 1, where the act of retrieving at least one date/time result further comprises receiving the date/time result from a time service across a wireless network, where the date/time result further comprises a guarantee of authenticity. 17. The method of claim 1, wherein retrieving the date/time result comprises retrieving no date/time result, and further comprising associating a predetermined confidence factor with at least one of a retrieved date/time result and no date/time result. 18. A computer-readable medium embodying means for managing time sensitive application privileges on a wireless device, comprising: at least one sequence of instructions, wherein execution of the instructions by a processor causes the processor to perform the acts of: receiving a request to execute an application having a time sensitivity requirement; retrieving a date/time result; associating a corresponding confidence factor with the date/time result; and determining execution of the application based on the confidence factor. 19. A wireless device, comprising: a means for receiving a request to execute an application having a time sensitivity requirement; a means for retrieving a date/time result; a means for associating a corresponding confidence factor with the date/time result; and a means for determining execution of the application based on the confidence factor. 20. A wireless device, comprising: a computer platform, operable to execute an application having a time sensitivity requirement; a time retrieval service resident on the computer platform and operable to retrieve a date/time result; and a date/time determination module resident on the computer platform and operable to determine an execution of the application based on the date/time result. 21. The device of claim 20, wherein the date/time result comprises a confidence factor, and wherein the date/time determination module is further operable to determine the execution of the application based on the confidence factor. 22. The device of claim 21, where the time sensitivity requirement further comprises a confidence requirement, and further comprising the computer platform executing the application if the confidence factor achieves the confidence requirement. 23. The device of claim 22, further comprising a plurality of time retrieval services resident on the computer platform, and each operable to provide a corresponding one of a plurality of date/time results and a corresponding one of a plurality of confidence factors, wherein the date/time determination module is operable to select one time retrieval service from the plurality of time retrieval services such that the confidence factor of the selected time retrieval service achieves the confidence requirement. 24. The device of claim 21, wherein the confidence factor associated with the date/time result is received from a source of the date/time result. 25.
The device of claim 21, wherein the application has a plurality of functional modes each corresponding to one of a plurality of confidence requirements, and wherein the date/time determination module is further operable to execute the respective one of the plurality of functional modes based on a match between the confidence factor associated with the date/time result and a respective one of the plurality of confidence requirements. 26. The device of claim 20, wherein the time retrieval service is operable to retrieve the date/time result in response to an attempted execution of the application. 27. The device of claim 20, wherein the date/time determination module is operable to select one time retrieval service based on at least one of an application requirement, a carrier network requirement, a manual user input, and at least one fetch parameter. 28. The device of claim 20, wherein the fetch parameter is selected from the group consisting of a confidence factor, a retrieval duration, a retrieval cost, and a retrieval performance. 29. The device of claim 20, wherein the time retrieval service is operable to retrieve the date/time result from at least one of a time system across a wireless network on which the wireless device is operable and from a manual user input into the wireless device. 30. The device of claim 29, wherein the time system is selected from the group consisting of a global positioning system, an assisted global positioning system, a trusted time server, and a wireless network carrier server. 31. The device of claim 20, wherein the date/time determination module is operable to associate a confidence factor with the date/time result based on a source of the date/time result. 32. The device of claim 20, wherein the date/time determination module is further operable to generate a user prompt asking a user to input a selection of a time retrieval service, the user prompt comprising a plurality of time retrieval services and at least one fetch parameter corresponding to each of the plurality of time retrieval services, the fetch parameter selected from the group consisting of a confidence factor, a retrieval duration, a retrieval cost, and a retrieval performance. 33. The device of claim 20, wherein the date/time determination module is further operable to receive the date/time result from a time service across a wireless network on which the wireless device is operable, wherein the date/time result further comprises a guarantee of authenticity. 34. The device of claim 20, wherein the date/time result comprises at least one of a retrieved date/time result and no date/time result, and wherein the date/time determination module is operable to associate a predetermined confidence factor with at least one of the retrieved date/time result and no date/time result.
APPARATUS AND METHODS FOR MANAGING TIME SENSITIVE APPLICATION PRIVILEGES ON A WIRELESS DEVICE

FIELD OF THE INVENTION

The described embodiments generally relate to wireless communications devices and computer networks. More particularly, the described embodiments relate to apparatus and methods of managing time sensitive application privileges on a wireless device.

BACKGROUND

Wireless devices, such as cellular telephones, communicate packets including voice and data over a wireless network. Wireless devices are being manufactured with increased computing capabilities and are becoming tantamount to personal computers. These "smart" wireless devices, such as cellular telephones, have application programming interfaces ("APIs") installed onto their local computer platform that allow software developers to create software applications that operate on the cellular telephone. The API sits between the wireless device system software and the software application, making the cellular telephone functionality available to the application without requiring the software developer to have the specific cellular telephone system source code.

The software applications can come pre-loaded at the time the cellular telephone is manufactured, or the user may later request that additional programs be downloaded over cellular telecommunication carrier networks, where the programs are executable on the wireless telephone. As a result, users of wireless telephones can customize their cellular telephones with programs, such as games, printed media, stock updates, news, or any other type of information or program available for download through the wireless network. Each of these software applications normally requires a license for the user to legally use the software on the wireless device. [0004] If a license is meant to limit the use of the software application to a finite duration, such as a specific number of days of use, then once the license expires, a user of the wireless device must typically either download a new license to incorporate into the software application, or reinstall the entire software application if further use of the application is desired. The wireless device API normally checks the software either at the time execution is requested or at some other period to determine if the software is licensed for use on the platform. If the license has expired, then the wireless device will not execute the unlicensed software application. Thus, these types of licensing schemes rely on the wireless device having an accurate date/time setting in order to determine whether or not a license has expired.

In some networks, however, the date/time in the wireless device may not be established in a trustworthy fashion, thereby making time-based licensing decisions difficult or impossible. For example, some communication systems/protocols, such as GSM (Global System for Mobile Communications), TDMA (Time Division Multiple Access) and UMTS (Universal Mobile Telecommunications System), do not require time to be synchronized between the wireless device and the wireless network. As such, the date/time setting on the wireless device may be a setting input by a user of the device, or may be obtained from some other time service, such as NTP (Network Time Protocol).
In any case, the API, or other logic on the wireless device responsible for determining the expiration of a time-based license, cannot verify the authenticity of the time/date setting on the wireless device in these systems. Accordingly, it would be advantageous to provide a system that enables reliable time-based licensing decisions to be made on wireless devices operating on wireless networks that do not require time synchronization with the wireless device.

SUMMARY

To address one or more of the drawbacks of the prior art, the disclosed embodiments provide apparatus and methods for managing time sensitive application privileges on a wireless device. In one embodiment, the disclosed apparatus and methods determine whether or not to execute a time-sensitive application based on a given date/time result. In another embodiment, the disclosed apparatus and methods determine which one of a plurality of functional modes of an application to execute based on a given date/time result.

In one embodiment, a method of managing time sensitive application privileges on a wireless device comprises receiving a request to execute an application having a time sensitivity requirement and retrieving a date/time result. The method further includes determining execution of the application based on the date/time result. [0009] In another embodiment, a computer-readable medium embodying means for managing time sensitive application privileges on a wireless device comprises at least one sequence of instructions, wherein execution of the instructions by a processor causes the processor to perform the acts of receiving a request to execute an application having a time sensitivity requirement and retrieving a date/time result. The acts further include associating a corresponding confidence factor with the date/time result, and determining execution of the application based on the confidence factor. [0010] In yet another embodiment, a wireless device comprises a means for receiving a request to execute an application having a time sensitivity requirement and a means for retrieving a date/time result. Further, the wireless device includes a means for associating a corresponding confidence factor with the date/time result, and a means for determining execution of the application based on the confidence factor. [0011] In still another embodiment, a wireless device comprises a computer platform operable to execute an application having a time sensitivity requirement. The wireless device further includes a time retrieval service resident on the computer platform and operable to retrieve a date/time result. Additionally, a date/time determination module is resident on the computer platform and is operable to determine an execution of the application based on the date/time result.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed embodiments will hereinafter be described in conjunction with the appended drawings provided to illustrate and not to limit the disclosed embodiments, wherein like designations denote like elements, and in which:

Fig. 1 is a schematic diagram of one embodiment of a system for managing time-sensitive licensing privileges on wireless devices;

Fig. 2 is a schematic diagram of one embodiment of the wireless device of Fig. 1;

Fig. 3 is a schematic diagram of one embodiment of a cellular telephone embodiment of the system of Fig. 1;

Fig. 4 is a flow diagram of one embodiment of a method of managing time-sensitive licensing privileges on a wireless device; and
Fig. 5 is a schematic diagram of one embodiment of the message flow sequence between various components of the system of Fig. 1.

DETAILED DESCRIPTION

The described embodiments include apparatus, methods and computer readable media for the management of time-based licensing privileges on a wireless device. These apparatus, methods and computer readable media provide a wireless device with logic that enables the wireless device to determine an authenticity and/or level of confidence associated with a given date/time result. In turn, this logic provides the wireless device with the ability to make decisions regarding the expiration of a time-sensitive license for an application executable on the wireless device. In addition, or alternatively, the logic provides the wireless device with the ability to choose between functional modes of an application based on either the existence of a given date/time result or on a level of confidence associated with the given date/time result. The logic may be accessed at any time, such as when an application is initially executed or after the initial execution, for example, when an application reaches an operation, or mode, requiring a date/time result. Thus, the described embodiments advantageously provide apparatus, methods and computer readable media that allow for managing license privileges on wireless devices, especially on wireless networks that do not require time to be synchronized between the network and the device.

Referring to Figs. 1-2, one embodiment of an application license management system 10 comprises a plurality of wireless devices 12, 14, 16, 18, 20 each having a computer platform 22 operable to store and execute a time-sensitive licensed application 24. Licensed application 24 is associated with a licensing configuration 26 that includes a license 28 having a time sensitivity requirement 30. For example, time sensitivity requirement 30 includes a time requirement 32 (Fig. 2), such as a time period or an expiration date and/or time, that indicates when the respective wireless device is licensed to execute application 24. Further, time sensitivity requirement 30 includes a confidence requirement 34, such as a minimum confidence factor or level/type of authentication, that indicates a threshold of trustworthiness associated with time requirement 32. As such, time sensitivity requirement 30 defines one or more levels of date/time authenticity and/or trustworthiness required to execute one or more modes 36, or levels of functionality, of licensed application 24. Additionally, each of the plurality of wireless devices 12, 14, 16, 18, 20 includes a date/time determination module 38 operable to retrieve a date/time result 40 having a confidence factor 42 indicating the accuracy, authenticity and/or trustworthiness of date/time result 40. For example, confidence factor 42 may be based upon a source of date/time result 40, where a trusted and/or authenticated source is associated with a higher confidence factor than an untrusted and/or unknown and/or unauthenticated source. Alternatively, confidence factor 42 may be based on the existence or absence of a date/time result 40 in the respective wireless device 12, 14, 16, 18, 20. For example, the existence of date/time result 40 may imply or be associated with a predetermined confidence factor, such as a 100% confidence, whereas the absence of date/time result 40 may imply a different predetermined confidence factor, such as 0%.
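The following is a minimal, hypothetical Python sketch of this comparison: a license is honored only when the retrieved date/time result satisfies the time requirement and its confidence factor meets the confidence requirement. The function name, the numeric confidence scale, and the example values are illustrative assumptions:

    from datetime import datetime

    def may_execute(date_time_result, confidence_factor,
                    license_expiry, min_confidence):
        """True when the license is unexpired and the time source is trusted."""
        if date_time_result is None:
            # Absence of a result can imply a predetermined confidence
            # (e.g., 0%), which fails any positive confidence requirement.
            return False
        within_time = date_time_result <= license_expiry  # time requirement 32
        trusted = confidence_factor >= min_confidence     # confidence requirement 34
        return within_time and trusted

    expiry = datetime(2007, 12, 31)
    print(may_execute(datetime(2007, 6, 1), 0.9, expiry, 0.8))  # True
    print(may_execute(datetime(2007, 6, 1), 0.5, expiry, 0.8))  # False: untrusted
    print(may_execute(None, 0.0, expiry, 0.8))                  # False: no result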
In one embodiment, the source of date/time result 40 may include one or more remote time systems 44 in communication with the respective wireless device 12, 14, 16, 18, 20 across a wireless network 46, and a local time system 45 maintained on the respective wireless device. Thus, date/time determination module 38 is operable to compare date/time result 40 and confidence factor 42 with the corresponding time sensitivity requirement 30 in order to determine whether or not a license 28 for application 24 is in effect or is expired, and to determine whether or not to execute one or more modes 36 of licensed application 24. [0020] The wireless devices can include any type of computerized, wireless devices, such as cellular telephone 12, personal digital assistant 14, laptop computer 16, two-way text pager 18, and even a separate computer platform 20 that has a wireless communication portal, and which also may have a wired connection 48 to a network or the Internet. The wireless device can be a remote-slave, or other device that does not have an end-user thereof but simply communicates data across the wireless network 46, such as a remote sensor, a diagnostic tool, a data relay, and the like. Thus, the apparatus, methods and computer readable media for the management of time-based licensing privileges on a wireless device can accordingly be performed on any form of wireless device or computer module including a wired or wireless communication portal, including without limitation, wireless modems, PCMCIA cards, access terminals, personal computers, telephones, asset tags, telemetry modules or any combination or sub-combination thereof.

Additionally referring to Figs. 1-3, each wireless device 12, 14, 16, 18, 20, such as cellular telephone 12 in this case, has computer platform 22 that can transmit data across wireless network 46, and that can receive and execute software applications and display data transmitted from another computer device connected to wireless network 46. Computer platform 22 also includes an application-specific integrated circuit ("ASIC") 50, or other chipset, processor, logic circuit, or other data processing device. ASIC 50 or other processor may execute an application programming interface ("API") layer 52 that interfaces with any resident programs, such as licensed application 24, in a memory 54 of the wireless device. API 52 is a runtime environment executing on the respective wireless device 12, 14, 16, 18, 20. One such runtime environment is Binary Runtime Environment for Wireless® (BREW®) software developed by Qualcomm, Inc., of San Diego, California. Other runtime environments may be utilized that, for example, operate to control the execution of applications on wireless computing devices. Memory 54 may include read-only and/or random-access memory (RAM and ROM), EPROM, EEPROM, flash cards, or any memory common to computer platforms. Computer platform 22 also includes a local database 56 that can hold the software applications, files, or data not actively used in memory 54. Local database 56 typically includes one or more flash memory cells, but can be any secondary or tertiary storage device, such as magnetic media, EPROM, EEPROM, optical media, tape, or soft or hard disk. Additionally, local database 56 can ultimately hold a resident copy of licensed application 24.
Further, computer platform 22 includes a communications module 58 that enables data communications between the various components of the respective wireless device 12, 14, 16, 18, 20, as well as providing for data communications between the respective wireless device and wireless network 46 and other computer devices connected to the wireless network.

In one embodiment, memory 54 includes licensed application 24 and its corresponding licensing configuration 26. Licensed application 24 may be any hardware, software/programs, firmware, logic and/or instructions executable by ASIC/processing engine 50 to perform some function on the respective wireless device. For example, licensed application 24 may include games, printed media, stock updates, news, word processing programs, data processing programs, graphics-related programs, media-related programs, communications-related programs, browser programs, or any other type of information or program operable on the respective wireless device. In one embodiment, referring to Fig. 1, application 24 is a software program received by the respective wireless device 12, 14, 16, 18, 20 from an application download server 60 located across wireless network 46.

As noted above, licensing configuration 26 includes the respective license 28 corresponding to application 24, as well as the associated time sensitivity requirements 30. License 28 can be copied to the respective wireless device from server 60 with application 24, or license 28 can be created on the respective wireless device as a file, key, or other resident object. Further, due to its limited timeframe, license 28 is associated with time sensitivity requirement 30, which includes time requirement 32 and confidence requirement 34. As noted above, time requirement 32 includes a time period or an expiration date and/or time that indicates when the respective license 28 expires. Also, as noted above, confidence requirement 34 includes a minimum threshold of trustworthiness associated with time requirement 32. It should be noted that, in some embodiments, application 24 may have a plurality of application modes 36 corresponding to varying levels of operational functionality. In this case, there may be a corresponding plurality of time sensitivity requirements 30 each having differing levels of time requirements 32 and/or confidence requirements 34. For example, a basic application mode may provide for basic operational functionality of application 24, while an advanced application mode may provide for additional operational functionality. In turn, the basic application mode may correspond to a first set of time sensitivity requirements, while the advanced application mode may correspond to a second set of time sensitivity requirements. For example, the first set of time sensitivity requirements may allow licensed execution of the application for a longer period of time and/or may require a lower level of confidence in a date/time result than the second set of time sensitivity requirements. In this case, the selected one of the plurality of application modes 36 may be launched upon an initial execution of application 24 based on date/time result 40 and/or confidence factor 42. Alternatively, the selected one of the plurality of application modes 36 may be invoked based on date/time result 40 and/or confidence factor 42 retrieved after the initial execution of application 24, for example, when application 24 reaches an operation having time sensitivity requirement 30.
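As a hypothetical Python sketch of this tiered-mode behavior, the mode names and confidence thresholds below are illustrative assumptions; the embodiments do not prescribe particular values:

    # Modes 36 ordered from most to least demanding confidence requirement 34.
    MODES = [
        ("advanced", 0.9),  # second set: higher confidence required
        ("basic", 0.5),     # first set: lower confidence suffices
    ]

    def select_mode(confidence_factor):
        """Pick the most capable functional mode the confidence factor unlocks."""
        for mode, required in MODES:
            if confidence_factor >= required:
                return mode
        return None  # no mode may execute at this level of trust

    print(select_mode(0.95))  # advanced
    print(select_mode(0.60))  # basic
    print(select_mode(0.10))  # None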
[0024] Additionally, licensing configuration 26 may also include user-defined parameters 62 and/or third party-defined parameters 64 that govern how date/time determination module 38 retrieves date/time result 40. User-defined parameters 62 include predetermined or real-time settings input by a user of the respective wireless device, while third party-defined parameters 64 include predetermined settings established by some third party, such as a wireless network carrier 66 or application download server 60, having some control over portions of the respective device and/or application 24. For example, both user-defined parameters 62 and third party-defined parameters 64 may define one or more of a plurality of fetch parameters 68 that dictate how date/time determination module 38 retrieves a given date/time result 40. [0025] In one embodiment, the plurality of fetch parameters 68 available for use by date/time determination module 38 include one or more of a confidence requirement parameter 34, a retrieval duration parameter 70, a retrieval cost parameter 72, a retrieval performance parameter 74, and additional fetch instructions 76. As noted above, confidence requirement parameter 34 includes a predetermined confidence factor 42, such as a minimum confidence factor, required to be associated with a given date/time result 40. Retrieval duration parameter 70 includes a predetermined time, such as a maximum time, to utilize in retrieving a given date/time result 40. Retrieval cost parameter 72 includes a predetermined monetary cost, such as a maximum cost, associated with retrieving a given date/time result 40. Retrieval performance parameter 74 includes a predetermined performance level, such as a minimum performance level, associated with the operational capabilities of the respective wireless device during the retrieval of a given date/time result 40. Additional fetch instructions 76 include any other parameters or guidelines to be followed by date/time determination module 38 in retrieving and/or determining a given date/time result 40.

For example, for a user wishing to minimize the delay prior to execution of the application, user-defined parameters 62 may be configured to direct date/time determination module 38 to retrieve a date/time result 40 associated with the fastest responding source. In another example, for a user worried about cost, user-defined parameters 62 may be configured to direct date/time determination module 38 to retrieve a date/time result 40 associated with the least cost source, or with a source that will not exceed a given maximum cost. In yet another example, since some of the sources of date/time result 40 may require more wireless device processing power than other sources, for a user concerned about maintaining a given level of wireless device operational performance, user-defined parameters 62 may be configured to direct date/time determination module 38 to retrieve a date/time result 40 associated with the least processor-intensive source. Similarly, for a third party wishing to guarantee a minimum confidence level, third party-defined parameters 64 may be configured to direct date/time determination module 38 to retrieve a date/time result 40 associated with a source achieving the defined minimum confidence requirement.
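One plausible way to represent such a fetch-parameter configuration in code is sketched below in Python. The field names, the precedence rule (third-party settings filling in anything the user left unset), and the example values are all illustrative assumptions; the embodiments do not specify how user-defined and third party-defined parameters are combined:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FetchParameters:
        min_confidence: Optional[float] = None   # confidence requirement 34
        max_duration_s: Optional[float] = None   # retrieval duration 70
        max_cost: Optional[float] = None         # retrieval cost 72
        min_performance: Optional[float] = None  # retrieval performance 74

    def merge(user: FetchParameters, third_party: FetchParameters):
        """Combine settings; third-party values fill in unset user values."""
        merged = FetchParameters()
        for field in vars(merged):
            value = getattr(user, field)
            if value is None:
                value = getattr(third_party, field)
            setattr(merged, field, value)
        return merged

    # A cost-conscious user combined with a carrier-imposed confidence floor.
    policy = merge(FetchParameters(max_cost=0.0),
                   FetchParameters(min_confidence=0.8))
    print(policy)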
It should be understood that the above examples are not to be construed as limiting, and that user-defined parameters 62 and third party-defined parameters 64 may include any combination of the plurality of fetch parameters 68, and such combinations may vary depending on the given situation.

Further, memory 54 may include a user/device identification ("ID") 78 that provides a unique and/or authenticatable identifier and/or description associated with the respective wireless device and/or the user of the respective device. Examples of ID 78 include a mobile identification number ("MIN"), a phone number, a user name, a social security number, an Internet Protocol ("IP") address, a subscriber identity module ("SIM"), a security identification module, any other type of tracking mechanism, and any combination thereof.

In one embodiment, some portion of third party-defined parameters 64 and/or time sensitivity requirements 30 may vary based on ID 78. For example, if ID 78 is associated with a user having a high dollar account with carrier 66, the carrier may set third party-defined parameters 64 in a manner to associate an increased confidence factor 42 with a user-input date/time result 40 as compared to an ID 78 associated with a low dollar account. It should be understood that this is but one, non-limiting example and many other schemes may be utilized, depending on the given situation, whereby some portion of third party-defined parameters 64 and/or time sensitivity requirements 30 may vary based on ID 78.

Additionally, ASIC/processing engine 50 includes various processing subsystems 80 embodied in hardware, firmware, software, and combinations thereof, that enable the functionality of the respective wireless device 12, 14, 16, 18, 20 and the operability of the respective device on wireless network 46, such as for exchanging data/communications with other devices. For example, processing subsystems 80 may include one or any combination of subsystems such as: sound, non-volatile memory, file system, transmit, receive, searcher, physical layer, link layer, call processing layer, main control, remote procedure, handset, power management, diagnostic, digital signal processor, vocoder, messaging, call manager, Bluetooth®, Bluetooth® Location Position ("LPOS"), position determination, position engine, user interface, sleep, data services, security, authentication, USIM/SIM, voice services, graphics, universal serial bus ("USB"), camera/camcorder interface and associated display drivers, multimedia such as moving picture experts group ("MPEG") standard, general packet radio service ("GPRS") standard, etc. [0030] In one embodiment, processing subsystems 80 include one or more time retrieval services 82 operable through communications module 58 to fetch a date/time result 40 from a remote or local source of time. For example, each time retrieval service 82 may be associated with a respective one of remotely-located time systems 44 or local time system 45. Remotely-located time systems 44 include, for example, a carrier-based time system 84, a trusted time server time system 86 and an unauthenticated time system 88. Carrier-based time system 84 may be, for example, a time server associated with carrier 66, such as the wireless network carrier that provides the respective wireless device 12, 14, 16, 18, 20 with access to all or portions of wireless network 46 for voice and/or data communications.
Trusted time server time system 86 includes, for example, a time system associated with a trusted or authenticated third party. For example, trusted time server time system 86 may be a time source approved by the provider of the given application 24 as being a trustworthy provider of time. Unauthenticated time system 88 includes, for example, a source of time that is unapproved, unverified and/or unauthenticated, and therefore may have a lower level of trustworthiness when compared to carrier-based time system 84 and trusted time server time system 86. Local time system 45 may include, for example, a time system set by a user input and maintained in memory 54 on the respective wireless device. Further, each of carrier-based time system 84, trusted time server time system 86 and unauthenticated time system 88 may provide a certificate 90, along with a date/time result 40, to indicate the source and/or authenticity of the time. For example, certificate 90 may include a digital signature, a hash, etc.

Additionally, as mentioned above, associated with each respective time retrieval service 82 is a date/time result 40 and a corresponding confidence factor 42. It should be noted that confidence factor 42 may be associated with the respective date/time result 40 by date/time determination module 38, by time retrieval service 82, or by another processing subsystem 80, for example, based on the respective time system 44 or 45 supplying the respective date/time result or based on an associated certificate 90 indicating the authenticity and/or trustworthiness of the date/time result. Alternatively, confidence factor 42 may be supplied by the respective time system 44 or 45. Further, each respective time retrieval service 82 may include a retrieval duration factor 92, a retrieval cost factor 94 and a retrieval performance factor 96. Retrieval duration factor 92 comprises an actual or estimated time required to retrieve the corresponding date/time result 40. Retrieval cost factor 94 comprises an actual or estimated cost required to retrieve the corresponding date/time result 40. Retrieval performance factor 96 comprises an actual or estimated effect on the data processing performance of the respective wireless device during retrieval of the corresponding date/time result 40.

Date/time determination module 38, which may be any combination of hardware, software, firmware and executable logic, includes a comparator 98 operable to match a given configuration of fetch parameters 68 with each of the factors 42, 92, 94, 96 of the respective time retrieval services 82 in order to select one or more services and retrieve one or more date/time results 40. Alternatively, date/time determination module 38 may present the various factors 42, 92, 94, 96 in one of a plurality of user interface messages 100, presented in a view 102 on a user interface 104 of the respective wireless device 12, 14, 16, 18, 20. In this case, a user of the respective device may select one or more time retrieval services 82, such as by providing an input through an input mechanism 106, such as a keypad, touch display, voice recognition software, etc., on the device.
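The following Python sketch illustrates, hypothetically, how a comparator such as comparator 98 might match a fetch-parameter configuration against each service's factors 42, 92, 94 and 96. The service list, the numbers, and the tie-breaking rule (prefer the highest confidence among qualifying services) are illustrative assumptions:

    SERVICES = [
        # (name, confidence 42, duration 92 (s), cost 94, performance 96)
        ("carrier_time_system_84",  0.90, 2.0, 0.10, 0.8),
        ("trusted_time_server_86",  0.95, 5.0, 0.25, 0.7),
        ("unauthenticated_88",      0.30, 1.0, 0.00, 0.9),
        ("local_time_system_45",    0.20, 0.0, 0.00, 1.0),
    ]

    def select_service(min_confidence=0.0, max_duration=float("inf"),
                       max_cost=float("inf"), min_performance=0.0):
        """Return the qualifying service with the highest confidence, if any."""
        candidates = [s for s in SERVICES
                      if s[1] >= min_confidence and s[2] <= max_duration
                      and s[3] <= max_cost and s[4] >= min_performance]
        return max(candidates, key=lambda s: s[1], default=None)

    print(select_service(min_confidence=0.8))  # trusted_time_server_86
    print(select_service(max_cost=0.0))        # unauthenticated_88 (free sources)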
For example, the plurality of user interface messages 100 may further include any combination of: a request to connect to a network to obtain a date/time result; a listing of available time retrieval services for obtaining a date/time result; a retrieved date/time result; a confidence factor; a retrieval duration factor; a retrieval cost factor; a retrieval performance factor; an actual retrieval duration; an actual retrieval cost; an actual retrieval performance; a message indicating the date/time retrieval is in process; etc.Date/time determination module 38 is included as a portion of APT 52. APT 52 includes a class of software extensions that allow applications resident on computer platform 22, such as application 24, to access ASIC/processor 50. These software extensions can communicate with processing subsystems 80 on the wireless device, which allows both data reads and commands. For example, this software extension can send commands, including register for log messages, on behalf of the applications that invoke it. Each resident application on wireless device can create an. instance of this new software extension to communicate with the subsystems independently. The module can then forward the responses of the subsystems to the requesting application, or across wireless network 46 to another computer device. For example, this capability allows application download server 60 and/or carrier 66, or any approved third party, to remotely monitor and/or control licensing privileges on the respective wireless device. [0035] Fig. 3 is a more detailed schematic diagram of a cellular telephone embodiment of Fig. 1. The cellular wireless network and plurality of cellular telephones 12 of Fig. 3 are merely exemplary, and the disclosed embodiments can include any system whereby any remote modules, such as wireless devices 12, 14, 16, 17, 18, communicate ovcr-thc- air between and among each other and/or between and among components of a wireless network, including, without limitation, wireless network carriers and/or servers. Fig. 3 illustrates three main components, namely a wireless network area 108, a network interface 110, and a server environment 112. In addition, computer platform 22 pertaining to exemplary cellular telephones 12 is illustrated.Wireless network area 108 is illustrated to include a plurality of cellular telephones 12. In addition, wireless network area 108 includes wireless network 46, as previously described with respect to Fig. 1. Here, wireless network 46 includes multiple base stations ("BTS") 114 and a mobile switching center ("MSC") 116. [0037] MSC 116 may be connected to network interface 110, specifically its component carrier network 118, through either a wired or wireline connection network 120. For example, network 120 may comprise a data services network, a switched voice services network, often referred to as POTS ("plain old telephone service"), and/or a combination of both, including for example an Internet portion of a network for data information transfer and a POTS portion of a network for voice information transfer. For example, typically, in network 120, network or Internet portions transfers data, and the POTS portion transfers voice information transfer.MSC 116 may also be connected to the multiple BTS's 114 by another network 122. Network 122 may carry data and/or switched voice information. 
For example, network 122 may comprise a data network, a voice network, and/or a combination of both, including for example an Internet portion of a network for data transfer and a POTS portion of a network for voice information transfer.

BTS 114 are wirelessly connected to exemplary cellular telephones 12 in wireless network area 108. For example, BTS 114 may ultimately broadcast messages wirelessly to cellular telephones 12 or receive messages wirelessly from cellular telephones 12, via switched voice services, data transfer services (including short messaging service ("SMS")), or other over-the-air methods.

As noted, the second main component of Fig. 3 is network interface 110. Specifically, network interface 110 is shown to include carrier network 118, data link 124 and local area network ("LAN") 126. The features and functions associated with data link 124 and LAN 126 are described below with reference to server environment 112.

Carrier network 118 is any regional, national or international network offering switched voice communication and/or data communication services. As such, carrier network 118 may include switched voice or data service provider communications facilities and lines, including data and/or switched voice information, or any combination of both, including for example an Internet portion of a network for data transfer and a POTS portion of a network for voice information transfer. In one embodiment, carrier network 118 controls messages, generally in the form of data packets, sent to or received from mobile switching center ("MSC") 116. [0042] The third main component of Fig. 3 is server environment 112. In one embodiment, server environment 112 is the environment wherein the above-described application download server 60 functions. As illustrated, server environment 112 may further include a separate data repository 128, and a data management server 130. [0043] Application download server 60 can be in communication over LAN network 126 (of network interface 110) with separate data repository 128 for storing applications 24 and/or licensing configurations 26 to download to wireless devices. Further, data management server 130 may be in communication with application download server 60 to provide post-processing capabilities, data flow control, etc. Data management server 130 may be a network carrier server, for example, a server that manages user account information or a server that provides trustworthy date/time results. Application download server 60, data repository 128 and data management server 130 may be present on the illustrated network with any other network components that are needed to provide cellular telecommunication services. Application download server 60, data repository 128 and/or data management server 130 communicate with carrier network 118 through a data link 124 (of network interface 110) such as the Internet, a secure LAN, WAN, or other network. [0044] In operation, referring to Fig. 4, one embodiment of a method for managing time sensitive application privileges on a wireless device includes receiving a request to execute an application having a time sensitivity requirement (Block 150). For example, API 52 may control the launching of applications on the respective wireless device.
When licensed application 24 is requested for launching, date/time determination module 38 receives the request to execute an application, and also receives or accesses the associated licensing configuration 26.

Based on licensing configuration 26, date/time determination module 38 then retrieves a date/time result 40 (Block 152). Date/time determination module 38 may utilize any of the factors associated with licensing configuration 26 to determine which time retrieval service 82 is utilized to fetch a date/time result 40. For example, comparator 98 may review the given configuration of fetch parameters 68, as well as the available time retrieval services 82, in order to find a service that matches the fetch configuration. In one embodiment, for example, date/time determination module 38 and the respective licensed application 24 communicate and automatically fetch a date/time result that complies with the given license 28 and time sensitivity requirements 30. In another embodiment, date/time determination module 38 automatically fetches a given date/time result 40 based on a user-defined parameter 62 or a third party-defined parameter 64. In another embodiment, date/time determination module 38 presents the user of the respective wireless device with the available time retrieval services 82, along with their associated factors 42, 92, 94, 96, and then retrieves one or more date/time results 40 based on a selection input by the user.

Further, the method includes associating a corresponding confidence factor 42 with date/time result 40 (Block 154). As mentioned above, in one embodiment, this association may be performed by date/time determination module 38 based on a source, i.e., the respective time system 44 or 45, providing the given date/time result 40. Alternatively, the association may be based on a certificate 90 indicating a level of authenticity and/or trustworthiness of the given date/time result 40. It should be understood, however, that these examples are to be construed as non-limiting, and any scheme may be utilized to associate a given date/time result 40 with a confidence factor 42. [0047] Additionally, the method includes determining execution of the application based on the confidence factor 42 (Block 156). For example, date/time determination module 38 automatically initiates execution of licensed application 24 upon receipt of a date/time result 40 that achieves the given fetch parameter configuration. In one embodiment, in order to determine whether or not to execute licensed application 24, date/time determination module 38 compares licensing configuration 26 with the retrieved date/time result 40 and its confidence factor 42 to ensure that the licensing configuration is achieved. In another embodiment, date/time determination module 38 passes the retrieved date/time result 40 and its associated confidence factor 42 to licensed application 24, which ensures that the licensing configuration is achieved, and then sends an "execute" or "do not execute" command to date/time determination module 38 or API 52 to either execute the application or to provide the user with a message indicating that the application is not executable based on the retrieved date/time result. [0048] Referring to Fig. 5, in another specific example of the operation of the described embodiments, a user 132 of the respective wireless device 12, 14, 16, 18, 20 provides an input to input mechanism 106 (Fig. 2) that generates a launch application request message 170.
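Tying Blocks 150 through 156 together, the following hypothetical Python sketch walks the same path end to end; the source-to-confidence table and the timestamp values are illustrative assumptions only (the Fig. 5 message sequence resumes below):

    # One possible source -> confidence association scheme (Block 154).
    SOURCE_CONFIDENCE = {
        "trusted_time_server": 0.95,
        "carrier": 0.90,
        "user_input": 0.20,
        None: 0.0,   # no date/time result retrieved
    }

    def handle_launch_request(license_expiry_ts, min_confidence,
                              retrieved_ts, source):
        confidence = SOURCE_CONFIDENCE.get(source, 0.0)       # Block 154
        if (retrieved_ts is not None
                and retrieved_ts <= license_expiry_ts         # time requirement
                and confidence >= min_confidence):            # Block 156
            return "execute"
        return "do not execute"

    # Timestamps are illustrative epoch seconds.
    print(handle_launch_request(1_900_000_000, 0.8, 1_700_000_000, "carrier"))
    print(handle_launch_request(1_900_000_000, 0.8, 1_700_000_000, "user_input"))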
Launch application request message 170 references a given licensed application 24, and hence the corresponding licensing configuration 26. As such, date/time determination module 38 ultimately receives this request, or receives an associated request for a date/time, and generates a date/time request message 172 that is received by ASIC/processor 50 or an associated operating system component of the respective wireless device. ASIC/processor 50 then supplies an initial reply message 174 to date/time determination module 38. Initial reply message 174 may be any one of a plurality of messages relating to the current state of time maintained by the respective wireless device. For example, initial reply message 174 may be a message indicating that ASIC/processor 50 does not have any time. In this case, date/time determination module 38 sends user 132 an initial status message 176. For example, initial status message 176 is one of the plurality of user interface messages 100 in a view 102 that relays the fact that no time is currently available, and that requests permission to retrieve a date/time result 40 from a remote time system 44. In another embodiment, initial reply message 174 may be a list of available time retrieval services 82 and their associated factors 42, 92, 94 and 96. In this case, date/time determination module 38 generates initial status message 176, which is one of the plurality of user interface messages 100 in a view 102, indicating the available time retrieval services 82 and their associated factors 42, 92, 94 and 96, and prompts user 132 for a selection. [0050] Upon receiving initial status message 176, user 132 may then provide a user selection message 178 indicating a choice of the user as to how to proceed. Continuing with the above-listed examples, user selection message 178 may indicate a permission to utilize a remote time system 44 in the retrieval of date/time result 40, or may indicate a selected time retrieval service 82 based on a user's desired confidence factor, retrieval duration, retrieval cost, and/or retrieval performance. For example, if user 132 only desires to utilize a basic functional mode 36 of application 24, the user may select one of a plurality of time retrieval services 82 having a relatively low confidence factor 42, as compared to other services having higher confidence factors that meet a higher threshold corresponding to a more advanced functional mode 36. In any case, date/time determination module 38 then generates a date/time request message 180 based on the received user selection message 178. The respective wireless device processes this date/time request message 180 and transmits it across wireless network 46 to a corresponding remote time system 44. In turn, the respective remote time system 44 returns a date/time response message 182, which includes at least date/time result 40, and may also include confidence factor 42, certificate 90 and/or any other associated factor relating to the retrieval of the date/time result. If the received date/time result 40 and the associated confidence factor 42 meet the licensing configuration 26, then date/time determination module 38 executes licensed application 24 (message 184). If the received date/time result 40 and the associated confidence factor 42 do not meet the licensing configuration 26, or if no date/time result 40 and/or confidence factor 42 is received, then date/time determination module 38 sends a status message 186 to user 132.
In this case, status message 186, which is one of the plurality of user interface messages 100 in a view 102, indicates, for example, that licensed application 24 cannot be executed. It should be understood that the above message sequence is but one of a plurality of message sequence scenarios, and that many alternatives exist depending on the given situation. For example, rather than initially communicating with user 132 (i.e. messages 176 and 178), date/time determination module 38 may automatically retrieve a date/time result 40 based on an associated licensing configuration 26. [0053] While the various disclosed embodiments have been illustrated and described, it will be clear that the subject matter of this document is not limited to these embodiments only. Numerous other modifications, changes, variations, substitutions and equivalents will be apparent to those skilled in the art without departing from the spirit and scope of the disclosed embodiments as described in the claims.
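As a non-authoritative illustration of the Fig. 4 flow (Blocks 150-156), the following Python sketch models fetching a date/time result, associating a confidence factor with it, and gating execution on the licensing configuration. All names here (TimeService, DateTimeResult, LicensingConfiguration, may_execute, and the numeric confidence and expiry fields) are hypothetical stand-ins for illustration, not part of the described wireless-device API.

from dataclasses import dataclass
import time

@dataclass
class DateTimeResult:
    timestamp: float   # seconds since epoch, as reported by the time source
    confidence: int    # confidence factor: higher means a more trusted source

@dataclass
class TimeService:
    name: str
    confidence: int

    def fetch(self) -> DateTimeResult:
        # A real implementation would query a remote time system over the
        # wireless network; this stub simply reports the local clock.
        return DateTimeResult(timestamp=time.time(), confidence=self.confidence)

@dataclass
class LicensingConfiguration:
    expiry: float        # license is valid for timestamps before this value
    min_confidence: int  # minimum acceptable confidence factor

def select_service(services, config):
    # Block 152: choose a time retrieval service that matches the fetch
    # configuration (a stand-in for the comparator 98 review).
    for service in services:
        if service.confidence >= config.min_confidence:
            return service
    return None

def may_execute(config, services):
    # Blocks 152-156: retrieve a date/time result, associate its confidence
    # factor, and decide whether the licensing configuration is achieved.
    service = select_service(services, config)
    if service is None:
        return False  # no suitable source; report "not executable" to the user
    result = service.fetch()
    return (result.confidence >= config.min_confidence
            and result.timestamp < config.expiry)

The sketch keeps the two checks separate on purpose: the confidence comparison models the trustworthiness requirement, while the timestamp comparison models the time sensitivity requirement of the license.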
Described is a system and method for centralized synchronization for the transportation of data between devices in different clock domains. In a preferred embodiment, synchronization logic synchronizes read data from an asynchronous peripheral to a bus clock. Rather than being located on each peripheral, the synchronization logic is located in the bus interface logic. When there is an indication that synchronization is needed for a peripheral, the synchronization logic samples the data bus twice or more and compares the values of consecutive data samples. If the data samples are equal, this data is returned to the bus master. If they are different, the data in the next cycle is returned to the bus master.
CLAIMS 1. A circuit for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, the circuit comprising: a data register to sample data placed on a data bus by the first device during a first bus clock cycle; a comparator to compare data on the data bus during a second, consecutive bus clock cycle to the data sampled by the data register; a multiplexor to output the sampled data to the second device during a third, consecutive bus clock cycle when the sampled data is equal to the data on the data bus during the second, consecutive bus clock cycle and to output data on the data bus to the second device during the third, consecutive bus clock cycle when the sampled data is not equal to the data on the data bus during the second bus clock cycle. 2. A circuit for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 1, wherein the first device is an asynchronous peripheral in a computer system and the second device is a bus master. 3. A circuit for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 1, wherein the first device is maintained in a list indicating that synchronization is needed when data is transported to the second device and the circuit automatically performs synchronization when the data is transported to the second device. 4. A circuit for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 1, wherein the first device provides a signal to the circuit to indicate that synchronization is needed when the data is transported to the second device. 5. A circuit for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 1, wherein the first device provides a signal that indicates a duration of time for transporting data to the second device needs to be extended. 6. A circuit for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 1, wherein the first device provides to the circuit a single signal used to both indicate that synchronization is needed and to indicate a duration of time for transporting data to the second device needs to be extended. 7. A method for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, the method comprising: sampling data placed on a data bus by the first device during a first bus clock cycle; comparing data on the data bus during a second consecutive bus clock cycle to the sampled data; for the sampled data equal to the data on the data bus during the second consecutive bus clock cycle, outputting the sampled data to the second device during a third consecutive bus clock cycle; and for the sampled data not equal to the data on the data bus during the second consecutive bus clock cycle, outputting data on the data bus during the third consecutive bus clock cycle to the second device. 8.
A method for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 7, wherein the first device is an asynchronous peripheral in a computer system and the second device is a bus master. 9. A method for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 7, wherein the first device is maintained in a list indicating that synchronization is needed when data is transported to the second device and the circuit automatically performs synchronization when the data is transported to the second device. 10. A method for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 7, wherein the first device provides a signal to the circuit to indicate that synchronization is needed when the data is transported to the second device. 11. A method for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 7, wherein the first device provides a signal that indicates a duration of time for transporting data to the second device needs to be extended. 12. A method for providing synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain, as per claim 7, wherein the first device provides to the circuit a single signal used to both indicate that synchronization is needed and to indicate a duration of time for transporting data to the second device needs to be extended. 13. A computer comprising: an asynchronous peripheral that transmits read data to a bus master by placing the read data on a peripheral bus; bus interface logic to synchronize the read data, the bus interface logic comprising: a data register to sample read data on the peripheral bus during a first bus clock cycle; a comparator to compare read data on the peripheral bus during a second consecutive bus clock cycle to the sampled read data; and a multiplexor to output the sampled read data during a third consecutive bus clock cycle when the sampled read data is equal to the read data on the peripheral bus during the second consecutive bus clock cycle and to output read data on the peripheral bus during the third consecutive bus clock cycle when the sampled read data is not equal to the read data on the peripheral bus during the second consecutive bus clock cycle. 14. A computer, as per claim 13, wherein the peripheral is maintained in a list indicating that synchronization is needed for a read access and the bus interface logic automatically performs synchronization when a read access is performed. 15. A computer, as per claim 13, wherein the peripheral provides a signal to the bus interface logic to indicate that synchronization is needed when a read access is performed. 16. A computer, as per claim 13, wherein the peripheral provides a signal that indicates a duration of time for transporting data to the bus master needs to be extended. 17. A computer, as per claim 13, wherein the peripheral provides to the bus interface logic a single signal used to both indicate that synchronization is needed and to indicate a duration of time for transporting data to the bus master needs to be extended.
DATA SYNCHRONIZATION ON A PERIPHERAL BUS

BACKGROUND OF THE INVENTION

The invention relates to the field of synchronizing asynchronous data, and in particular to synchronizing read data from an asynchronous peripheral to a bus clock. Flip-flops are often used as storage elements in digital logic systems. Flip-flops sample their inputs on the rising or falling edge of the clock and continue to hold the sampled input as their output until the next clock edge. Because of the use of flip-flops in digital logic systems, metastability is an important design consideration that almost all designers of digital logic systems must contend with. When a flip-flop goes into a metastable state, its output is unknown and may be between a logic HIGH and a logic LOW, or may be oscillating. If the output does not resolve to a stable value before the next clock edge, the metastable condition may be passed to other logic devices connected to the output of the flip-flop. Further, even if the output resolves to a stable value before the next clock edge, the value may be incorrect, causing invalid data to be passed to other logic devices connected to the output of the flip-flop. Metastability arises when a flip-flop input changes during the setup and/or hold time of a flip-flop. In most digital logic systems, the inputs to the flip-flops do not change during the setup and hold times because the systems are designed as totally synchronous systems, which meet or exceed their components' specifications. In a totally synchronous design, the inputs to the flip-flops have a fixed relationship to the clock, i.e., they are synchronized to the clock. There are some systems, however, in which a totally synchronous design using a single master clock is not possible, or where certain advantages are gained from using an asynchronous design. In these systems, there is a need to interconnect subsystems that have no defined relationship between their clocks, i.e., different clock domains. This often results in a need to provide data from one of the clock domains as an asynchronous input to a flip-flop in the other clock domain. For these systems to function properly, there is a need to synchronize the incoming asynchronous input to the clock domain of the flip-flop. While described in relation to flip-flops, the metastable condition and its associated difficulties also apply to other types of storage elements in a digital logic system, such as latches or combinations of latches. Such synchronization is often needed for transferring data from asynchronous peripherals to a bus master in a computer system. Figure 1 illustrates a block diagram of a computer system having an asynchronous peripheral 106. Most modern bus systems provide some type of bus interface logic 100, which controls the transfer of data between a peripheral 106 and a bus master 104 using bus 102. Bus interface logic 100 operates in one clock domain, which may be synchronized to the clock domain of bus master 104. Peripheral 106, however, operates in its own clock domain, which is different from the clock domain of bus interface logic 100. Typically, the frequency of the clock domain of peripheral 106 is slower than the clock domain of bus interface logic 100. When data is being read from asynchronous peripheral 106, it places the data on bus 102. The data on the bus is then sampled using flip-flops set up as a register in either bus interface logic 100 or bus master 104.
Without synchronization of the read data to the bus clock, an asynchronous peripheral may place the data on the bus during the set-up and/or hold times of the flip-flops used by bus interface logic 100, causing one or more of them to go into a metastable state. Therefore, in most systems, peripheral 106 synchronizes the placement of data on bus 102 to the clock domain of bus interface logic 100 using synchronization logic 108 on peripheral 106. Peripheral 106 receives the bus clock and synchronization logic 108 synchronizes the placement of the read data on bus 102 with the bus clock. Figure 2 illustrates typical logic on peripheral 106 for outputting read data to bus 102, including synchronization logic 108. For a peripheral to synchronize the data to the bus clock and provide a stable value on the bus, synchronization logic 108 generally requires a read buffer 202 that isolates the internal data change/update inside the peripheral from the read data driven onto the bus. An update_enable signal is asserted to read data that is to be placed on the bus into an internal register 206. Generally, the update_enable signal is long enough to be synchronized to the bus clock. However, some applications may use a short update_enable signal with logic that extends the update_enable signal for proper synchronization to the bus clock. Logic 204 then synchronizes the update_enable signal with the bus clock so that the data is transferred to read buffer 202 and, consequently, placed upon the bus in a manner synchronized to the bus clock. Synchronization performed on each peripheral, however, is disadvantageous because synchronization logic is needed on each peripheral. Further, when synchronization is performed by each peripheral, synchronization is decentralized and not necessarily performed the same way for each peripheral. It would therefore be advantageous to be able to provide stable, valid data from an asynchronous peripheral to a bus master without requiring each peripheral to synchronize its output to the bus clock. More generally, it would be advantageous to provide centralized synchronization for the transportation of data between devices in different clock domains.

SUMMARY OF THE INVENTION

One aspect of the present invention provides a circuit for synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain. The circuit comprises a data register to sample data placed on a data bus by the first device during a first bus clock cycle and a comparator to compare data on the data bus during a second, consecutive bus clock cycle to the data sampled by the data register. The circuit also comprises a multiplexor to output the sampled data to the second device during a third, consecutive bus clock cycle when the sampled data is equal to the data on the data bus during the second, consecutive bus clock cycle and to output data on the data bus to the second device during the third, consecutive bus clock cycle when the sampled data is not equal to the data on the data bus during the second bus clock cycle. Another aspect of the present invention provides a method for synchronized transportation of data from a first device having a first clock domain to a second device having a second clock domain. Data placed on a data bus by the first device is sampled during a first bus clock cycle. Data on the data bus during a second consecutive bus clock cycle is compared to the sampled data.
When the sampled data is equal to the data on the data bus during the second consecutive bus clock cycle, the sampled data is output to the second device during a third consecutive bus clock cycle. When the sampled data is not equal to the data on the data bus during the second consecutive bus clock cycle, data on the data bus during the third consecutive bus clock cycle is output to the second device. Another aspect of the present invention provides a computer comprising an asynchronous peripheral that transmits read data to a bus master by placing the read data on a peripheral bus and bus interface logic to synchronize the read data. The bus interface logic comprises a data register to sample read data on the peripheral bus during a first bus clock cycle; a comparator to compare read data on the peripheral bus during a second consecutive bus clock cycle to the sampled read data; and a multiplexor to output the sampled read data during a third consecutive bus clock cycle when the sampled read data is equal to the read data on the peripheral bus during the second consecutive bus clock cycle and to output read data on the peripheral bus during the third consecutive bus clock cycle when the sampled read data is not equal to the read data on the peripheral bus during the second consecutive bus clock cycle.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates a block diagram of a computer system having an asynchronous peripheral; Figure 2 illustrates prior art synchronization logic on a peripheral for outputting read data to a peripheral bus; Figure 3 illustrates a block diagram of a computer system according to the present invention, which has an asynchronous peripheral; Figure 4 illustrates the method of the present invention implemented by synchronization logic; Figure 5a illustrates one embodiment of synchronization logic that performs the method of figure 4; Figure 5b illustrates timing waveforms for nPWAIT deasserted for one bus clock cycle; and Figure 5c illustrates timing waveforms when nPWAIT is deasserted for more than one clock cycle by an asynchronous peripheral to extend the duration of the read access.

DETAILED DESCRIPTION OF THE INVENTION

While the present invention will be described in relation to a preferred embodiment of synchronization between a peripheral and a bus master in a computer system, it is not limited thereto. The present invention is envisioned as being applicable to the transportation of digital data from any source device with a first clock domain to any destination device with a second clock domain. Figure 3 illustrates a block diagram of a computer system according to the present invention, which has an asynchronous peripheral 306. The computer system according to the present invention is similar to the computer system of figure 1. As with the computer system of figure 1, the computer system according to the present invention has bus interface logic 300, which controls the transfer of data between a peripheral 306 and a bus master 304 using bus 302. Bus interface logic 300 operates in one clock domain, which is usually synchronized to the clock domain of bus master 304. Peripheral 306, however, operates in its own clock domain, which is different from the clock domain of bus interface logic 300. The frequency of the clock of peripheral 306 is typically, although not always, slower than the clock of bus interface logic 300.
Asynchronous peripheral 306, however, does not change the read data faster than the frequency of the clock of bus interface logic 300. When data is being read from peripheral 306, it places the data on bus 302 and bus interface logic 300 uses flip-flops configured as a register to sample the data. However, instead of having synchronization logic in each peripheral, logic 308 is built into bus interface logic 300 to ensure that data read from bus 302 is valid and stable before it is passed to bus master 304. Therefore, for a read from peripheral 306, peripheral 306 places read data onto bus 302 asynchronously from the bus clock. Logic 308 then ensures valid data is passed to bus master 304 by implementing the method illustrated in figure 4. As illustrated, logic 308 first determines that peripheral 306 is asynchronous (step 400). Logic 308 then samples the data peripheral 306 places on the bus twice or more (step 402). Logic 308 compares the values of consecutive data samples (step 404). If the data samples are equal, the sampled data is returned to bus master 304 as valid data (step 406). If they are different, the data in the next cycle is returned to bus master 304 as valid data (step 408). Figure 5a illustrates one embodiment of synchronization logic 308 that performs the method of figure 4. In this embodiment, the data is a set of related bits representing a single piece of information, e.g., a data word. These bits are kept coherent in order to have valid information. While not shown, embodiments in which the data is a combination of independent bits are envisioned within the scope of the present invention. In this case, there is no need to keep coherence between the bits and they can be changed independently. Similarly, while a register with a gated clock is illustrated as the mechanism for placing data on the bus, other implementations of doing so are envisioned. For example, a register with a feedback multiplexor, an individual register with its own clock for each bit, or an ungated clock can be used. Likewise, even though the bus is illustrated as a tri-state bus, one of skill in the art would appreciate that any type of bus is within the scope of the present invention. When a read transaction is initiated (e.g., by a Read Enable signal provided by bus master 304), a signal, nPWAIT, is provided from peripheral 306 to bus interface logic 300 to inform bus interface logic 300 that peripheral 306 is asynchronous and, consequently, synchronization needs to be performed. In many bus systems, a signal is also used to extend the duration of the read access for slow peripherals, i.e., for peripherals that run at a lower frequency or for any other reason do not have their data ready within one bus clock cycle. The illustrated embodiment of synchronization logic 308 combines the use of nPWAIT to inform bus interface logic that synchronization needs to be performed and to extend the read access duration. Rather than using a read buffer in peripheral 306, data is driven from a register 506 directly to a peripheral bus 510 upon a read access. Also upon a read access, peripheral 306 sends the nPWAIT signal LOW. When peripheral 306 needs to extend the duration of the read access, nPWAIT is sent LOW for the number of bus clock cycles peripheral 306 needs the read access extended. When peripheral 306 only needs to inform bus interface logic 300 that synchronization is needed, nPWAIT is sent LOW for one bus clock cycle.
In this case, the peripheral bus access lasts three bus clock cycles. When peripheral 306 indicates that synchronization is needed, bus interface logic 300 performs synchronization using synchronization logic 308. Synchronization logic 308 comprises a data register 500 connected to peripheral bus 510, a comparator 502 with one input connected to peripheral bus 510 and the other input connected to the output of data register 500, and a multiplexor 504 for choosing between peripheral bus 510 and the output of data register 500 as the input, DIN, to bus master 304. The input DIN is provided to a register (not shown) that bus master 304 uses to sample in the data from peripheral 306. Synchronization logic 308 also comprises logic 508 for controlling the operation of data register 500, comparator 502 and multiplexor 504 to implement the method illustrated in figure 4. Operation of synchronization logic 308 is discussed in conjunction with figure 5b, which illustrates timing waveforms for nPWAIT deasserted for one bus clock cycle. While the bus clock, PCLK, is shown as having the same frequency and phase relationship as the bus master's clock, MCLK, this is not necessary. At times, there may be a slight skew between these two clocks, or the PCLK may be divided from MCLK, therefore having a slower frequency, depending upon the system implementation. As shown, during a read access, nPWAIT is deasserted during the first bus clock cycle of the read. When nPWAIT is deasserted, the data placed on bus 510 during the first bus clock cycle, Data 1, is read into data register 500 at the end of the first bus clock cycle. Comparator 502 compares the data on bus 510 during the second bus clock cycle, Data 2, with Data 1 in data register 500. At the beginning of the third bus clock cycle, the result of comparator 502 is used to control multiplexor 504 to output either Data 1 from data register 500 or the data on bus 510 during the third bus clock cycle. As can be seen, when Data 1 is equal to Data 2, the output of data register 500 (i.e., Data 1) is output by multiplexor 504 as the input, DIN, to bus master 304. However, when Data 1 is not equal to Data 2, the data on bus 510 is output by multiplexor 504 as the input, DIN, to bus master 304. Bus interface logic 300 provides a wait signal, nWAIT, that places bus master 304 into a wait state until DIN is valid. The signal nWAIT places bus master 304 into a wait state in any appropriate manner, such as disabling the bus master's clock or preventing bus master 304 from progressing to the next state by performing NOPs. Figure 5c illustrates timing waveforms when nPWAIT is deasserted for more than one clock cycle by peripheral 306 to extend the duration of the read access. As shown, when nPWAIT is deasserted for more than one bus clock cycle, the data output by multiplexor 504 as DIN is the data on bus 510 on the last cycle of the access (i.e., the cycle following the rising edge of nPWAIT). While nPWAIT has been shown as indicating the need for synchronization and used to extend a read access, it is possible to use two separate signals for each function. Further, it is also possible to eliminate the need for a signal to indicate that the peripheral is asynchronous. By maintaining a list of which peripherals are asynchronous and automatically implementing the present invention for those peripherals, the bus interface logic can still perform synchronization without the need for a signal to indicate synchronization is needed.
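As an illustration only, the figure 4 selection rule can be modelled in a few lines of Python. The function below treats the value seen on peripheral bus 510 during each of the three bus clock cycles of the access as a plain integer; it is a behavioural sketch of the sample/compare/select decision made by data register 500, comparator 502 and multiplexor 504, not the actual register-transfer implementation.

def synchronize_read(data1, data2, data3):
    # data1..data3: value on peripheral bus 510 during the first, second and
    # third bus clock cycles of the read access.
    sampled = data1              # data register 500 samples on cycle 1
    if sampled == data2:         # comparator 502 checks stability on cycle 2
        return sampled           # multiplexor 504 forwards the stable sample
    return data3                 # otherwise forward the live bus on cycle 3

# Stable data is forwarded from the register; data that changed between the
# first two cycles has settled by the third cycle, so the live bus is used.
assert synchronize_read(0xAB, 0xAB, 0xAB) == 0xAB
assert synchronize_read(0xA0, 0xAB, 0xAB) == 0xAB

The rationale, per the description above, is that the peripheral does not change its read data faster than the bus clock, so two equal consecutive samples imply the data was stable when sampled; two unequal samples imply the change happened early in the access, leaving the third-cycle bus value stable.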
Although the present invention has been shown and described with respect to a preferred embodiment thereof, various changes, omissions and additions to the form and detail thereof may be made therein without departing from the spirit and scope of the invention.
An electronic device (100A) includes a driver circuit (134) embodied on an IC chip (132). The driver circuit includes a threshold voltage selection circuit (114) that is coupled to receive a horn comparator threshold setting (HORN_THR) and to use the horn comparator threshold setting to provide a horn comparator threshold voltage (Vhorn_thr). The driver circuit also includes a comparator (148) that has a non-inverting input coupled to a first pin (P1) and an inverting input coupled to receive the horn comparator threshold voltage.
CLAIMS What is claimed is: 1. An electronic device comprising a driver circuit embodied on an integrated circuit (IC) chip, the driver circuit comprising: a threshold voltage selection circuit having an input coupled to a digital core and an output coupled to provide a selectable horn comparator threshold voltage; and a comparator having a non-inverting input coupled to a first pin and an inverting input coupled to the output of the threshold voltage selection circuit. 2. The electronic device as recited in claim 1 wherein the threshold voltage selection circuit comprises: a resistor ladder coupled between an upper supply voltage for the driver circuit and a lower supply voltage; and a plurality of switches, each switch of the plurality of switches having a first terminal coupled to a respective location on the resistor ladder and a second terminal that can be selectively coupled to the output of the threshold voltage selection circuit. 3. The electronic device as recited in claim 2 wherein the digital core comprises: a bus interface that operates under a bus protocol, the bus interface being coupled to a serial data pin and to a serial clock pin; and a plurality of registers that includes a register for storing a horn comparator threshold setting, the digital core being further coupled to utilize the horn comparator threshold setting to control the plurality of switches. 4. The electronic device as recited in claim 3 wherein the bus protocol is Inter-Integrated Circuit (I2C) protocol. 5. The electronic device as recited in claim 2 wherein the threshold voltage selection circuit further comprises a cutoff N-type field effect transistor (NFET) coupled between the resistor ladder and the lower supply voltage, the cutoff NFET coupled to be turned on when the threshold voltage selection circuit is active. 6. The electronic device as recited in claim 5 further comprising: a first set of one or more inverters coupled between the output of the comparator and a second pin; and a second set of one or more inverters coupled between the output of the comparator and a third pin, wherein one of the first set and the second set has an odd number of inverters and a remaining set has an even number of inverters. 7. The electronic device as recited in claim 5 further comprising: a first amplifier coupled between the output of the comparator and a second pin; and a delay buffer, an inverter and a second amplifier coupled in series between the output of the comparator and a third pin. 8. The electronic device as recited in claim 7 further comprising: a first NFET coupled between the second pin and the lower supply voltage; and a second NFET coupled between the third pin and the lower supply voltage, a gate of the first NFET and a gate of the second NFET being coupled to be on when the driver circuit is not enabled. 9. The electronic device as recited in claim 8 further comprising: a first discharge resistor coupled between the second pin and the first NFET; and a second discharge resistor coupled between the third pin and the second NFET. 10. The electronic device as recited in claim 3 wherein the IC chip further comprises: a carbon monoxide detection circuit coupled to a plurality of CO-detection pins; a photo-detection circuit coupled to a plurality of photo-detection pins; and a multiplexor coupled to receive outputs from the carbon monoxide detection circuit and the photo-detection circuit, the multiplexor further coupled to a fourth pin for communicating the outputs. 11.
The electronic device as recited in claim 10 wherein the IC chip further comprises an ion detection circuit coupled to a plurality of ion-detection pins, the multiplexor being further coupled to receive outputs from the ion detection circuit. 12. The electronic device as recited in claim 10 wherein the electronic device comprises a smoke alarm in which the IC chip is coupled between a microcontroller and a piezo buzzer, the microcontroller being coupled to a plurality of microcontroller pins on the IC chip that include the fourth pin, the piezo buzzer having a first input electrode and a feedback electrode on a first side and a second input electrode on an opposite side, the feedback electrode being coupled to the first pin, the first input electrode being coupled to the second pin and the second input electrode being coupled to the third pin. 13. The electronic device as recited in claim 12 wherein the smoke alarm further comprises: a carbon monoxide sensor coupled to the plurality of CO-detection pins; and a photo sensor coupled to the plurality of photo-detection pins. 14. The electronic device as recited in claim 13 wherein the IC chip further comprises an ion detection circuit coupled to a plurality of ion-detection pins, the multiplexor being further coupled to receive outputs from the ion detection circuit and further wherein the smoke alarm further comprises an ion sensor coupled to the plurality of ion-detection pins. 15. A method of operating a piezo buzzer, the method comprising: coupling a driver circuit for the piezo buzzer between a microcontroller and the piezo buzzer; providing a first horn comparator threshold setting of a plurality of horn comparator threshold settings to the driver circuit and determining a first duty cycle of the piezo buzzer using the first horn comparator threshold setting; providing a second horn comparator threshold setting of the plurality of horn comparator threshold settings to the driver circuit and determining a second duty cycle of the piezo buzzer using the second horn comparator threshold setting; and selecting a horn comparator threshold setting of the plurality of horn comparator threshold settings that provides a respective duty cycle that is closest to fifty percent. 16. The method as recited in claim 15 wherein providing a respective horn comparator threshold setting comprises: coupling a non-inverting input of a comparator in the driver circuit to a feedback electrode of the piezo buzzer; and controlling a plurality of switches that each couples a respective horn comparator threshold voltage from a resistor ladder to an inverting input of the comparator. 17. The method as recited in claim 16 wherein determining a respective duty cycle comprises: activating the driver circuit; and measuring a respective duty cycle of an output signal sent by the comparator. 18. The method as recited in claim 15 wherein the microcontroller provides the first horn comparator threshold setting, the second horn comparator threshold setting and the selected horn comparator threshold setting to an integrated circuit containing the driver circuit using a bus and a bus protocol. 19. The method as recited in claim 18 wherein the bus protocol is Inter-Integrated Circuit (I2C). 20.
The method as recited in claim 15 further comprising, prior to performing the selecting, for additional horn comparator threshold settings of the plurality of horn comparator threshold settings, providing respective ones of the additional horn comparator threshold settings to the driver circuit and determining respective duty cycles of the piezo buzzer using the respective additional horn comparator threshold settings.
DUTY CYCLE TUNING IN SELF-RESONANT PIEZO BUZZER

BACKGROUND

[0001] The behavior of piezoelectric materials in a piezo buzzer varies from part to part and is further influenced by mechanical stresses from surrounding elements. Each piezo buzzer thus has a unique resonant frequency. In order to achieve maximum loudness, the piezo buzzers can be used in self-resonant mode, where a feedback terminal provides positive feedback to create a sustained oscillation at the piezo buzzer’s own resonant frequency. [0002] The loudness of a piezo buzzer, also referred to herein as a horn, depends on three major parameters: (1) frequency, (2) amplitude and (3) duty cycle of the clock signal applied across its plates. The maximum achievable loudness from any buzzer is at the buzzer’s resonant frequency, the maximum possible differential amplitude and fifty percent (50%) duty cycle. The oscillation amplitude is almost always set to a maximum possible voltage that the circuit can generate. Prior art driving circuits for a self-resonant piezo buzzer optimize only the frequency and amplitude.

SUMMARY

[0003] Disclosed embodiments provide an electronic device that includes an integrated circuit (IC) chip having a driver circuit for a piezo buzzer. The driver circuit uses a comparator, which has a horn comparator threshold voltage that is programmable, to convert the feedback voltage from an analog signal to a digital signal. Adjusting the horn comparator threshold voltage changes the duty cycle of the driving signals sent to the piezo buzzer. A number of horn comparator threshold settings, e.g., four, are provided. The piezo buzzer can be tested using two or more of the horn comparator threshold settings and a horn comparator threshold setting that provides a duty cycle that best approaches fifty percent can be selected. The ability to vary the duty cycle can be utilized to adjust for differences in piezo buzzers and in the fabrication of the driver circuit, which can affect the common mode and amplitude of the horn feedback signal. Accordingly, the loudness of the piezo buzzer can be further enhanced. [0004] In one aspect, an embodiment of an electronic device is disclosed. The electronic device includes a threshold voltage selection circuit coupled to receive a horn comparator threshold setting and to use the horn comparator threshold setting to provide a horn comparator threshold voltage; and a comparator having a non-inverting input coupled to a first pin and an inverting input coupled to receive the horn comparator threshold voltage. [0005] In another aspect, an embodiment of a method of operating a piezo buzzer is disclosed.
The method includes coupling a driver circuit for the piezo buzzer between a microcontroller and the piezo buzzer; providing a first horn comparator threshold setting of a plurality of horn comparator threshold settings to the driver circuit and determining a first duty cycle of the piezo buzzer using the first horn comparator threshold setting; providing a second horn comparator threshold setting of the plurality of horn comparator threshold settings to the driver circuit and determining a second duty cycle of the piezo buzzer using the second horn comparator threshold setting; and selecting a horn comparator threshold setting of the plurality of horn comparator threshold settings that provides a respective duty cycle that is closest to fifty percent.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that different references to "an" or "one" embodiment in this disclosure are not necessarily to the same embodiment, and such references may mean at least one. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. As used herein, the term "couple" or "couples" is intended to mean either an indirect or direct electrical connection unless qualified as in "communicably coupled" which may include wireless connections. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections. [0007] The accompanying drawings are incorporated into and form a part of the specification to illustrate one or more exemplary embodiments of the present disclosure. Various advantages and features of the disclosure will be understood from the following Detailed Description taken in connection with the appended claims and with reference to the attached drawing figures in which: [0008] FIG. 1 depicts a block diagram of an electronic device that includes a driver circuit and a piezo buzzer according to an embodiment of the disclosure; [0009] FIG. 1A depicts a block diagram of an electronic device that includes a driver circuit and a piezo buzzer according to an embodiment of the disclosure; [0010] FIGS. 2A-2D each depicts a duty cycle achieved using different horn comparator threshold settings for the comparator according to an embodiment of the disclosure; [0011] FIG. 3 depicts a method of operating a driver for a piezo buzzer according to an embodiment of the disclosure; [0012] FIGS. 3A-3C depict additional elements of the method of FIG. 3 according to an embodiment of the disclosure; [0013] FIG. 4 depicts a block diagram of a smoke alarm that can include the disclosed driver circuit according to an embodiment of the disclosure; [0014] FIG. 5 depicts a piezoelectric buzzer that can be used with the disclosed driver circuit according to an embodiment of the disclosure; [0015] FIGS. 6A-6C depict the working principle of a piezo buzzer; [0016] FIG. 7A depicts an analog circuit for a self-drive piezoelectric buzzer; and [0017] FIG.
7B depicts a digital circuit for a self-drive piezoelectric buzzer according to the prior art.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0018] Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description. [0019] A piezoelectric diaphragm or piezoelectric buzzer, also referred to herein simply as a piezo buzzer or horn, can be either externally driven or self-driven. An externally driven piezo buzzer contains two electrodes, while a self-driven piezo buzzer has an additional feedback electrode that is used to drive the piezo buzzer to a resonant frequency. [0020] FIG. 5 depicts a piezo buzzer 500 for use in a self-driving circuit. Piezo buzzer 500 includes a piezo input electrode 502, a piezo feedback electrode 504 and a metal input electrode 506. Piezo input electrode 502 and piezo feedback electrode 504 are each made of a piezoelectric ceramic material, are electrically isolated from each other, and are mounted to one side of metal input electrode 506 with an adhesive (not specifically shown). Piezoelectric materials exhibit specific phenomena known as the piezoelectric effect and the reverse piezoelectric effect. Exposure to mechanical strain will cause the material to develop an electric field, and exposure to an electric field will cause the material to deform due to the mechanical strain. [0021] FIGS. 6A-6C depict the working principle of a piezo buzzer. In FIG. 6A, the piezo element 602A receives a positive voltage and metal plate 604A receives a negative voltage, causing piezo element 602A to expand and bending metal plate 604A away from piezo element 602A. In FIG. 6B, the piezo element 602B receives a negative voltage and metal plate 604B receives a positive voltage, causing piezo element 602B to contract and thus bending metal plate 604B towards piezo element 602B. FIG. 6C depicts an alternating current being applied to piezoelectric buzzer 606, causing vibrations in piezoelectric buzzer 606 to generate sound waves. [0022] While magnetic buzzers can also be fabricated, piezo buzzers have lower current consumption while maintaining a higher sound pressure level. These attributes make them desirable in devices that rely on battery power but need high sound pressure levels, e.g., smoke alarms. Piezo buzzers generally have a wide operating voltage, e.g., between about 3 V and about 250 V, and a low current consumption, e.g., less than 30 mA in sound indication applications. [0023] FIG. 7A depicts an electronic device 700A containing an analog self-drive circuit 701 for a piezoelectric buzzer 704. Analog self-drive circuit 701 includes a bipolar junction transistor (BJT) 702 that is coupled between an upper supply voltage and a lower supply voltage, which may be a ground node. The collector of BJT 702 is coupled to the upper supply voltage through resistor Ra and the emitter is coupled to the lower supply voltage.
The piezo input electrode 706 of piezoelectric buzzer 704 is coupled to a node 703 between resistor Ra and the collector of BJT 702 while the metal input electrode 710 is coupled to the lower supply voltage. The piezo feedback electrode 708 is coupled to the base of BJT 702 through resistor Rb. Additionally, a node 705 between the piezo feedback electrode 708 and resistor Rb is coupled to the upper supply voltage through resistor Rc, with a switch SW between node 705 and resistor Rc allowing BJT 702 to be turned off. [0024] FIG. 7B depicts an electronic device 700B in which a driver circuit 721 on IC chip 720 is used to drive piezo buzzer 722. In driver circuit 721, an analog-to-digital (A2D) buffer 724 has an input coupled to a first pin P1 and an output coupled to an input of both inverter 726 and inverter 730. Inverter 726 has an output coupled to an input of inverter 728 and inverter 728 has an output coupled to a second pin P2. Inverter 730 has an output coupled to third pin P3. [0025] Piezo buzzer 722 has a first input electrode 732, which is the piezo input electrode, a feedback electrode 734, which is also piezoelectric, and a second input electrode 736, which is the metal input electrode. The first input electrode 732 is coupled to second pin P2; feedback electrode 734 is coupled to first pin P1 through resistor Rg and resistor Re; and the second input electrode 736 is coupled to third pin P3. A resistor Rd has a first terminal coupled to a node 738 between resistor Re and first pin P1 and a second terminal coupled to the lower supply voltage. Similarly, resistor Rf has a first terminal coupled to a node 740 between resistor Re and resistor Rg and a second terminal coupled to a node 742 between the second input electrode 736 and the third pin P3. A capacitor C has a first terminal coupled to node 740 and a second terminal coupled to a node 744 between the first input electrode 732 and the second pin P2. [0026] Because piezo buzzer 722 is self-driving, the feedback provided from feedback electrode 734 will cause piezo buzzer 722 to vibrate at its resonant frequency. However, due to small differences between various piezoelectric buzzers and the variations that occur during fabrication of IC chip 720, the duty cycle of the signals provided to the first input electrode 732 and second input electrode 736 may not be at the desired fifty percent, so that piezo buzzer 722 is unable to provide the loudest possible sound. [0027] FIG. 1 depicts an electronic device 100 that includes a driver circuit 101 on an IC chip 102 and a piezo buzzer 104 according to an embodiment of the disclosure. IC chip 102 again contains first pin P1 that is coupled to the feedback electrode 106 of piezo buzzer 104 through first resistor R1 and second resistor R2, second pin P2 that is coupled to first input electrode 108, and third pin P3 that is coupled to second input electrode 110. Third resistor R3 has a first terminal that is coupled to a first node 122 between first pin P1 and first resistor R1 and a second terminal that is coupled to the lower supply voltage. Fourth resistor R4 has a first terminal that is coupled to a second node 124 that is between first resistor R1 and second resistor R2 and a second terminal that is coupled to a third node 126 between third pin P3 and second input electrode 110. A capacitor C1 is coupled between the second node 124 and a fourth node 128 between the second pin P2 and the first input electrode 108.
The external connections shown will help the piezo buzzer 104 achieve a resonant frequency. [0028] Driver circuit 101 differs from driver circuit 721 in that the analog-to-digital buffer 724 in driver circuit 721 is replaced by a comparator 112 that has a non-inverting input coupled to the first pin P1 to receive a horn feedback signal HORNFB from the feedback electrode 106 and also has an inverting input coupled to a threshold voltage selection circuit 114. The threshold voltage selection circuit 114 is coupled to provide a horn comparator threshold voltage Vhorn_thr that is programmable, as will be described in greater detail below. A first inverter 116 and a second inverter 118 are coupled in series between an output of comparator 112 and the second pin P2. Third inverter 120 is coupled between the output of comparator 112 and the third pin P3. It will be understood that although driver circuit 101 is shown with two inverters coupled to the second pin P2 and one inverter coupled to the third pin P3, the important relationship is that the signal presented on third pin P3 is inverted from the signal presented on second pin P2. This can be accomplished by having a first set of inverters and a second set of inverters, with one set having an odd number of inverters and a remaining set having an even number of inverters. [0029] During operation of driver circuit 101, threshold voltage selection circuit 114 is provided with a horn comparator threshold selection signal (not specifically shown) that directs threshold voltage selection circuit 114 to provide a selected horn comparator threshold voltage to comparator 112. When driver circuit 101 and piezo buzzer 104 are paired with each other, driver circuit 101 can be tested using two or more different values of the horn comparator threshold settings, each of which designates a corresponding available horn comparator threshold voltage. The horn comparator threshold setting that provides a duty cycle of the drive voltages that is closest to fifty percent can be used during operation of the electronic device 100. [0030] Threshold voltage selection circuit 114 would generally be tested and programmed at or near the time that IC chip 102 and piezo buzzer 104 are assembled together, so that the driver circuit 101 can be further tuned to elicit the loudest response from piezo buzzer 104. This programming can take a number of forms, one of which will be explained in greater detail below. By programming the horn comparator threshold voltage to a value that most nearly brings about a fifty percent duty cycle, the amplitude can be maximized and common mode voltage variation of the feedback analog signal can be cancelled or reduced. [0031] FIG. 1A depicts in greater detail one embodiment of an electronic device 100A that includes the disclosed driver circuit. In this embodiment, the electronic device 100A is a smoke alarm. An IC chip 132 containing a driver circuit 134 is coupled between a microcontroller 130 and a piezo buzzer 136. In the embodiment shown, IC chip 132 is able to control either a two-terminal piezo buzzer, i.e., a piezo buzzer that does not use feedback, or a three-terminal piezo buzzer, which does utilize feedback, although the capability to drive both types of piezo buzzer is not required for the disclosed embodiment. A horn selection signal HORN_SEL can be set to zero for a two-terminal piezo buzzer and to one for a three-terminal piezo buzzer.
Because the present application is directed to a piezo buzzer that utilizes feedback, some portions of IC chip 132 are not discussed in detail herein. Two comparators are provided in driver circuit 134: comparator 148 is used with a three-terminal piezo buzzer and will receive a horn comparator threshold voltage Vhorn_thr that is programmable, while comparator 149 is used with a two-terminal piezo buzzer and receives a fixed horn comparator threshold voltage. [0032] IC chip 132 is coupled to receive three signals from microcontroller 130 that are relevant to operation of the piezo buzzer or horn: serial data signal SDA, which is received on a serial data pin Psd, serial clock signal SCL, which is received on a serial clock pin Pcl, and pin-controlled horn enable signal HBEN, which is received on a horn enable pin Phb. Serial data signal SDA and serial clock signal SCL are both part of a messaging bus, which in one embodiment is an Inter-Integrated Circuit (I2C) bus. When a three-terminal piezo buzzer 136 is used, pin-controlled horn enable signal HBEN is used to turn on the piezo buzzer 136 when either smoke is detected or a test of the piezo buzzer is initiated. Serial data signal SDA and serial clock signal SCL are both received by a level shift circuit 138, which shifts serial data signal SDA and serial clock signal SCL from a microcontroller voltage VMCU to an internal voltage VINT and provides these two signals to a bus interface 142 in the digital core 140. In one embodiment, bus interface 142 is an I2C interface. [0033] I2C is a standard protocol for sending serial information from one IC to another IC and is the bus protocol used in one embodiment discussed herein. However, it will be understood that other protocols can also be used. In one embodiment, the microcontroller 130 sends a horn comparator threshold setting HORN_THR to bus interface 142 in the digital core 140 to indicate which of four possible threshold voltages should be used for the horn comparator threshold voltage Vhorn_thr. The bus interface 142 interprets the received horn comparator threshold setting HORN_THR and stores the value in one of a plurality of registers 144. The digital core 140 will pass the horn comparator threshold setting HORN_THR through a digital line connected to driver circuit 134. [0034] As seen in driver circuit 134, a resistor ladder 146 is coupled in series with a cutoff NFET M3T between a boosted voltage VBST and the lower supply voltage and each of a plurality of switches 147 has a first terminal that is coupled to a respective point on the resistor ladder 146 and a second terminal that can be selectively coupled to the inverting input of comparator 148. Together, resistor ladder 146 and the plurality of switches 147 provide one embodiment of threshold voltage selection circuit 114. When the 3-terminal option is selected, cutoff NFET M3T is turned on to provide the various horn comparator threshold voltages Vhorn_thr and digital core 140 sends the selected horn comparator threshold setting HORN_THR to close one of the switches in the plurality of switches 147 and provide a selected horn comparator threshold voltage Vhorn_thr to the inverting input of comparator 148.
In one embodiment, a horn comparator threshold setting HORN THR of “00” provides a horn comparator threshold voltage Vhorn thr that is 9% of boosted voltage VBST, a horn comparator threshold setting of “01” provides a horn comparator threshold voltage Vhorn thr that is 12% of boosted voltage VBST, a horn comparator threshold setting of “10” provides a horn comparator threshold voltage Vhorn thr that is 15% of boosted voltage VBST, and a horn comparator threshold setting of “11” provides a horn comparator threshold voltage Vhorn thr that is 18% of boosted voltage VBST. It will be understood that both the number and spread of possible voltages that can be selected are variables that can be adjusted as necessary or desired.[0035] When a three-terminal piezo buzzer is used with IC chip 132, the output of comparator 148 can be coupled to second pin P2 through amplifier 150 and can also be coupled to third pin P3 through delay buffer 152, inverter 154 and amplifier 156.[0036] Once a horn comparator threshold voltage Vhorn thr is selected, comparator 148 is set to transform the analog signal received on first pin P1 and to output a digital drive signal DRV. Digital drive signal DRV is provided to first amplifier 150, which sends the amplified signal to second pin P2 as first horn control signal HORN1. Digital drive signal DRV is also sent to delay buffer 152 and then to inverter 154, which provides a delayed, inverted version of digital drive signal DRV to second amplifier 156. Amplifier 156 sends a second horn control signal HORN2 to third pin P3.[0037] Electronic device 100A also includes first resistor R1, second resistor R2, third resistor R3, fourth resistor R4 and capacitor C1 arranged similarly to electronic device 100 seen in FIG. 1. These external components, in combination with the delay provided by delay buffer 152, are all part of driving the piezo buzzer 136 to its resonant frequency.[0038] An AND gate 162 is coupled to receive three signals - a register-controlled horn enable signal HORN EN, which is a register value, the pin-controlled horn enable signal HBEN, which can be provided by the microcontroller 130, and a horn selection signal HORN SEL, which has a value of zero for a two-terminal piezo buzzer and a value of one for a three-terminal piezo buzzer - and to provide a three-terminal enable signal 3T. Similarly, AND gate 164 is coupled to receive two signals - the inverse of horn selection signal HORN SEL and register-controlled horn enable signal HORN EN - and to provide a two-terminal enable signal 2T. An OR gate 160 is coupled to the outputs of both AND gate 162 and AND gate 164. The output of OR gate 160 is coupled to provide a driver enable signal DR EN to amplifiers 150, 156. Driver enable signal DR EN is also provided through inverter 166 to the gates of first NFET M1 and second NFET M2. First NFET M1 is coupled between the second pin P2 and the lower supply voltage, and second NFET M2 is coupled between the third pin P3 and the lower supply voltage. In one embodiment, a first discharge resistor Rd1 having a resistance of 120 kΩ is coupled between second pin P2 and first NFET M1, and a second discharge resistor Rd2 also having a resistance of 120 kΩ is coupled between third pin P3 and second NFET M2.[0039] OR gate 160, along with AND gates 162, 164, is used to enable the piezo horn driver 134 with either a three-terminal piezo buzzer or a two-terminal piezo buzzer.
With a three-terminal piezo buzzer, high values in each of the register-controlled horn enable signal HORN EN, the pin-controlled horn enable signal HBEN, and the horn selection signal HORN SEL are required to set the three-terminal signal 3T high. With a two-terminal piezo buzzer, a low value in horn selection signal HORN SEL and a high value in register-controlled horn enable signal HORN EN are required to set the two-terminal signal 2T high. A high signal on either of three-terminal signal 3T or two-terminal signal 2T causes driver enable signal DR EN to be high, which enables first amplifier 150 and second amplifier 156 and turns off first NFET M1 and second NFET M2. When neither the three-terminal signal 3T nor the two-terminal signal 2T is high, driver enable signal DR EN is low, which disables first amplifier 150 and second amplifier 156 and turns on first NFET M1 and second NFET M2. First NFET M1 and second NFET M2 provide a discharge path if, for example, the electronic device 100A is dropped, causing deformation in piezo buzzer 136 and generating a high voltage from piezo buzzer 136.[0040] FIGS. 2A-2D graphically depict bench measurements of the duty cycle of the first horn control signal HORN1 and the second horn control signal HORN2 used to drive piezo buzzer 104. In FIGS. 2A-2D, the horn comparator threshold voltage Vhorn thr is set successively to each of the four different settings. As seen in FIG. 2A, in which the horn comparator threshold setting HORN THR is set to “00”, the duty cycle of the first horn control signal HORN1 is greater than 50% and was measured at 54.44%. In FIG. 2B, with the horn comparator threshold setting HORN THR set to “01”, the duty cycle of the first horn control signal HORN1 is 51.55%. Similarly, in FIG. 2C, with the horn comparator threshold setting HORN THR set to “10”, the duty cycle of the first horn control signal HORN1 is 48.65%, and in FIG. 2D, with the horn comparator threshold setting HORN THR at “11”, the duty cycle of the first horn control signal HORN1 is 44.42%. Table 1 provides the information from FIGS. 2A-2D more succinctly and also clarifies that changing the value of the horn comparator threshold voltage Vhorn thr does not change the frequency of the driving signal, but significantly alters the duty cycle. Of the alternative values provided, the piezo buzzer used in this embodiment would receive a horn comparator threshold setting HORN THR of “10”, which yields the duty cycle closest to fifty percent, while other values may be appropriate for other piezo buzzers and circuits.Table 1[0041] FIG. 3 depicts a method 300 of operating a piezo buzzer in an electronic device such as a smoke alarm. The disclosed method requires testing of the piezo buzzer and driver circuit combination in order to determine the best horn comparator threshold setting HORN THR. This method was selected in order to keep the circuitry as simple as possible and the power requirements low. Duty cycle correction circuits are currently used in many public land mobile network (PLMN) circuits to provide self-correction of the duty cycle. In an embodiment of driver circuit 101 in which the circuit complexity and/or power requirements are less important, one of these duty cycle correction circuits could be adapted to replace the threshold voltage selection circuit 114 in driver circuit 101.
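Paragraphs [0040]-[0041] and Table 1 reduce to a simple search: program each available setting, measure the HORN1 duty cycle, and keep the setting whose duty cycle is closest to fifty percent, which is what method 300, described below, formalizes. A minimal C sketch follows, assuming the horn_set_threshold() helper sketched above and a bench-side measure_duty_cycle() stand-in; neither is a real API.

    #include <math.h>
    #include <stdint.h>

    /* Stand-ins for test-bench operations (assumptions, not real APIs). */
    extern int    horn_set_threshold(uint8_t setting);
    extern double measure_duty_cycle(void);   /* measured HORN1 duty, % */

    /* Try each available setting and keep the one closest to 50%. */
    uint8_t select_horn_threshold(uint8_t num_settings)
    {
        uint8_t best = 0;
        double  best_err = 100.0;

        for (uint8_t s = 0; s < num_settings; s++) {
            horn_set_threshold(s);
            double err = fabs(measure_duty_cycle() - 50.0);
            if (err < best_err) {
                best_err = err;
                best = s;
            }
        }
        horn_set_threshold(best);   /* leave the winning setting programmed */
        return best;
    }

Applied to the bench values above, the search returns setting “10”, whose 48.65% duty cycle is closer to fifty percent than the 51.55% obtained with setting “01”.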
Method 300 would typically be performed during assembly and testing of the electronic device and prior to making the electronic device available for use.[0042] Method 300 starts with coupling 305 a driver circuit for the piezo buzzer between a microcontroller and the piezo buzzer. In one embodiment, the driver circuit is part of an IC chip that provides both power and a number of detection circuits for a smoke alarm. The method continues with providing 310 a first horn comparator threshold setting of a plurality of horn comparator threshold settings to the driver circuit, followed by determining a first duty cycle of the piezo buzzer using the first horn comparator threshold setting. Next, the method continues with providing 315 a second horn comparator threshold setting of the plurality of horn comparator threshold settings to the driver circuit and determining a second duty cycle of the piezo buzzer using the second horn comparator threshold setting. If there are only two horn comparator threshold settings or if the first two horn comparator threshold settings closely bracket the desired fifty percent duty cycle, method 300 can conclude with selecting 320 a horn comparator threshold setting of the plurality of horn comparator threshold settings that provides a respective duty cycle that is closest to fifty percent.[0043] It can be noted that in one embodiment, a microcontroller provides the first horn comparator threshold setting, the second horn comparator threshold setting and the selected horn comparator threshold setting to the IC chip containing the driver circuit and does so over a bus using a bus protocol, of which I2C is one possible bus protocol. If additional horn comparator threshold settings are available and the desired closeness to a fifty percent duty cycle has not yet been established, then prior to selecting the programmable threshold, the method continues in FIG. 3A with providing 330 respective ones of the additional horn comparator threshold settings to the driver circuit and determining respective duty cycles of the piezo buzzer using the respective additional horn comparator threshold settings. Once all available or desired horn comparator threshold settings have been checked, a final selection can be made of the horn comparator threshold setting that provides a duty cycle closest to fifty percent.[0044] In one embodiment, the element of providing a respective horn comparator threshold setting includes the elements shown in FIG. 3B, i.e., coupling 340 a non-inverting input of a comparator in the driver circuit to a feedback signal from the piezo buzzer and controlling 345 a plurality of switches that each couples a respective horn comparator threshold voltage from a resistor ladder to an inverting input of the comparator. In one embodiment, the element of determining a respective duty cycle comprises activating 350 the driver circuit and measuring 355 a duty cycle of an output signal sent by the comparator.[0045] FIG. 4 depicts a block diagram of an electronic device that is a smoke alarm 400 incorporating a horn driver circuit 421 according to an embodiment of the disclosure. Smoke alarm 400 includes an IC chip 401 on which a number of circuits are implemented, including horn driver circuit 421, which can be implemented using the circuits shown in one of driver circuit 101 and driver circuit 134 and the method(s) as discussed in FIGS. 3-3C. 
IC chip 401 also includes a carbon monoxide detection circuit 404, a photo-detection circuit 406, an optional ion detection circuit 408, and the horn driver 421. In one embodiment, photo-detection circuit 406 includes a first light-emitting diode (LED) driver 412 and a second LED driver 414. Carbon monoxide detection circuit 404 is coupled to a plurality of CO-detection pins 405; photo-detection circuit 406 is coupled to a plurality of photo-detection pins 407; and horn driver 421 is coupled to first pin P1, second pin P2 and third pin P3. Multiplexor 410, which is coupled to a fourth pin P4 that is part of a plurality of microcontroller pins 413, can receive input signals from each of carbon monoxide detection circuit 404 and photo-detection circuit 406. When optional ion detection circuit 408 is provided, ion detection circuit 408 is coupled to a plurality of ion-detection pins 409 and multiplexor 410 is also coupled to receive input signals from ion detection circuit 408. A piezo buzzer, shown here as horn 429, is coupled to first pin P1, second pin P2 and third pin P3.[0046] A number of power supply pins are noted in IC chip 401. A pre-regulator circuit 420 is coupled to fifth pin P5, which is coupled, external to IC chip 401, to an AC/DC converter 432 that can be coupled to receive voltage Vcc. Pre-regulator circuit 420 is also coupled to sixth pin P6 (coupling not specifically shown) to receive a lower supply voltage. A DC/DC boost converter 402 is coupled to seventh pin P7 to receive power from battery BAT through an inductor L and is also coupled to eighth pin P8 to provide a boosted voltage Vbst from the battery power. Eighth pin P8 is also coupled to fifth pin P5, which provides the boosted voltage Vbst to pre-regulator circuit 420 when battery power is relied on. Sixth pin P6 is coupled to a ground plane, although the internal connections to the circuits are not specifically shown. [0047] Pre-regulator circuit 420 provides a pre-regulator output voltage Vprereg, which will be used to provide a clamped supply voltage for internal circuits on IC chip 401. The pre-regulator output voltage Vprereg can be distributed to microcontroller (MCU) LDO regulator 416, internal LDO regulator 418 and Vcc divider 419. MCU LDO regulator 416 provides a supply voltage to MCU 430 and the I/O buffers (not specifically shown); internal LDO regulator 418 provides a supply voltage to internal circuits such as the digital logic core and the analog blocks, e.g., the carbon monoxide detection circuit 404, photo-detection circuit 406 and ion detection circuit 408; and Vcc divider 419 provides a supply voltage to multiplexor 410.[0048] In smoke alarm 400, carbon monoxide detection circuit 404 is coupled to carbon monoxide sensor 422 through the plurality of CO-detection pins 405; photo-detection circuit 406, which can include first LED driver 412 and second LED driver 414, is coupled to photo sensor 424 and LEDs 426 through the plurality of photo-detection pins 407; ion detection circuit 408 is coupled to ion sensor 428 through the plurality of ion-detection pins 409; and horn driver 421 is coupled to a horn 429 through first pin P1, second pin P2 and third pin P3. The carbon monoxide sensor 422, photo sensor 424 and ion sensor 428 collect the information needed to detect smoke and carbon monoxide in the area, while horn 429 provides a loud audible alert when smoke or carbon monoxide is detected.
IC chip 401 is also coupled to microcontroller 430 through the plurality of microcontroller pins 413, with IC chip 401 supplying both power and information to microcontroller 430 and receiving instructions to control various aspects of operation of smoke alarm 400. The fourth pin P4, which is part of the plurality of microcontroller pins 413, provides a path for the multiplexor 410 to provide the outputs of the carbon monoxide detection circuit 404, photo-detection circuit 406, and ion detection circuit 408 to MCU 430.[0049] Applicants have demonstrated that the ability to change the voltage threshold to which the horn feedback signal is compared changes the duty cycle of the driving signal and can be used to compensate for part-to-part variations in the common mode and amplitude of the feedback signal from the piezo buzzer. In combination with the current practices of automatically tuning the frequency and setting the amplitude of the signal, adjusting the duty cycle can further tune each piezo buzzer to the maximum loudness the piezo buzzer is capable of providing. Setting the duty cycle for the piezo buzzer can become a routine part of assembling an electronic device that utilizes the piezo buzzer, e.g., a smoke alarm. Applicants have further demonstrated an electronic device containing a driver circuit that provides a programmable horn voltage threshold. The electronic device can be a circuit, an IC chip, or a system such as a smoke alarm that includes a piezo buzzer.[0050] Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. None of the above Detailed Description should be read as implying that any particular component, element, step, act, or function is essential such that it must be included in the scope of the claims. Reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more." All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Accordingly, those skilled in the art will recognize that the exemplary embodiments described herein can be practiced with various modifications and alterations within the spirit and scope of the claims appended below.
A Group III-V semiconductor device and method of fabrication are described. A high-k dielectric is interfaced to a confinement region by a chalcogenide region.
1. A method of fabricating a semiconductor device comprising: growing (50) a first region of a Group III-V compound; growing (51) a confinement region (20, 32) on the first region, the confinement region comprising a III-V material having a band gap wider than the band gap of the first region; forming (53) a chalcogen region (21) on the confinement region; forming (54) a dielectric region (22, 30) on the chalcogen region, the chalcogen region comprising an [S]n bridge or an [Se]n bridge, where n is 2, between the confinement region (32) and the dielectric region (30); and forming (55) a metal gate (23) on the dielectric region (22, 30).
2. The method of claim 1, including the removal (52) of a native oxide from the confinement region prior to forming the chalcogen region.
3. The method of claim 1, wherein the first region comprises InSb or InP.
4. The method of claim 3, wherein the dielectric region (22) comprises a high-k dielectric.
5. The method of claim 4, wherein the high-k dielectric comprises HfO2.
6. The method of claim 1, wherein forming the first region comprises forming an InSb well, wherein forming the confinement region comprises forming an AlInSb confinement region on the InSb well, wherein forming (54) the dielectric region (22) comprises forming an Al2O3 layer by an atomic layer deposition (ALD) process using the precursors trimethylaluminum and water, the method further comprising removing native oxide from a surface of the AlInSb region.
7. The method defined by claim 6, wherein the InSb is formed on an underlying layer of AlInSb, and wherein the forming of the AlInSb region (70) includes forming a donor region of Si or Te.
8. A semiconductor device comprising: a compound of Group III-V elements in a first region; a confinement region (20) disposed on the first region, the confinement region comprised of a III-V material and having a wider band gap than the first region; a chalcogen region (21) disposed on the confinement region; characterized by a high-k dielectric (22) disposed on the chalcogen region (21), the chalcogen region comprising an [S]n bridge (33) or an [Se]n bridge (33), where n is 2, between the confinement region and the dielectric region; and a metal gate (23) disposed on the dielectric.
9. The device defined by claim 8, wherein the dielectric (22) comprises an Al2O3 layer, wherein the confinement region (70) comprises AlInSb, wherein the Al2O3 layer is disposed on the AlInSb region and said metal gate (23) is an Al gate which is disposed on the Al2O3 region, and wherein the first region comprises InSb.
10. The device of claim 9, wherein the Al2O3 layer has a thickness less than 3 nm.
11. The device of claim 9 or 10, wherein the confinement region includes a region doped with Si or Te.
12. The device of claim 11, including source and drain contacts (76, 77) disposed on opposite sides of the Al gate (78).
13. The device of claim 12, wherein the Al gate (88) is recessed into the AlInSb region (92) to provide an enhancement mode transistor.
FIELD OF THE INVENTION The invention relates to the field of Group III-V semiconductor devices. PRIOR ART AND RELATED ART Most integrated circuits today are based on silicon, a Group IV element of the periodic table. Compounds of Group III-V elements such as gallium arsenide (GaAs), indium antimonide (InSb), and indium phosphide (InP) are known to have semiconductor properties far superior to those of silicon, including higher electron mobility and saturation velocity. Unlike the Group III-V compounds, silicon easily oxidizes to form an almost perfect electrical interface. This gift of nature makes possible the near total confinement of charge with a few atomic layers of silicon dioxide. In contrast, oxides of Group III-V compounds are of poor quality; for instance, they contain defects, trap charges, and are chemically complex.Quantum well field-effect transistors (QWFETs) have been proposed based on a Schottky metal gate and an InSb well. They show promise in lowering active power dissipation compared to silicon-based technology, as well as improved high frequency performance. Unfortunately, the off-state gate leakage current is high because of the low Schottky barrier from Fermi level pinning of the gate metal on, for example, an InSb/AlInSb surface.The use of a high-k gate insulator has been proposed for QWFETs. See, as an example, Serial No. 11/0208,378, filed January 3, 2005, entitled "QUANTUM WELL TRANSISTOR USING HIGH DIELECTRIC CONSTANT DIELECTRIC LAYER." However, there are problems in interfacing between a high-k material and, for instance, the InSb/AlInSb surface.Document JP 05 090252 A discloses that the surface of a compound semiconductor of indium phosphide or the like is cleaned with an organic agent to remove a natural oxide film, and then an insulating film of silicon nitride or the like is formed on the surface of the compound semiconductor.Document "Novel InSb-based Quantum Well Transistors for Ultra-High Speed, Low Power Logic Applications" by Ashley, T., et al., Proceedings of the 7th International Conference on Solid-State and Integrated Circuits Technology, Beijing, China, 18-21 October 2004, IEEE, pages 2253-2256, XP010805631, ISBN 0-7803-8511-X, discloses InSb-based quantum well field-effect transistors with gate lengths down to 0.2 µm. SUMMARY There is provided a method of fabricating a semiconductor device as set out in claim 1, and a semiconductor device as set out in claim 8.
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 illustrates a prior art high k dielectric interface between a silicon substrate and a metal gate.Figure 2 illustrates the interface between a Group III-V confinement region and a metal gate, including a high k dielectric and a chalcogenide region as described below.Figure 3 illustrates a confinement region interfaced with a high k dielectric through a chalcogenide region.Figure 4A illustrates a diphenyl-disulfide compound, with the phenyls being replaced.Figure 4B illustrates the compound of Figure 4A in place between the confinement region and a high k dielectric.Figure 5 illustrates the process carried out for forming a metal gate in a Group III-V semiconductor device.Figure 6 is a graph illustrating the benefits of using a high k dielectric on the gate leakage when compared to a Schottky metal gate.Figure 7 is a cross-sectional, elevation view of a semiconductor device with an alumina (Al2O3) high k dielectric layer.Figure 8 is a cross-sectional, elevation view of a Group III-V semiconductor device with a high k dielectric and a recessed metal gate. DETAILED DESCRIPTION Throughout the description, the following conversion factor has to be taken into account: 10 Å = 1 nm. Processes and devices are described in connection with interfacing a high k dielectric with a Group III-V confinement region in a semiconductor device. In the following description, numerous specific chemistries are described, as well as other details, in order to provide a thorough understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details. However, the present invention is limited to the specific chemistries defined in claims 1 and 8. In other instances, well-known processing steps are not described in detail in order not to unnecessarily obscure the present invention.Figure 1 illustrates an interface between a metal gate 13 and a monocrystalline silicon body or substrate 10. Most typically, the silicon 10 comprises the channel region of a field-effect transistor with a gate 13 for controlling the transistor. Such devices perform particularly well when the equivalent oxide thickness (EOT) of the insulation between the channel region and gate is in the range of 5-30 Å and preferably in the range of 10-15 Å. While silicon dioxide (SiO2) provides an excellent dielectric, with layers this thin it is difficult to obtain a reliable silicon dioxide dielectric. Rather, high k materials (e.g., dielectric constant of 10 or greater) are used. As shown in Figure 1, a silicon dioxide region 11 is first formed (or is native) on the silicon 10. Then, a high k dielectric 12 such as hafnium dioxide (HfO2) is formed on the silicon dioxide region 11. Next, a metal gate, typically with a targeted work function, is formed on the high k dielectric. The high k dielectric such as HfO2 or zirconium dioxide (ZrO2) provides an excellent interface. The high k dielectric may be formed in a low temperature deposition process utilizing an organic precursor, such as an alkoxide precursor for the HfO2 deposition in an atomic layer deposition (ALD) process. The metal gate, formed with electron beam evaporation or sputtering, may be platinum, tungsten, palladium, molybdenum, or another metal.The EOT, as shown to the right of the structure of Figure 1, includes approximately 4 Å associated with the upper surface of the silicon 10, resulting from defects near the surface of the monocrystalline structure.
Above this, approximately 5 Å of silicon dioxide region 11 is shown. Then the high k dielectric is formed in the ALD process; its EOT is 3-4 Å. The resultant EOT for the structure shown in Figure 1 is 12-13 Å.To the left of the structure of Figure 1, the physical thickness (PT) of the regions is shown. As can be seen, the high k dielectric is relatively thick (approximately 20 Å) when compared to the SiO2 region 11. This relatively thick region allows the formation of a reliable, high quality dielectric with a low EOT (3-4 Å).As mentioned earlier, it is difficult to produce the corresponding interface to the structure of Figure 1 where a Group III-V compound is used. The oxides formed from these compounds are of poor quality and do not adhere well to the high k dielectric.In Figure 2, the interface, more fully described below, between a Group III-V compound and a high k dielectric is illustrated. A Group III-V region 20 is illustrated with the bridging sulfur (S) atoms of the interface region 21, as one embodiment of a chalcogenide interface region. As will be described, these bridging atoms allow a better match to the high k dielectric region 22, illustrated as HfO2 for one embodiment.The EOT for the structure of Figure 2 includes approximately 6 Å associated with the upper surface of the Group III-V compound, such as a confinement region 20, and particularly native oxides on this region which are not entirely removed, as well as lattice defects in the confinement region. The interface 21 may be a chalcogenide such as oxygen (O), S, selenium (Se), or tellurium (Te) (oxygen and tellurium not forming part of the present invention). (The heavier chalcogenide polonium (Po) is not favored because of its radioactivity.) The EOT of the interface region 21 is approximately 3 Å for the illustrated embodiment, corresponding to a few atomic layers. The PT for this region is 3-10 Å. Above this, a high k dielectric region 22 is formed having a PT of approximately 20 Å and an EOT of 3-4 Å. Finally, a metal gate 23, similar to the metal gate 13 of Figure 1, is used.In a typical transistor, a quantum well of, for instance, InSb is confined between metamorphic buffer or confinement layers (e.g., AlInSb). These layers have a higher band gap than the well to mitigate the effects of the narrow band gap of the quantum well on device leakage and breakdown.In Figure 3, the chalcogenide interface region is again shown between dielectric region 30 and a Group III-V confinement region 32. The chalcogenide is represented by "X," with the number of atomic layers shown as "n." For oxygen (not forming part of the present invention), n is typically greater than 1, for example three. A sterically hindered oxidizing agent (e.g., di-tert-butyl peroxide or di-iso-propyl peroxide) may be used to deliver an O-containing substituent with a bulky leaving group (e.g., O-tBu) which also reacts favorably with a standard ALD precursor. This prevents further reactivity with the atmosphere. For S or Se, n is preferably equal to 1, 2 or 3. This film may be deposited from a monovalent dimer. Any one of a plurality of di-organic di-chalcogenide species can be used.In Figure 5, a process is illustrated beginning with the growth of the Group III-V quantum well 50, which typically occurs over a first confinement layer. Again, as mentioned, the Group III-V well may comprise InSb or InP. In another process 51, the confinement region or layer is formed on the quantum well. This corresponds to, for instance, region 20 of Figure 2.
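For reference, the EOT bookkeeping in the preceding paragraphs follows the standard definition (a textbook relation, not a formula stated in this document):

    EOT = t_phys × (κ_SiO2 / κ) = t_phys × (3.9 / κ)

A physical thickness of approximately 20 Å with an EOT of 3-4 Å therefore implies κ ≈ 20 × 3.9 / (3 to 4) ≈ 20-26, consistent with a high k dielectric such as HfO2, and the stack totals quoted above are simple sums of the per-region EOT contributions (e.g., 4 Å + 5 Å + 3-4 Å ≈ 12-13 Å for the structure of Figure 1).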
The confinement layers are typically a material compatible with the well but with a larger bandgap. For a well of InSb, the metalloid AlInSb may be used. The processes 50 and 51 may be carried out using molecular beam epitaxy or metal organic chemical vapor deposition, by way of example.Prior to forming a chalcogenide layer, the native oxide and any other oxide on the confinement layer are removed. The process 52 of Figure 5 may be carried out by treating the surface with an acid, for instance, citric acid, HCl or HF.Next, as shown by process 53, the chalcogenide layer is formed. This formation is shown for one embodiment in conjunction with Figures 4A and 4B. In Figure 4A, a compound of di-phenyl-disulfide is shown, which ultimately leaves a chalcogenide film juxtaposed between the metalloid-containing Group III-V confinement region and the high k dielectric. Other di-chalcogenides may be used, such as di-selenides.In the case of the di-phenyl, one phenyl is shown displaced by an antimony atom of the confinement layer, and the other by, for instance, an Hf or Al atom from one of the precursors used in the formation of the high k dielectric. This leaves, as shown in Figure 4B, the S bridging atoms where the di-chalcogenide comprises S. Thus, one of the phenyl groups is replaced during the process 53, and the other during the process 54 of Figure 5, by the precursors for the high k dielectric. The same result can be achieved with the other di-chalcogenides. Ordinary precursors for the formation of the HfO2 or ZrO2 may be used.In one embodiment, the confinement layer is AlInSb, as mentioned. Where this is used, Al2O3 may be used as the high k dielectric to minimize valence mismatch. The Al2O3 may be deposited using trimethylaluminum (TMA) and water precursors with an ALD process.Finally, as shown in Figure 5, a metal gate deposition 55 occurs. Again, ordinary processing may be used to form the gate. Since the Group III-V material may have a low melting point, for example 525 °C for InSb, ALD is used in one embodiment for the gate deposition. Where Al2O3 is used as the high k dielectric, an aluminum gate may be used to provide more compatibility.Figure 6 illustrates the reduction in gate leakage obtained by using a high k dielectric such as Al2O3 and a metal gate, as opposed to a Schottky metal gate. As can be readily seen in Figure 6, the leakage is several orders of magnitude lower with a high k dielectric. The results of Figure 6 are for an aluminum gate, Al2O3 dielectric, AlInSb confinement layer and an InSb quantum well.Figure 7 illustrates the structure of a transistor that may be fabricated with the above-described processing. This embodiment is particularly suitable for a depletion mode-like device since, for instance, the gate is not embedded into the confinement layer as it is for the device of Figure 8. A lower confinement region, in one embodiment, comprises an Al0.15In0.85Sb layer 70, which is formed, for example, on a semi-insulating GaAs substrate. Then, the quantum well 72 of, for instance, InSb is grown on the lower confinement layer. Next, the upper confinement layer 73 comprising, in one embodiment, Al0.20In0.80Sb is formed. This layer includes a donor region, more specifically, in one embodiment, a Te doped region. The Te doping supplies carriers to the quantum well 72. The multilayer structure of Figure 7 may be grown using molecular beam epitaxy or metal organic chemical vapor deposition.
The doped donor region is formed by allowing Te (or Si) dopants to flow into the molecular beam epitaxy chamber from, for example, a solid source.The thickness of the layer 73, along with the work function of the gate 78, determines the threshold voltage of the transistor and, as mentioned earlier, provides, for the embodiment of Figure 7, a depletion mode-like device. A lower work function is thus selected for the gate to reduce the threshold voltage. A source contact 76 and drain contact 77 are also illustrated in Figure 7, along with an aluminum gate 78. By way of example, in one embodiment, layer 70 may be 3 µm thick, the quantum well 72 may be 20 nm thick, the confinement layer 73 may be 5 nm thick, and the Te δ-doped donor region may be doped to a level of 1-1.8 × 10¹² cm⁻², with µ equal to 18,000-25,000 cm² V⁻¹ s⁻¹ and a gate length of 85 nm.Figure 8 illustrates another embodiment with a recessed gate for increasing the threshold voltage to provide a more enhancement mode-like device. Again, there is a higher-band-gap lower confinement layer 80, a quantum well 81, and two doped upper confinement layers 91 and 92 separated by an etchant stop layer 90. Both layers 91 and 92 are doped, as shown by the δ doping planes 82 and 89, respectively. The high k dielectric 87 is recessed into the layer 92, as is the metal gate 88. It is this recessing and the selection of the work function metal for the gate 88 that provide the increased threshold voltage. The layer thicknesses, doping levels, etc. may be the same as for the embodiment of Figure 7. The additional layer 92 may have a thickness of 45 nm.Thus, an interface, in several embodiments, between a Group III-V confinement region and a high k dielectric region has been described, along with devices using the interface.
A memory cell, e.g., a flash memory cell, includes a substrate, a flat-topped floating gate formed over the substrate, and a flat-topped oxide region formed over the flat-topped floating gate. The flat-topped floating gate may have a sidewall with a generally concave shape that defines an acute angle at a top corner of the floating gate, which may improve a program or erase efficiency of the memory cell. The flat-topped floating gate and overlying oxide region may be formed without a floating gate thermal oxidation that forms a conventional "football oxide." A word line and a separate erase gate may be formed over the floating gate and oxide region. The erase gate may overlap the floating gate by a substantially greater distance than the word line overlaps the floating gate, which may allow the program and erase coupling to the floating gate to be optimized independently.
CLAIMS
1. A method of forming a memory cell, the method comprising: forming a poly layer over a substrate; forming a patterned mask that covers a first portion of the poly layer and exposes a flat-topped second portion of the poly layer having a flat top surface; depositing an oxide layer over the exposed flat-topped second portion of the poly layer; removing portions of the poly layer to define a flat-topped floating gate including the second portion of the poly layer; depositing a spacer layer over the flat-topped floating gate and the oxide layer; and performing a source implant in the substrate adjacent the flat-topped floating gate, wherein the spacer layer shields the underlying flat-topped floating gate from the source implant.
2. The method of Claim 1, wherein the method is performed without a floating gate thermal oxidation.
3. The method of any of Claims 1-2, wherein the oxide layer has a flat bottom surface in contact with the flat top surface of the floating gate, and a flat top surface.
4. The method of any of Claims 1-3, comprising performing a chemical mechanical planarization (CMP) to define the flat top surface of the oxide layer.
5. The method of any of Claims 1-4, wherein the floating gate has at least one sidewall having a generally concave shape.
6. The method of Claim 5, wherein the generally concave shape of the floating gate sidewall defines an acute angle at a top corner of the floating gate, which improves a program or erase efficiency of the memory cell.
7. The method of any of Claims 1-6, wherein the spacer layer comprises a nitride layer having a thickness of less than 300 Å.
8. The method of any of Claims 1-7, further comprising forming a word line and a separate erase gate over the floating gate.
9. The method of Claim 8, wherein the word line overlaps the floating gate by a word line overlap distance and the erase gate overlaps the floating gate by an erase gate overlap distance substantially larger than the word line overlap distance.
10. The method of Claim 9, wherein the erase gate overlap distance is at least three times as great as the word line overlap distance.
11. The method of any of Claims 1-10, wherein the memory cell comprises a flash memory cell.
12. A memory cell formed by a process comprising: forming a poly layer over a substrate; forming a patterned mask that covers a first portion of the poly layer and exposes a flat-topped second portion of the poly layer having a flat top surface; depositing an oxide layer over the exposed flat-topped second portion of the poly layer; removing portions of the poly layer to define a flat-topped floating gate including the second portion of the poly layer; depositing a spacer layer over the flat-topped floating gate and the oxide layer; and performing a source implant in the substrate adjacent the flat-topped floating gate, wherein the spacer layer shields the underlying flat-topped floating gate from the source implant.
13. The memory cell of Claim 12, wherein the memory cell is formed without performing a floating gate thermal oxidation.
14. The memory cell of any of Claims 12-13, wherein a generally concave shape of a sidewall of the floating gate defines an acute angle at a top corner of the floating gate, which improves a program or erase efficiency of the memory cell.
15. The memory cell of any of Claims 12-14, wherein the process of forming the memory cell further comprises forming a word line and a separate erase gate over the floating gate.
16. A flash memory cell, comprising: a substrate; a flat-topped floating gate formed over the substrate and having a flat top surface; an oxide layer formed over the flat-topped floating gate; and a doped source region in the substrate adjacent the floating gate and extending partially under the floating gate.
17. The flash memory cell of Claim 16, wherein the oxide layer is flat-topped.
18. The flash memory cell of any of Claims 16-17, wherein a generally concave shape of a sidewall of the floating gate defines an acute angle at a top corner of the floating gate.
19. The flash memory cell of any of Claims 16-18, further comprising a word line and a separate erase gate formed over the floating gate.
20. The flash memory cell of Claim 19, wherein the word line overlaps the floating gate by a first distance and the erase gate overlaps the floating gate by a second distance substantially larger than the first distance.
21. A memory cell formed by any of the methods of Claims 1-11.
22. A semiconductor device, including any of the memory cells of Claims 12-21.
MEMORY CELL WITH A FLAT-TOPPED FLOATING GATE STRUCTURE RELATED PATENT APPLICATION This application claims priority to commonly owned United States Provisional Patent Application No. 62/613,036 filed January 2, 2018, which is hereby incorporated by reference herein for all purposes.TECHNICAL FIELD The present disclosure relates to memory cells, e.g., flash memory cells, and more particularly, to a memory cell having a flat-topped floating gate structure.BACKGROUND Certain memory cells, e.g., flash memory cells, include at least one floating gate programmed and erased through one or more program/erase gates, word lines, or other conductive element(s). Some memory cells use a common program/erase gate extending over a floating gate to both program and erase the cell. In some implementations, the floating gate is formed by a Poly1 layer, while the program/erase gate is formed by a Poly2 layer that partially overlaps the underlying Poly1 floating gate in the lateral direction. For some memory cells, the manufacturing process includes a floating gate thermal oxidation process that forms a football-shaped oxide over the Poly1 floating gate, as discussed below.Figure 1 illustrates a partial cross-sectional view of an example memory cell 10A, e.g., a flash memory cell, including a Poly1 floating gate 14 and an overlying football-shaped oxide region (“football oxide”) 16 formed over a substrate 12, and a Poly2 gate 18 (e.g., a word line, erase gate, or common program/erase gate) extending partially over the floating gate 14. The football oxide 16 is formed over the floating gate 14 by a thermal oxidation process on floating gate 14, which defines upwardly-pointing tips 15 at the edges of floating gate 14. These FG tips 15 may define a conductive coupling to adjacent program/erase gates, e.g., the Poly2 gate 18 shown in Figure 1.After forming the floating gate 14 and football oxide 16, a source dopant implant may be performed, which is self-aligned by the lateral edge of the floating gate 14, followed by an anneal process that diffuses the source dopant outwardly such that the resulting source region extends partially under the floating gate 14, as shown in Figure 1. However, during the source dopant implant, a portion of the dopant may penetrate through the football oxide 16 and into the underlying floating gate 14, which may result in a dulling or blunting of one or more floating gate tips 15, e.g., after subsequent oxidation steps (wherein the dopant absorbed in the floating gate 14 promotes oxidation of the floating gate tips 15). This dulling or blunting of the floating gate tip(s) 15 may decrease the efficiency of erase and/or program operations of the memory cell 10A.Figures 2A and 2B illustrate example cross-sections taken at selected times during a conventional manufacturing process for the conventional memory cell 10A shown in Figure 1, e.g., a flash memory cell including multiple floating gates. As shown in Figure 2A, a Poly1 layer 30 may be deposited over a silicon substrate. A nitride layer may then be deposited and patterned using known techniques to form a hard mask 32. As shown in Figure 2B, a floating gate oxidation process may then be performed, which forms a football oxide 16 over areas of the Poly1 layer 30 exposed through the nitride mask 32 (which subsequently defines the floating gates 14).
The nitride mask 32 may subsequently be removed, followed by a plasma etch to remove portions of the Poly1 layer 30 uncovered by each football oxide 16, which defines the lateral extent of each floating gate 14. This may be followed by a source implant and/or formation of a Poly2 layer (e.g., to form a word line, erase gate, coupling gate, etc.), depending on the particular implementation.Figure 3 illustrates another example mirrored memory cell 10B (e.g., a SuperFlash cell) including two spaced-apart floating gates 14, a word line 20 formed over each floating gate 14, a common erase gate or “coupling gate” 22 formed between and extending over both floating gates 14 (such that the program and erase couplings to each respective floating gate 14 are decoupled), and a source region formed below the common erase gate. In this cell, the source region may be formed before forming the word lines 20 and the coupling gate 22. During the source implant, the portions of each floating gate 14 that are not masked by resist are relatively unprotected, such that a portion of the source dopant may penetrate through each football oxide 16 and into each underlying floating gate 14, which may result in a dulling or blunting of the floating gate tips 15 located over the source region, as discussed above.SUMMARY Embodiments of the present disclosure provide a memory cell (e.g., flash memory cell) and method for forming a memory cell having at least one flat-topped floating gate and oxide cap (which may also be flat-topped). In some embodiments, the memory cell may be formed without performing a floating gate thermal oxidation, which is performed in conventional techniques to produce the conventional football oxide over the floating gate. The elimination of the floating gate thermal oxidation step, and the resulting flat-topped floating gate and oxide cap, may provide various advantages over conventional processes and memory cells, as discussed herein.Embodiments of the present invention may provide any or all of the following advantages. First, in some embodiments, the size of the floating gate as defined by openings etched in FG nitride does not grow. Thus, oxide encroachment under the edges of FG nitride during thermal oxidation may be reduced or eliminated. Further, the nitride spacer conventionally used to protect the FG tips during HVII (High Voltage Ion Implant) of the source region may be reduced in thickness or completely eliminated. Further, a thinner (or omitted) spacer moves the HVII closer to the FG edge, and may thus allow a lower HVII implant energy to be used.In addition, embodiments may provide an improvement in program/erase efficiency, which may allow the use of lower operating voltages (e.g., medium voltage (MV) devices instead of high voltage (HV) devices). The elimination of HV devices may simplify the process flow (reduce cost) and allow for further cell shrink. Further, disclosed processes may provide improved control of the cell in photolithography. The cell may have a strong sensitivity to the poly2-to-poly1 overlap, making it an important control parameter in the fab. The proposed approach may reduce the criticality of this alignment because the coupling of the poly2 to the poly1 may be set by the sidewall alone.
The top surface of the poly2 may be spaced away from the floating gate poly1 by the thick oxide layer, e.g., as shown in Figure 4 discussed below.Some embodiments allow for varying the thickness or doping of the poly1 independent of the memory cell, e.g., as defined by requirements for a poly2-poly1 capacitor. In contrast, the conventional approach sets a narrow boundary on these poly1 floating gate parameters, which are typically set to achieve a certain shape of the football oxidation created over the floating gate to create a sharp poly1 tip for erase efficiency.One embodiment provides a method of forming a memory cell, including forming a poly layer over a substrate; forming a patterned mask that covers a first portion of the poly layer and exposes a flat-topped second portion of the poly layer having a flat top surface; depositing an oxide layer over the exposed flat-topped second portion of the poly layer; removing portions of the poly layer to define a flat-topped floating gate including the second portion of the poly layer; depositing a spacer layer over the flat-topped floating gate and the oxide layer; and performing a source implant in the substrate adjacent the flat-topped floating gate, wherein the spacer layer shields the underlying flat-topped floating gate from the source implant.The method may be performed without performing a floating gate thermal oxidation, which is performed in conventional techniques to produce the conventional “football” oxide over the floating gate.In some embodiments, the oxide layer is deposited over the exposed flat-topped second portion of the poly layer using an HDP (High Density Plasma) oxide deposition.In some embodiments, the oxide layer has a flat bottom surface in contact with the flat top surface of the floating gate, and a flat top surface. A chemical mechanical planarization (CMP) may be performed to define the flat top surface of the oxide layer.In some embodiments, the floating gate has at least one sidewall having a generally concave shape. The generally concave shape of the floating gate sidewall may define an acute angle at a top corner of the floating gate, which improves a program or erase efficiency of the memory cell.In some embodiments, the patterned mask comprises nitride. Further, in some embodiments, the spacer layer comprises a nitride layer having a thickness of less than 300 Å, e.g., in the range of 150-250 Å.The method may further include forming a word line and a separate erase gate over the floating gate. In some embodiments, the word line overlaps the floating gate by a first distance and the erase gate overlaps the floating gate by a second distance substantially larger than the first distance.
For example, the second distance may be at least 1.5 times, at least 2 times, at least 3 times, at least 4 times, at least 5 times, at least 6 times, at least 7 times, at least 8 times, at least 9 times, or at least 10 times as great as the first distance.In some embodiments, the memory cell comprises a flash memory cell, e.g., a SuperFlash memory cell.Other embodiments provide a memory cell formed by the process disclosed above, e.g., a process including forming a poly layer over a substrate; forming a patterned mask that covers a first portion of the poly layer and exposes a flat-topped second portion of the poly layer having a flat top surface; depositing an oxide layer over the exposed flat-topped second portion of the poly layer; removing portions of the poly layer to define a flat-topped floating gate including the second portion of the poly layer; depositing a spacer layer over the flat-topped floating gate and the oxide layer; and performing a source implant in the substrate adjacent the flat-topped floating gate, wherein the spacer layer shields the underlying flat-topped floating gate from the source implant.Thus, embodiments of the present invention provide a memory cell, e.g., a flash memory cell, that is formed without the floating gate thermal oxidation that is performed in conventional techniques to produce the conventional “football oxide” over the floating gate.Other embodiments provide a memory cell, e.g., a flash memory cell, including a substrate, a flat-topped floating gate formed over the substrate and having a flat top surface, an oxide layer formed over the flat-topped floating gate, and a doped source region in the substrate adjacent the floating gate and extending partially under the floating gate. The memory cell may include a word line and a separate erase gate formed over the floating gate, wherein the word line overlaps the floating gate by a first distance and the erase gate overlaps the floating gate by a second distance substantially larger than the first distance.BRIEF DESCRIPTION OF THE DRAWINGS Example aspects of the present disclosure are described below in conjunction with the figures, in which:Figure 1 illustrates a partial cross-sectional view of an example conventional memory cell including a Poly1 floating gate, a “football oxide” formed over the floating gate, and a Poly2 common program/erase gate extending partially over the floating gate.Figures 2A and 2B illustrate example cross-sections taken at selected times during a conventional process for forming floating gates with a conventional “football oxide” over each floating gate.Figure 3 illustrates an example mirrored memory cell (e.g., a SuperFlash cell) including two floating gates, a word line formed over each floating gate, and a common erase gate formed over both floating gates, wherein the floating gate tips under the common erase gate may be dulled or blunted by conventional processing steps.Figure 4 illustrates a cross-section of an example memory cell structure including a floating gate with an overlying flat-topped oxide region including a “football oxide” and an additional oxide deposit, according to one embodiment of the present invention.Figure 5 illustrates an example process for forming the example memory cell structure shown in Figure 4, according to one embodiment.
Figure 6 illustrates a cross-section of an example memory cell structure including a flat-topped floating gate with an overlying flat-topped oxide region, according to one embodiment of the present invention.Figure 7 illustrates an example process for forming the example memory cell structure shown in Figure 6, according to one embodiment.Figure 8 illustrates another example process for forming the example memory cell structure shown in Figure 6, according to one embodiment.Figure 9 illustrates a cross-section of an example memory cell including a flat-top floating gate, a flat-top oxide cap over the flat-top floating gate, and a word line and erase gate formed over the floating gate, according to one embodiment.DETAILED DESCRIPTION Embodiments of the present disclosure provide a memory cell (e.g., flash memory cell) and method for forming a memory cell having at least one flat-topped floating gate and oxide cap (which may also be flat-topped). The memory cell may be formed without performing a floating gate thermal oxidation, which is performed in conventional techniques to produce the conventional “football” oxide over the floating gate. The elimination of the floating gate thermal oxidation and the resulting flat-topped floating gate and oxide cap may provide various advantages over conventional processes and memory cells, as discussed herein.The disclosed concepts may be applied to any suitable types of memory cells, e.g., flash memory cells. For example, the disclosed concepts may be applied to certain SuperFlash memory cells manufactured by Microchip Technology Inc., having a headquarters at 2355 W. Chandler Blvd., Chandler, Arizona 85224, or modified versions of such memory cells.Figure 4 illustrates a cross-section of an example memory cell structure 100 formed according to an embodiment of the present invention. Memory cell structure 100 includes a floating gate 104 formed over a substrate 102, a flat-topped oxide region or “oxide cap” 106 formed over the floating gate 104, and a spacer layer 108 (e.g., a nitride layer) formed over the floating gate 104/oxide 106 structure. Flat-topped oxide region 106 may be formed by forming a “football oxide” over a floating gate structure, followed by an oxide deposition and processing to define the flat-topped oxide region 106. The example structure shown in Figure 4 may be applied or incorporated in any suitable memory cell, e.g., SuperFlash or other flash memory cells having one or more floating gates 104. Figure 5 illustrates an example method 150 of forming the example memory cell structure 100 shown in Figure 4. At 152, a gate oxidation is performed or occurs on a top surface of substrate 102. At 154, a poly1 layer is deposited over the substrate 102. At 156, a nitride layer is deposited over the poly1 layer. At 158, a floating gate structure is formed from the poly1 layer, e.g., by a FG lithography and nitride etch process. At 160, a FG poly oxidation is performed, which may form a football-shaped oxide over the floating gate structure and define the concave upper surface of the floating gate structure. At 162, an HDP oxide deposition may be performed over the football-shaped oxide. At 164, a CMP may be performed on the HDP oxide to define the flat-topped oxide region 106 shown in Figure 4. At 166, a floating gate nitride removal process may be performed.
At 168, a poly1 etch may be performed to define the shape of floating gate 104 shown in Figure 4, by removing the portions of poly1 on the lateral side of the illustrated floating gate 104. At 170, a spacer layer 108 may be deposited over the structure. For example, the spacer layer 108 may comprise a nitride layer having a thickness in the range of 200 Å-600 Å, or in the range of 300 Å-500 Å, e.g., a thickness of about 400 Å. Spacer layer 108 may be used for aligning a source implant, e.g., a HVII (High Voltage Ion Implant) source implant, to form a source region in the substrate 102. Spacer layer 108 may be a sacrificial layer that is removed after the source implant for subsequent processing of the cell, e.g., growing a tunnel oxide layer and depositing and etching a poly2 layer to form a word line, erase gate and/or other program or erase nodes.Figure 6 illustrates a portion of another example memory cell structure 200 having a flat-top floating gate 204 and a flat-top oxide cap or “stud” region 206 formed over the flat-top floating gate 204, according to one embodiment of the present invention. The flat-top floating gate 204 and overlying flat-top oxide cap 206 may be formed in any suitable manner, for example using the methods shown in Figures 7 or 8, discussed below.As shown in Figure 6, the process of forming memory cell structure 200 (e.g., using the method of Figure 7 or Figure 8) may form concave floating gate sidewalls 205, which may define acute (<90 degree) or reentrant upper corners or “tips” 207 of the floating gate 204, which may increase the erase and/or program efficiency of the memory cell. The floating gate sidewalls 205 may become concave due to stress forces, fluid flow of oxide as it grows, and/or the oxidation process itself.In addition, the oxide cap 206 created by this process may be offset inwardly from the sidewall oxide layer 211, to define a step in the oxide region 206 near the upper corners 207 of the floating gate 204. As a result of this step, the nitride spacer 208 deposited over the oxide cap 206 may define vertically-extending regions 209 aligned over the upper corners of the floating gate, which act as shields that prevent the source implant dopant from penetrating down into the floating gate poly 204, thereby maintaining the acuteness of the floating gate tips 207.Figure 7 illustrates an example method 250 of forming the example memory cell structure 200 shown in Figure 6, according to an example embodiment. At 252, a gate oxidation is performed or occurs on a top surface of substrate 202. At 254, a poly1 layer is deposited over the substrate 202. At 256, a nitride layer is deposited over the poly1 layer. At 258, a flat-topped floating gate structure is formed from the poly1 layer, e.g., by a FG lithography and nitride etch process. At 260, an HDP oxide deposition may be performed directly on the flat-topped floating gate structure. Thus, unlike example method 150 (Figure 5) used to form the cell structure 100 shown in Figure 4, in this embodiment the FG poly oxidation step to form a football-shaped oxide over the floating gate structure (step 160 of method 150 discussed above) may be omitted. At 262, a CMP may be performed on the HDP oxide to define the flat-topped oxide region 206 shown in Figure 6. At 264, a floating gate nitride removal process may be performed.
At 266, a poly1 etch may be performed to define the shape of floating gate 204 shown in Figure 6, by removing the portions of poly1 on the lateral sides of the illustrated floating gate 204. At 268, a spacer layer 208 may be deposited over the structure. Due to reduced oxide pullback, the required or optimal thickness of spacer layer 208 may be reduced as compared with spacer layer 108 used in the formation of memory cell structure 100 shown in Figure 4, discussed above. For example, the spacer layer 208 may comprise a nitride layer having a thickness in the range of 100A-400A, or in the range of 150A-300A, e.g., a thickness of about 200A. At 270, an HVII (High Voltage Ion Implant) source implant may be performed, to form a source implant region in the substrate 202 that may be self-aligned with spacer layer 208. For example, the source implant may be self-aligned by an external lateral edge defined by spacer layer 208, e.g., lateral edge 220A or 220B shown in Figure 6, depending on the relevant dimensions of the various regions of spacer layer 208 and/or the intensity/power of the HVII source implant. In addition, as discussed above, spacer layer 208 may include vertically-extending regions 209 aligned over the upper corners of the floating gate, which act as shields that protect against the source implant dopant penetrating down into the floating gate poly 204, to thereby maintain the acuteness of the floating gate tips 207. Spacer layer 208 may be a sacrificial layer that is removed after the HVII source implant for subsequent processing of the cell, e.g., growing a tunnel oxide layer and depositing and etching a poly2 layer to form a word line, erase gate, and/or other program or erase nodes.

Figure 8 illustrates another example method 300 of forming the example memory cell structure 200 shown in Figure 6, according to an example embodiment. At 302, a gate clean oxidation is performed on a top surface of substrate 202. At 304, a FG poly (poly1) layer is deposited over the substrate 202. At 306, a FG poly implant is performed. At 308, a FG nitride clean and deposition is performed. At 310, a FG photoresist is formed. At 312, a FG nitride etch is performed. At 314, a cell Vt (threshold voltage) implant is performed. At 316, a resist strip is performed. At 318, a wet clean is performed. At 320, a FG poly oxide clean is performed. At 322, an HDP oxide deposition is performed over the floating gate structure, with a selected oxide thickness, e.g., in the range of 1000A-2500A, or in the range of 1300A-2000A, or in the range of 1500A-1800A, e.g., a thickness of about 1650A. At 324, a FG oxide CMP is performed, e.g., to a depth that leaves approximately 1200A of the nitride layer. At 326, a FG nitride removal may be performed, e.g., a plasma etch to remove the 1200A nitride thickness. At 328, a FG top-up implant may be performed. At 330, a wet clean is performed. At 332, a POP (poly oxide poly) photoresist is formed. At 334, a FG/POP etch and in-situ ash process is performed. At 336, a resist strip is performed. At 338, a FG nitride spacer is deposited over the structure. At 340, an HVII (High Voltage Ion Implant) photoresist is formed. At 342, an HVII source implant is performed.
As discussed above, the FG nitride spacer may include vertically-extending regions 209 aligned over the upper corners of the floating gate, which act as shields that protect against the HVII dopant penetrating down into the FG poly, to thereby maintain the acuteness of the floating gate tips. At 344, a resist strip is performed. At 346, the FG nitride spacer is removed for subsequent processing of the cell. For example, a tunnel oxide layer may be grown over the structure, followed by depositing and etching a poly2 layer to form a word line, erase gate, and/or other program or erase nodes.

Figure 9 illustrates a portion of a memory cell 300 including the memory cell structure 200 shown in Figure 6, with a word line 310 extending over a first side of the floating gate 204 and an erase gate 312 extending partially over a second side of the floating gate 204. Word line 310 and erase gate 312 may be formed in any suitable manner, e.g., by growing a tunnel oxide 314 over the structure and depositing and etching a poly2 layer to define the word line 310 and erase gate 312.

As shown, the erase gate 312 may overlap the floating gate 204 ("EG/FG overlap") by a substantially greater distance than the word line 310 overlaps the floating gate 204 ("WL/FG overlap"). For example, the EG/FG overlap may be at least 1.5 times, at least 2 times, at least 3 times, at least 4 times, at least 5 times, at least 6 times, at least 7 times, at least 8 times, at least 9 times, or at least 10 times as great as the WL/FG overlap. This asymmetrical program/erase FG overlap over the flat-top floating gate 204 may provide certain advantages. For example, in addition to reducing the WL/FG overlap, a reduction in the floating gate 204 height/thickness (TFG) and/or doping may decrease unwanted sidewall coupling between the word line (poly2) 310 and floating gate (poly1) 204. As another example, in addition to increasing the EG/FG overlap, a reduction of the oxide cap height/thickness (TOC) may increase the coupling between the erase gate (poly2) 312 and floating gate (poly1) 204. Thus, the flat-top FG cell 300 may allow independent control of the poly1 thickness (TFG) and/or doping, and the oxide cap thickness TOC. In addition, the disclosed techniques allow for independent optimization of program and erase efficiency in the memory cells.

The present invention has been described in terms of one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated (e.g., methods of manufacturing, product by process, and so forth), are possible and within the scope of the invention.
Systems, methods and apparatus are described that can improve available bandwidth on a SoundWire bus without increasing the number of pins used by the SoundWire bus. A method performed at a master device coupled to a SoundWire bus includes providing a clock signal by a first master device over a clock line of a SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus, transmitting first control information from the first master device to the first slave device over a first data line of the SoundWire bus, and transmitting second control information from the first master device to the second slave device over a second data line of the SoundWire bus. The first control information may be different from the second control information and is transmitted concurrently with the second control information.
1. A method for data communication, comprising:
transmitting a clock signal on a clock channel of a SoundWire bus from a first master device to a first slave device and a second slave device coupled to the SoundWire bus;
transmitting first control information from the first master device to the first slave device on a first data channel of the SoundWire bus; and
transmitting second control information from the first master device to the second slave device on a second data channel of the SoundWire bus,
wherein the first control information is different from the second control information and is transmitted concurrently with the second control information.
2. The method of claim 1, wherein the first data channel is coupled to a master data terminal of the first master device and the second data channel is coupled to a first auxiliary data terminal of the first master device.
3. The method of claim 1, wherein the first control information is transmitted in a first frame directed to one or more slave devices coupled to the first data channel of the SoundWire bus, and wherein the second control information is transmitted in a second frame directed to one or more slave devices coupled to the second data channel of the SoundWire bus.
4. The method of claim 1, wherein the first master device is configured to transmit control information on the first data channel and the second data channel.
5. The method of claim 1, wherein control information is transmitted on three or more data channels of the SoundWire bus.
6. The method of claim 1, wherein the second slave device includes a SoundWire bus interface circuit configured to support a single data channel.
7. The method of claim 1, further comprising:
sending a ping command in the first control information and the second control information; and
enumerating a plurality of devices coupled to the SoundWire bus based on responses to the ping command received from the plurality of devices.
8. The method of claim 7, further comprising:
assigning a device number to each device of the plurality of devices, wherein each device number corresponds to a conductor associated with the first data channel or a conductor associated with the second data channel.
9. The method of claim 8, further comprising:
associating, at the first master device, a field of a frame transmitted on the SoundWire bus with a number representing a conductor to which a destination of the frame is coupled.
10. The method of claim 7, wherein the plurality of devices includes at least twelve slave devices.
11. The method of claim 1, further comprising:
sending a ping command in the first control information and the second control information, wherein the ping command signals a stream synchronization point event.
12. The method of claim 1, wherein the first data channel is a master data channel carried on a first wire driven by the first master device, and the second data channel is a master data channel carried on a second wire driven by a second master device, and wherein the first master device and the second master device are provided in an application processor or codec.
13. The method of claim 12, further comprising:
synchronizing the frame timing of the second master device with the frame timing of the first master device.
14. The method of claim 12, further comprising:
synchronizing a stream synchronization point defined for the second master device with a stream synchronization point defined for the first master device.
15. The method of claim 12, further comprising:
synchronizing the timing of a group switching signal transmitted by the second master device with the timing of a group switching signal transmitted by the first master device.
16. The method of claim 15, wherein the group switching signal transmitted by the second master device includes a broadcast write command.
17. An apparatus for data communication, comprising:
a physical interface coupled to a SoundWire bus;
an application processor coupled to the SoundWire bus; and
a plurality of slave devices coupled to the SoundWire bus,
wherein the application processor includes a processor configured to:
provide a clock signal on a clock channel of the SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus;
transmit first control information to the first slave device on a first data channel of the SoundWire bus; and
transmit second control information to the second slave device on a second data channel of the SoundWire bus,
wherein the first control information is different from the second control information and is transmitted concurrently with the second control information.
18. The apparatus of claim 17, wherein the second slave device includes a two-wire SoundWire interface.
19. The apparatus of claim 17, wherein the first data channel is a primary data channel of the SoundWire bus, and the second data channel is a secondary data channel of the SoundWire bus, and wherein the application processor includes SoundWire bus interface circuitry operable to drive three or more conductors of the SoundWire bus.
20. The apparatus of claim 19, wherein the processor is further configured to:
transmit the first control information in a first frame directed to one or more slave devices coupled to the SoundWire bus via a first conductor; and
transmit the second control information in a second frame directed to one or more slave devices coupled to the SoundWire bus via a second conductor.
21. The apparatus of claim 17, wherein the processor is further configured to:
send a ping command in the first data channel and the second data channel; and
enumerate a plurality of devices coupled to the SoundWire bus based on responses to the ping command received from the plurality of devices.
22. The apparatus of claim 21, wherein the processor is further configured to:
assign a device number to each device of the plurality of devices, wherein each device number is unique to a conductor coupling the corresponding device to the apparatus.
23. The apparatus of claim 22, wherein the processor is further configured to:
associate a field of a frame transmitted on the SoundWire bus with a number representing a conductor to which a destination of the frame is coupled.
24. The apparatus of claim 17, wherein the application processor includes:
a first interface device configured to operate as a SoundWire master device and transmit a first frame on a master data channel of the SoundWire bus;
a second interface device configured to operate as a SoundWire master device and transmit a second frame on a secondary data channel of the SoundWire bus; and
a synchronization circuit configured to synchronize frame timing of the first interface device with frame timing of the second interface device.
25. An apparatus for data communication, comprising:
means for providing a clock signal by a first master device on a clock channel of a SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus;
means for transmitting first control information from the first master device to the first slave device on a first data channel of the SoundWire bus; and
means for transmitting second control information from the first master device to the second slave device on a second data channel of the SoundWire bus,
wherein the first control information is different from the second control information and is transmitted concurrently with the second control information, and wherein the second slave device includes a two-wire SoundWire interface.
26. The apparatus of claim 25, further comprising:
means for sending a ping command in the first data channel and the second data channel; and
means for enumerating a plurality of devices coupled to the SoundWire bus based on responses to the ping command received from the plurality of devices.
27. The apparatus of claim 25, wherein:
the means for transmitting the first control information includes a first interface device configured to operate as a SoundWire master device and drive a first conductor of the SoundWire bus,
the means for transmitting the second control information includes a second interface device configured to operate as a SoundWire master device and drive a second conductor of the SoundWire bus, and
wherein the apparatus includes means for synchronizing frame timing of the first interface device with frame timing of the second interface device.
28. A processor-readable medium storing processor-executable code, the processor-executable code including code for causing a processor to:
provide a clock signal by a first master device on a clock channel of a SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus;
transmit first control information from the first master device to the first slave device on a first data channel of the SoundWire bus; and
transmit second control information from the first master device to the second slave device on a second data channel of the SoundWire bus,
wherein the first control information is different from the second control information and is transmitted concurrently with the second control information, and wherein the second slave device includes a two-wire SoundWire interface.
29. The processor-readable medium of claim 28, further comprising code for causing the processor to:
send a ping command in the first data channel and the second data channel; and
enumerate a plurality of devices coupled to the SoundWire bus based on responses to the ping command received from the plurality of devices.
30. The processor-readable medium of claim 28, further comprising code for causing the processor to:
communicate the first control information using a first interface device configured to operate as a SoundWire master device and drive a first conductor of the SoundWire bus;
communicate the second control information using a second interface device configured to operate as a SoundWire master device and drive a second conductor of the SoundWire bus; and
synchronize the frame timing of the first interface device with the frame timing of the second interface device.
High-bandwidth SOUNDWIRE master device with multiple master data channels

Cross-references to related applications

This application claims priority to and the benefit of provisional patent application S/N. 62/525,556, filed with the United States Patent and Trademark Office on June 27, 2017, and non-provisional patent application S/N. 16/012,532, filed with the United States Patent and Trademark Office on June 19, 2018, the entire contents of which are hereby incorporated by reference as if fully set forth below and for all applicable purposes.

Technical field

At least one aspect relates generally to a data communications interface, and more specifically to a data communications interface for connecting devices in an audiovisual or multimedia system.

Background

Electronic devices, including mobile communication devices, wearable computing devices (such as smart watches), and tablet computers, support ever-increasing functionality and capabilities. Many electronic devices include internal microphones and speakers, and may include connectors that enable the use of audio-visual equipment, including headphones, external speakers, and the like. Communication may be provided through digital interfaces defined by one or more standards. In one example, a mobile communications device may employ an interface compliant with the SoundWire standard specified by the Mobile Industry Processor Interface (MIPI) Alliance. The SoundWire standard defines a multi-wire communications bus.

The need for increased audio-visual capabilities continues to grow. For example, mobile communications devices may include cameras and stereo microphones that can be adjusted over time to improve performance. In another example, digital processing capabilities may permit an electronic device to implement a sound decoder that can provide signals to drive more than two speakers. In these and other examples, improved communications capabilities are needed to enable processing circuitry, controllers, codec devices, and other components to communicate audio data to multiple audio devices over a common communications bus.

The available bandwidth on the conventional SoundWire bus can limit the number of audio peripherals that can be supported within a mobile communications device. Therefore, there is a continuing need for increased bandwidth and improved flexibility in connecting an increasing number of audio peripherals to the SoundWire bus.

Overview

Certain aspects disclosed herein relate to systems and methods for improving available bandwidth on a SoundWire bus without increasing the number of pins used by the SoundWire bus.

In various aspects of the present disclosure, a method performed at a master device coupled to a SoundWire bus includes: transmitting a clock signal on a clock channel of the SoundWire bus from a first master device to a first slave device and a second slave device coupled to the SoundWire bus; transmitting first control information from the first master device to the first slave device on a first data channel of the SoundWire bus; and transmitting second control information from the first master device to the second slave device on a second data channel of the SoundWire bus.
The first control information may be different from the second control information and may be transmitted concurrently with the second control information.

In one aspect, the first data channel is the primary data channel of the SoundWire bus and the second data channel is a secondary data channel of the SoundWire bus. The first control information may be transmitted in a first frame directed to one or more slave devices coupled to the SoundWire bus. The second control information may be transmitted in a second frame directed to one or more slave devices coupled to the SoundWire bus.

In one aspect, the first master device is configured to communicate control information over the first data channel and the second data channel.

In some aspects, the first master device includes SoundWire bus interface circuitry operable to drive three or more conductors of the SoundWire bus. The second slave device may include SoundWire bus interface circuitry configured to support a single data channel.

In certain aspects, the first master device may be configured to: send a ping command in the first data channel and the second data channel; and enumerate multiple devices coupled to the SoundWire bus based on responses to the ping command received from the first slave device and the second slave device. Enumerating the plurality of devices may include assigning a device number to each device of the plurality of devices. Each device number may be unique to a wire coupling the corresponding device to the first master device. The first master device may be configured to associate a field of a frame transmitted on the SoundWire bus with a number representing a wire to which the target of the frame is coupled.

In some aspects, the first data channel is a primary data channel driven by a first master device and the second data channel is a primary data channel driven by a second master device. The first master device and the second master device may be provided in an application processor or codec. The first master device may be configured to synchronize the frame timing of the second master device with the frame timing of the first master device. The first master device may be configured to synchronize a stream synchronization point defined for the second master device with a stream synchronization point defined for the first master device. The first master device may be configured to synchronize the timing of the group switching signal transmitted by the second master device with the timing of the group switching signal transmitted by the first master device. The group switching signal transmitted by the second master device may include a broadcast write command.

In various aspects of the present disclosure, an apparatus includes a physical interface coupled to a multi-wire link operating as a SoundWire bus. The apparatus may have an application processor coupled to the SoundWire bus and a plurality of slave devices coupled to the SoundWire bus. The application processor may include a processor configured to: provide, by a first master device, a clock signal on a clock channel of the SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus; transmit first control information from the first master device to the first slave device on a first data channel of the SoundWire bus; and transmit second control information from the first master device to the second slave device on a second data channel of the SoundWire bus.
The first control information may be different from the second control information and may be transmitted concurrently with the second control information. The second slave device can be coupled to the SoundWire bus through a two-wire SoundWire interface.

In some aspects, the first data channel is the primary data channel of the SoundWire bus and the second data channel is a secondary data channel of the SoundWire bus. The application processor may include SoundWire bus interface circuitry operable to drive three or more conductors of the SoundWire bus. The processor may be configured to transmit the first control information on a first conductor of the SoundWire bus in a first frame directed to the one or more slave devices. The second control information may be communicated in a second frame directed to one or more slave devices coupled to a second conductor of the SoundWire bus.

In certain aspects, the processor is configured to: send a ping command in the first data channel and the second data channel; and enumerate multiple devices coupled to the SoundWire bus based on responses to the ping command received from the first slave device and the second slave device. The processor may be configured to assign a device number to each device of the plurality of devices. Each device number may be unique to the wire coupling the corresponding device to the apparatus. The processor may be configured to associate a field of a frame transmitted on the SoundWire bus with a number representing a wire to which the destination of the frame is coupled.

In one aspect, the application processor includes: a first interface device configured to operate as a SoundWire master device and transmit a first frame on a main data channel of the SoundWire bus; a second interface device configured to operate as a SoundWire master device and transmit a second frame on a secondary data channel of the SoundWire bus; and a synchronization circuit configured to synchronize the frame timing of the first interface device with the frame timing of the second interface device.

In various aspects, an apparatus includes: means for providing, by a first master device, a clock signal on a clock channel of a SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus; means for transmitting first control information from the first master device to the first slave device on a first data lane of the SoundWire bus; and means for transmitting second control information from the first master device to the second slave device on a second data lane of the SoundWire bus. The first control information may be different from the second control information and transmitted concurrently with the second control information. The second slave device may have a two-wire SoundWire interface.

In one aspect, the apparatus includes: means for sending a ping command in a first data channel and a second data channel; and means for enumerating multiple devices coupled to the SoundWire bus based on responses to the ping command received from the first slave device and the second slave device.

In one aspect, the means for transmitting the first control information includes a first interface device configured to operate as a SoundWire master device and drive a first conductor of the SoundWire bus.
The means for transmitting the second control information may include a second interface device configured to operate as a SoundWire master device and drive a second conductor of the SoundWire bus. The apparatus may include means for synchronizing frame timing of the first interface device with frame timing of the second interface device.

In various aspects, a processor-readable medium stores processor-executable code. The code, when executed by a processor, causes the processor to: provide a clock signal from a first master device on a clock channel of a SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus; transmit first control information from the first master device to the first slave device on a first data channel of the SoundWire bus; and transmit second control information from the first master device to the second slave device on a second data channel of the SoundWire bus. The first control information may be different from the second control information and may be transmitted concurrently with the second control information. The second slave device may have a two-wire SoundWire interface.

In one aspect, the code may cause the processor to: send a ping command in a first data channel and a second data channel; and enumerate multiple devices coupled to the SoundWire bus based on responses to the ping command received from the first slave device and the second slave device.

In one aspect, the code may cause the processor to: transmit the first control information using a first interface device configured to operate as a SoundWire master device and drive a first conductor of the SoundWire bus; transmit the second control information using a second interface device configured to operate as a SoundWire master device and drive a second conductor of the SoundWire bus; and synchronize frame timing of the first interface device with frame timing of the second interface device.

Brief description of the drawings

Figure 1 depicts an apparatus employing data links between integrated circuit (IC) devices that may be adapted in accordance with certain aspects disclosed herein.

Figure 2 illustrates an example of a system architecture for a SoundWire system that may be adapted in accordance with certain aspects disclosed herein.

Figure 3 illustrates the control information transmitted in SoundWire frames.

Figure 4 illustrates a system in which one or more slave devices are implemented with a single data channel.

Figure 5 illustrates a system in which the application processor includes multiple SoundWire bus master devices.

Figure 6 illustrates a system adapted to support single data pin slave devices in high bandwidth applications in accordance with certain aspects disclosed herein.

Figure 7 illustrates data channel enumeration in control information transmitted in a SoundWire frame in accordance with certain aspects disclosed herein.

Figure 8 illustrates a procedure for enumerating devices coupled to a SoundWire bus adapted to communicate control information over multiple data channels, in accordance with certain aspects disclosed herein.

Figure 9 illustrates the use of multiple SoundWire bus master devices to support slave devices implemented with a single data channel SoundWire interface, in accordance with certain aspects disclosed herein.

Figure 10 is a diagram illustrating an example of an apparatus employing processing circuitry that may be adapted in accordance with certain aspects disclosed herein.
Figure 11 is a flow chart of a data transfer method operating on one of the two devices in the apparatus.

Figure 12 is a diagram illustrating an example of a hardware implementation of an apparatus employing processing circuitry adapted in accordance with certain aspects disclosed herein.

Detailed description

The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. This detailed description includes specific details to provide a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

Several aspects of data communications systems will now be presented with reference to various apparatus and methods. These apparatus and methods are described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, etc. (collectively, "elements"). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends on the specific application and the design constraints imposed on the overall system. By way of example, an element, or any portion of an element, or any combination of elements may be implemented with a "processing system" including one or more processors. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), application specific integrated circuits (ASICs), systems on a chip (SoCs), state machines, gating logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.

Overview

Certain aspects disclosed herein relate to systems and methods for providing a high-bandwidth SoundWire bus in which one or more master devices support multiple master data channels and a single common clock channel. In one implementation, the bus master device provides control information on the primary data channel and one or more secondary data channels. Two-wire slave devices can be transparently connected to the primary or secondary data channel and can receive control information.

In another implementation, two or more master devices provide multiple synchronized master data lanes, where a common clock signal is used to control transmission on all data lanes. Synchronization logic controls the timing of frame transmissions, configuration activation, and other synchronization points for all data channels. For example, a primary master that provides a clock signal to two groups of slave devices can communicate control information to a first group of slave devices coupled to its master data channel, and a secondary master device can be synchronized to that clock signal and can transmit control information to a second group of slave devices coupled to its master data channel. In the latter example, the timing of the group switching signals sent by the two masters can be synchronized by the primary master.
The group switching signal can be provided in a broadcast write command.

Examples of mobile communication devices

Figure 1 depicts an apparatus 100 that may employ communication links deployed within and/or between IC devices. In one example, apparatus 100 may include a radio communications device that communicates via a radio frequency (RF) communications transceiver 118 with a radio access network (RAN), a core access network, the Internet, and/or another network. Communications transceiver 118 may be implemented in or may be operably coupled to processing circuitry 102. Processing circuitry 102 may be implemented using an SoC and/or may include one or more IC devices. In one example, processing circuitry 102 may include one or more application processors 104, one or more ASICs 108, and one or more peripheral devices 106, such as codecs, amplifiers, and other audiovisual components. Each ASIC 108 may include one or more processing devices, logic circuits, storage, registers, and the like. Application processor 104 may include processor 110 and memory 114 and may be controlled by an operating system 112 loaded from internal or external storage as data and instructions executable by processor 110. Processing circuitry 102 may include or access a local database 116 implemented in memory 114, for example, where the database 116 may be used to maintain operating parameters and other information for configuring and operating device 100. The local database 116 may be implemented as a register set, or may be implemented in a database module, flash memory, magnetic media, non-volatile or persistent storage, optical media, tape, floppy or hard disk, or the like. The processing circuitry may also be operably coupled to internal and/or external devices, such as antenna 120, display 124, operator controls (such as buttons 128, 130 and keypad 126), and other components.

Data bus 122 may be provided to support communications between application processor 104, ASIC 108, and/or peripheral devices 106. Data bus 122 may operate according to standard protocols defined for interconnecting certain components of mobile devices. For example, several types of interfaces are defined for communication between the application processor of a mobile device and display and camera components, or for communication between a codec provided in the ASIC 108 and an audio driver in one of the peripherals 106. Some components use interfaces that comply with standards specified by the Mobile Industry Processor Interface (MIPI) Alliance. For example, the MIPI Alliance defines the SLIMbus and SoundWire interface standards that enable mobile device designers to achieve design goals including scalability, reduced power, lower pin count, ease of integration, and consistency between system designs.

The MIPI Alliance standard for SoundWire defines a multipoint, multiwire interface that can be used to convey information in frames that can be transmitted over the interface using a double data rate clock. The SoundWire protocol supports configurable frame sizes, and multiple channels can be defined. Digital audio data can be modulated using pulse density modulation. The SoundWire interface is optimized for low power and low latency and supports single-bit payload granularity.

Overview of SoundWire Architecture

Figure 2 illustrates an example of a SoundWire system 200.
A variety of devices can be connected to the SoundWire bus, including audio headsets, codecs, amplifiers, repeaters, switches, bridges and signal processing equipment. A 32 kHz system clock can be distributed with minimal command and control. In the illustrated SoundWire system 200, the application processor 202 or other IC device may include a codec or be configured to operate as a codec, and may be configured to communicate through a physical interface operating as the SoundWire bus master device 204. SoundWire bus master device 204 may include channel drivers and receivers, SoundWire encoders and decoders, state machines and/or sequencers, and other logic circuitry. In some examples, SoundWire bus master device 204 may be implemented in a codec. The channel drivers and receivers of the SoundWire bus master 204 may be coupled to the conductors of the multi-conductor bus 220 through designated terminals of the application processor 202.

In the illustrated example, the application processor 202 communicates with at least four slave devices 212, 214, 216, 218 associated with audio input/output devices 230. The first slave device 212 includes an analog-to-digital converter (ADC 222) that digitizes input received from the left microphone 232, and the second slave device 214 includes an ADC 224 that digitizes the input received from the right microphone 234. The third slave device 216 includes a digital-to-analog converter (DAC 226) that provides an output to drive the left speaker 236, and the fourth slave device 218 includes a DAC 228 that provides an output to drive the right speaker 238.

In SoundWire system 200, application processor 202 is coupled to slave devices 212, 214, 216, 218 via multi-conductor bus 220. Multi-conductor bus 220 may be configured to provide clock channel 206 and one or more data channels 208, 210. Up to eight data channels 208, 210 may be provided on corresponding conductors of the multi-conductor bus 220. The SoundWire specification defines fixed frames that can be transmitted on multiple data channels. In practice, each data channel 208, 210 is assigned to one of the physical conductors of the multi-conductor bus 220. A frame has rows and columns, and each row provides bit slots that can be assigned to a source. The allocation of each bit slot can be changed from one source to another. Multi-conductor bus 220 may be configured by SoundWire bus master 204, which can control data transmission on up to eight data lanes 208, 210 of multi-conductor bus 220.

Figure 3 illustrates control information 300 transmitted in the first 48 bits, or column 0, of a SoundWire frame. SoundWire bus master 204 may use these 48 bits to transmit control information. Bits 00-03 select a command code, and the commands include a ping command 302, a read command 304, and a write command 306. Five other command codes 318 (collectively referred to as reserved commands 308) are available. Certain bits in the control information 300 have different meanings for different commands 302, 304, 306, 308. In one example, bits 04-07 include the device address for read and write commands 304, 306. SoundWire bus master device 204 can address and support up to 11 slave devices 212, 214, 216, 218. In another example, bits 08-23 include the register address for read and write commands 304, 306. Some bits have the same meaning and/or setting regardless of the configured command. For example, bits 40-44 serve as dynamic synchronization patterns for some or all commands 302, 304, 306, 308.
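By way of illustration only, the column-0 fields described above for Figure 3 may be decoded as in the following minimal sketch in C. The bit-numbering convention (bit 00 taken as the first, most-significant bit of the 48-bit word) and the example word are assumptions made for this sketch, not definitions taken from the SoundWire specification.

    #include <stdint.h>
    #include <stdio.h>

    #define CTRL_BITS 48  /* control information occupies 48 bits (column 0) */

    /* Extract `width` bits starting at `first_bit`, where bit 00 is
     * treated as the most-significant bit of the 48-bit control word. */
    static uint32_t field(uint64_t word, unsigned first_bit, unsigned width)
    {
        unsigned shift = CTRL_BITS - first_bit - width;
        return (uint32_t)((word >> shift) & ((1u << width) - 1u));
    }

    int main(void)
    {
        uint64_t ctrl = (uint64_t)0x3A1234 << 24; /* hypothetical example word */
        uint32_t cmd  = field(ctrl, 0, 4);   /* bits 00-03: command code         */
        uint32_t dev  = field(ctrl, 4, 4);   /* bits 04-07: device address       */
        uint32_t reg  = field(ctrl, 8, 16);  /* bits 08-23: register address     */
        uint32_t sync = field(ctrl, 40, 5);  /* bits 40-44: dynamic sync pattern */
        printf("cmd=%u dev=%u reg=0x%04x sync=0x%02x\n", cmd, dev, reg, sync);
        return 0;
    }

In a working controller, the extracted command code would select among handlers for the ping command 302, read command 304, write command 306, and reserved commands 308.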
Bandwidth Limitation in SoundWire

When the demand for aggregated bandwidth (i.e., bandwidth for multiple devices) exceeds the bandwidth available on a single data channel 208, the SoundWire audio interface can distribute the data over up to 7 additional data channels. The SoundWire protocol is designed to support low gate-count, low-cost endpoint devices, including microphones and speakers. In some instances, low-cost endpoint devices are provided with a clock pin and a single data pin to reduce device cost.

Figure 4 illustrates a system 400 in which an application processor 402 includes a SoundWire bus master device 404 that supports multiple data channels, and a plurality of slave devices 420, where one or more of the slave devices 412, 414, 416, 418 of the plurality of slave devices 420 are implemented with a single data channel SoundWire interface. Each slave device 412, 414, 416, 418 may be coupled to a clock channel 406 and a master data channel 408 of SoundWire bus 422. In the illustrated system 400, the bandwidth of the master data channel 408 used by the three slave devices 412, 414, 416 leaves insufficient capacity on the master data channel to support the fourth slave device 418. In a conventional SoundWire implementation, control information is transmitted only through the main data channel 408. In the illustrated system 400, the fourth slave device 418 has a single data pin and cannot use the secondary data channel 410, because it would then not be connected to the primary data channel 408 that carries the control information. The number of devices that can be connected to the conventional SoundWire bus 422 is therefore limited by the bandwidth of the main data channel 408.

In certain aspects of the present disclosure, multiple master devices may be provided in the application processor to enable a greater number of slave devices to be coupled to the master data channel. Figure 5 illustrates a system 500 in which the application processor 502 includes a plurality of SoundWire bus master devices 504, 524, each SoundWire bus master device driving a corresponding master data channel 508, 528. In some examples, the SoundWire bus master devices 504, 524 may be configured to drive one or more additional data channels. The availability of multiple master data channels 508, 528 enables the application processor 502 to communicate with a plurality of slave devices 512, 514, 516, 518, 520, one or more of which are implemented with a SoundWire interface providing a single data channel. Each of the first group of slave devices 512, 514, 516 may be coupled to the master data channel 508 of the primary SoundWire bus master 504, while the second group of slave devices 518, 520 may be coupled to the master data channel 528 of the secondary SoundWire bus master device 524. The illustrated system 500 can accommodate slave devices 512, 514, 516, 518, 520 having a single data pin when the aggregate bandwidth exceeds the capacity of the master data channel 508 of the first SoundWire bus master device 504, because the second SoundWire bus master 524 provides an additional master data channel 528 that carries control information.

System 500 provides two clock channels 506, 526, which increases the pin count of application processor 502 and increases the overall cost of system 500.
Bus management complexity is also increased because the system includes two separate SoundWire buses 510, 522 that are managed independently. The SoundWire specification does not require synchronization of SoundWire bus master devices and does not provide procedures for synchronizing bus master devices. When clock signals, frame start timing, stream synchronization points (SSPs), and group switching are not synchronized or coordinated, operational issues can occur that affect audio recording and playback. Group switching is used to switch between system configurations. The registers of each slave device 512, 514, 516, 518 may be written to implement a new configuration, and a group switch signal is provided to the slave devices 512, 514, 516, 518 such that the new configuration is activated when the newly written register values are applied to the corresponding slave devices 512, 514, 516, 518 at the same point in time. The use of two different SoundWire bus master devices 504, 524 may prevent the simultaneous adoption of new configurations.

Improved SoundWire bandwidth using enhanced secondary data channel

Figure 6 illustrates a system 600 adapted to enable an application processor 602 to support multiple single data pin slave devices 620 in high bandwidth applications in accordance with certain aspects disclosed herein. Application processor 602 includes a SoundWire bus master 604 adapted to communicate control information on a primary data channel 608 and on one or more secondary data channels, including the secondary data channel 610 shown in Figure 6. Secondary data channel 610 may be used to support slave devices equipped with a single data channel. SoundWire bus master 604 may be coupled to the various conductors of SoundWire bus 622 through designated terminals of application processor 602.

In the illustrated system 600, the bus master 604 in the application processor 602 supports a SoundWire bus 622 having a clock channel 606, a primary data channel 608, and a secondary data channel 610. Slave devices 612, 614, 616, 618, 620 are implemented with SoundWire interfaces supporting a single data channel. Each slave device 612, 614, 616, 618, 620 is coupled to clock channel 606. A first group of slave devices 624 is coupled to the primary data channel 608 of the SoundWire bus 622, while a second group of slave devices 626 is coupled to the secondary data channel 610 of the SoundWire bus 622. Control information is transmitted concurrently on both data channels 608, 610. The primary data channel 608 carries control information for the first group of slave devices 624, and the secondary data channel 610 carries control information for the second group of slave devices 626. The ability to transmit control information on the secondary data channel 610 enables the number of devices that can be connected to the SoundWire bus 622 to be determined independently of the bandwidth provided by the primary data channel 608. Additionally, the ability to transmit control information with different or modified addresses on the primary data channel 608 and the secondary data channel 610 enables the total number of devices supported on the SoundWire bus 622 to be increased beyond the 11-device limit imposed by the SoundWire specification for a single interface.

In some implementations, system 600 may maintain the 11-device limit imposed by the SoundWire specification for a single interface. In one example, the addressing scheme used on primary data lane 608 may be maintained on secondary data lane 610. Other implementations use independent and/or extended addressing schemes on the primary data channel 608 and secondary data channel 610 to permit support of more than 11 slave devices on the SoundWire bus 622. For example, the SlvStat (Slave Status) field 320 (see Figure 3) corresponding to each individual secondary data channel may be mapped to a higher logical address and/or slave device number managed by the SoundWire interface.
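As one non-authoritative illustration of this arrangement, the sketch below shows a master preparing and transmitting different control words for the primary lane (N = 0) and a secondary lane (N = 1) of the same frame. The send_control_word() driver hook, the device-address packing, and the lane numbering are hypothetical names introduced here for illustration only.

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_LANES 2  /* primary data lane (N = 0), one secondary lane (N = 1) */

    /* Hypothetical driver hook: queue one control word on data lane N for
     * transmission in the current frame; all lanes transmit concurrently. */
    static void send_control_word(unsigned lane, uint64_t word)
    {
        printf("lane %u <- control word 0x%012llx\n",
               lane, (unsigned long long)word);
    }

    /* Build a control word addressing device `dev_num` on a given lane.
     * The packing (device address in bits 04-07 of a 48-bit word, below
     * the 4-bit command code) is an assumption made for this sketch. */
    static uint64_t make_write_cmd(uint8_t dev_num)
    {
        return (uint64_t)(dev_num & 0xF) << 40;
    }

    int main(void)
    {
        /* Address slave 3 of the first group on the primary lane while
         * concurrently addressing slave 1 of the second group on the
         * secondary lane; the two control words are independent. */
        uint8_t dev_per_lane[NUM_LANES] = { 3, 1 };

        for (unsigned n = 0; n < NUM_LANES; n++)
            send_control_word(n, make_write_cmd(dev_per_lane[n]));
        return 0;
    }

A slave device attached to either lane simply sees ordinary control frames on its single data pin, which is what permits the transparent connection described above.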
Slave devices 612, 614, 616, 618, 620, each connected to the clock channel 606 and to one of the data channels 608, 610 of the SoundWire bus 622, do not need to be aware of the existence of the two groups of slave devices 624, 626 or of the associated adaptations of the SoundWire bus master device 604. The SoundWire bus master 604 may be adapted to correlate the data channels 608, 610 used to connect each group of slave devices 624, 626 with corresponding fields of the control information. Control information adapted or configured based on the physical configuration of the SoundWire bus 622 may be found primarily in the first 48 bits, or column 0, of the transmitted frame. Figure 7 illustrates a field 700 in which data channel enumeration is tracked by the SoundWire bus master 604.

In the illustrated system 600, the SoundWire bus master 604 is configured to assign channel slots and other timing information for communication with the slave devices 612, 614, 616, 618, 620 of the two groups of slave devices 624, 626. The SoundWire bus master 604 may maintain enumeration information identifying the data channels 608, 610 from which data has been received or over which data is to be transferred.

Examples of the processing of data fields are illustrated in the following tables. Tables 1-5 illustrate an implementation in which the index "[N]" is added to the enumerated fields and associated with the data channels 608, 610. In the illustrated system 600, when N=0, information may be transmitted on the primary data channel 608, and when N=1, information may be transmitted on the secondary data channel 610.

Table 1: Bits common to all commands
Table 2: Bits used for the ping command
Table 3: Bits used for read/write commands
Table 4: Bits used for the read command
Table 5: Bits used for the write command

SoundWire bus master 604 may process parity received from either or both data channels 608 and 610, where the parity is correlated with data transmitted by one or more of the slave devices 612, 614, 616, 618, 620 on the associated data channel 608 or 610. In some examples, the SoundWire bus master 604 may calculate parity for each data channel 608, 610 independently to determine whether the data has been received by the SoundWire bus master 604 without errors. SoundWire bus master 604 may receive and/or respond to negative acknowledgments transmitted by one or more of the slave devices 612, 614, 616, 618, 620 on the associated data channels 608 and 610. In one example, the SoundWire bus master 604 may combine the NAK (Negative Acknowledgment) bits from each data lane 608, 610 to determine whether the data frame has been received without errors and acknowledged by all slave devices 612, 614, 616, 618, 620. In another example, the SoundWire bus master 604 may process NAK bits transmitted independently by one or more slave devices 612, 614, 616, 618, 620 on one of the data channels 608, 610 to identify the slave device 612, 614, 616, 618, 620 that signaled a data reception error.
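A minimal sketch of this per-lane error handling is given below, assuming hypothetical helpers parity_ok() and nak_bit() that report the parity check result and the observed NAK bit for data lane N. The combined result treats the frame as acknowledged only when every lane is error-free, while the per-lane loop identifies which lane signaled the error.

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_LANES 2

    /* Stubs standing in for hardware status reads on data lane N. */
    static bool parity_ok(unsigned lane) { (void)lane; return true; }
    static bool nak_bit(unsigned lane)   { return lane == 1; /* pretend lane 1 NAKed */ }

    int main(void)
    {
        bool frame_ok = true;  /* combined result over all data lanes */

        for (unsigned n = 0; n < NUM_LANES; n++) {
            if (!parity_ok(n) || nak_bit(n)) {
                printf("lane %u signaled a parity error or NAK\n", n);
                frame_ok = false;
            }
        }
        printf("frame %s by all slave devices\n",
               frame_ok ? "acknowledged" : "not acknowledged");
        return 0;
    }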
Figure 8 illustrates a procedure 800 for enumerating devices coupled to a SoundWire bus 622 that is adapted to communicate control information over multiple data channels 608, 610. The procedure 800 may begin at block 802 after initialization, in response to a notification of a change in hardware configuration, and/or in response to an interrupt.

At block 804, the SoundWire bus master 604 may transmit a ping command. One or more unattached slave devices 612, 614, 616, 618, 620 may respond in Slv_Stat00[N], based on the data channel (N) over which the unattached slave device 612, 614, 616, 618, 620 is communicating. In some examples, unattached slave devices 612, 614, 616, 618, 620 may respond on more than one data channel.

At block 806, the SoundWire bus master 604 may determine that no unattached slave devices 612, 614, 616, 618, 620 have responded, and the procedure may restart at block 802 or wait for a restart at block 802. After the SoundWire bus master 604 has determined that at least one unattached slave device 612, 614, 616, 618, 620 has responded, the procedure continues at block 808.

At block 808, the SoundWire bus master 604 may read device identifier (DevID) registers from the slave devices 612, 614, 616, 618, 620. SoundWire bus master 604 can issue a series of commands to read the DevID registers. For each DevID read, the slave device 612, 614, 616, 618, 620 with the highest DevID value responds in each data channel. Different DevIDs may be reported on each data channel.

At block 810, the SoundWire bus master 604 may select a unique device number (DevNum) for each slave device 612, 614, 616, 618, 620, where the DevNum is assigned once per data channel.

At block 812, the SoundWire bus master 604 may write the new DevNum to the SCP_DevNumber register of device 0, i.e., the device awaiting enumeration. The SoundWire bus master device 604 can write to all devices using the same frame by asserting a unique DevNum on each data channel. SoundWire bus master device 604 may alternatively perform independent writes to each slave device 612, 614, 616, 618, 620 in different transfers.

At block 814, the SoundWire bus master 604 may determine whether a NAK response has been received. NAK responses can be received from any of the data channels. A NAK on a data channel simply indicates that the DevNum written through that data channel has not taken effect.

At block 816, one or more newly connected devices have been enumerated on all data channels. The procedure may restart at block 802 or wait for a restart at block 802. The combined slave status from all data channels may be taken to represent the enumerated slave devices when the enumeration procedure has completed.
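For illustration only, the procedure 800 may be sketched in C as below. The bus primitives (ping_lanes, read_dev_id, write_dev_number, got_nak) are hypothetical stand-ins for the block 804-814 operations, and the stub behavior is invented so the sketch runs; it is not the actual enumeration implementation.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_LANES 2

    /* Stubs standing in for bus operations; one device appears once on lane 0. */
    static bool ping_lanes(bool unattached[NUM_LANES])       /* blocks 804/806 */
    {
        static int rounds = 1;
        unattached[0] = (rounds > 0);
        unattached[1] = false;
        return rounds-- > 0;
    }
    static uint64_t read_dev_id(unsigned lane) { return 0xABCD0000ull | lane; }
    static void write_dev_number(unsigned lane, uint8_t num) { (void)lane; (void)num; }
    static bool got_nak(unsigned lane) { (void)lane; return false; }

    int main(void)
    {
        static uint8_t next_dev_num[NUM_LANES]; /* DevNum assigned once per lane */
        bool unattached[NUM_LANES];

        while (ping_lanes(unattached)) {
            for (unsigned n = 0; n < NUM_LANES; n++) {
                if (!unattached[n])
                    continue;
                uint64_t dev_id = read_dev_id(n);   /* block 808 */
                uint8_t num = ++next_dev_num[n];    /* block 810 */
                write_dev_number(n, num);           /* block 812 */
                if (got_nak(n))                     /* block 814 */
                    printf("lane %u: DevNum %u did not take effect\n", n, num);
                else
                    printf("lane %u: DevID 0x%llx -> DevNum %u\n",
                           n, (unsigned long long)dev_id, num);
            }
        }                                           /* block 816 */
        return 0;
    }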
SoundWire bus master device 604 may synchronize and/or coordinate the implementation of configuration changes at two or more slave devices 612, 614, 616, 618, 620. For example, it may be advantageous and/or desirable for configuration changes affecting multiple microphones 232, 234 and/or multiple speakers 236, 238 (see Figure 2) to be implemented simultaneously. SoundWire bus master device 604 can synchronize the implementation of configuration changes by communicating these changes to the affected devices and using broadcast commands to implement these changes. Synchronization of configuration changes may be achieved by providing two or more configuration register banks in each of the slave devices 612, 614, 616, 618, 620. One configuration register bank may be used to control and configure certain functions and operations of each slave device 612, 614, 616, 618, 620, while another configuration register bank may be unused and available to be configured by the SoundWire bus master device 604 without immediately affecting the operation of the slave devices 612, 614, 616, 618, 620. In normal operation, the SoundWire bus master 604 may configure all registers on an unused configuration register bank and may activate the changed configuration by sending a broadcast write (bank switch command) to all devices. The bank switch command signals slave devices 612, 614, 616, 618, 620 on all data lanes to switch between configuration register banks.

In some examples, a NAK response to the group switch command may be received from one of the data channels, the NAK response indicating a failed switch in at least one slave device 612, 614, 616, 618, 620. Failure of one or more slave devices 612, 614, 616, 618, 620 to implement a configuration change may result in a bus failure. In accordance with certain aspects disclosed herein, the SoundWire bus master device 604 may cancel and/or reverse the group switch command at all slave devices 612, 614, 616, 618, 620, including devices that did not return a NAK. In one example, the SoundWire bus master 604 may issue one or more immediate broadcast writes on all data lanes 608, 610 to switch back to the original configuration register bank and thereby restore the previously running configuration. In another example, when partial configuration failures can be tolerated, the SoundWire bus master 604 may issue one or more immediate "switchback" broadcast writes on any data channel 608, 610 that returns a NAK. After receiving a NAK in response to the first group switch command, the SoundWire bus master 604 may retry reconfiguration by resending the group switch command a defined or configured number of times. In the event of repeated signaling of command failures (i.e., received NAKs) in response to a series of repeated group switch commands, the SoundWire bus master 604 may determine that a serious or fundamental problem is affecting the system 600, and the SoundWire bus master 604 may initiate a reset of the system.

According to certain aspects, the command code (see bits 01, 02, 03 in Table 1) may be replicated on all data channels for all commands except the write command transmitted during the final phase of the enumeration process, in which the new device number is written to the current device 0 (see block 812, and bits 04-07 in Table 3). At block 812, different DevNums are written on different channels. In some instances, enumeration may be processed one data channel at a time. In one example, a sequence for enumerating data channels begins with the primary data channel and subsequently enumerates one secondary data channel at a time.
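The group switch recovery behavior described above may be sketched, purely for illustration, as follows. broadcast_bank_switch() and reset_bus() are hypothetical driver hooks (the stub always acknowledges), and the retry limit is an assumed configuration value rather than a number taken from this disclosure.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_RETRIES 3  /* assumed configured retry count */

    /* Hypothetical hook: broadcast a bank switch on all data lanes;
     * returns false when any lane responds with a NAK. */
    static bool broadcast_bank_switch(int bank)
    {
        printf("broadcast write: switch to bank %d\n", bank);
        return true; /* stub: always acknowledged */
    }

    static void reset_bus(void) { printf("bus reset\n"); }

    /* Activate the configuration staged in `new_bank`; on repeated NAKs,
     * switch back to `old_bank` to restore the running configuration. */
    static void activate_configuration(int new_bank, int old_bank)
    {
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++)
            if (broadcast_bank_switch(new_bank))
                return;                        /* acknowledged on all lanes */

        if (!broadcast_bank_switch(old_bank))  /* rollback also failed */
            reset_bus();                       /* serious or fundamental problem */
    }

    int main(void)
    {
        activate_configuration(1, 0);
        return 0;
    }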
FIG. 9 illustrates a system 900 in which an application processor 902 includes multiple SoundWire bus master devices 904, 924, each of which supports multiple data channels to enable communication with multiple slave devices 912, 914, 916, 918. One or more slave devices 912, 914, 916, 918 may be implemented with a single data channel SoundWire interface. The illustrated system 900 can accommodate slave devices 912, 914, 916, 918 having a single data pin when the aggregate bandwidth exceeds the capacity of the master data channel 908 driven by the first SoundWire bus master device 904. A second SoundWire bus master 924 provides an additional master data channel 928 that carries control information. In one example, the first SoundWire bus master device 904 is coupled to the clock channel 906 and the main data channel 908 through designated terminals of the application processor 902, while the second SoundWire bus master device 924 is coupled to the additional main data channel 928 through another designated terminal of the application processor 902. System 900 provides the clock signal on a single clock channel 906 that is generated by one of the SoundWire bus masters 904, 924, and the shared clock channel 906 is used by both SoundWire bus masters 904, 924. Synchronization circuit 926 may be configured to synchronize the signaling of the SoundWire bus master devices 904, 924. In one example, the synchronization module and/or circuitry 926 may be configured to ensure frame synchronization (Frame_Sync). Frame synchronization may include synchronization of frame size and frame start. In one example, the synchronization module and/or circuitry 926 may be configured to synchronize bank switch events, and the synchronization module and/or circuitry 926 may generate a bank switch trigger (BankSwTrig) signal to trigger a synchronized broadcast write command. In another example, the synchronization module and/or circuitry 926 may be configured to synchronize stream synchronization point (SSP) events associated with each SoundWire bus master device 904, 924. SSP events may be signaled by transmitting SSP bits at regular intervals in the ping frame to maintain alignment between links with different frame rates and/or sampling rates. SSP events can be used to maintain phase coherence between slave devices. In some aspects, the first SoundWire bus master device 904 is configured to generate a clock signal transmitted in the clock channel 906 of the SoundWire bus 910, and the clock signal is fed to the second SoundWire bus master device 924 within the application processor 902. In another example, a clock source 930 provides an internal clock signal 932 to each SoundWire bus master device 904, 924, and each SoundWire bus master device 904, 924 uses the internal clock signal 932 to generate its version of the clock signal transmitted on the clock channel 906 of the SoundWire bus 910. In various examples, the first SoundWire bus master device 904 operates as a primary master device, and one or more SoundWire bus master devices 924 are configured to operate as slave master devices. When the slave devices 912, 914, 916, 918 are coupled to the common clock channel 906 and use the same clock signal, output pins on the application processor 902 are conserved. In one implementation, the primary master device may be configured to control the frame size such that each data channel 908, 928 carries frames with the same frame size. The primary master device can be configured to control bus frame start events, control SSP events, and control bank switching events. A slave master can configure its internal SoundWire bus clock rate to follow the primary master's bus clock rate.
For example, when using a common clock source 930, the primary master and the slave masters may generate synchronously divided SoundWire bus clocks. A slave master can use the same frame size as that used by the primary master. A slave master can use the frame start point controlled by the primary master. A slave master may use the same SSP event value as that used by the primary master. The primary master and the slave masters may generate corresponding bank switch commands upon a synchronized software request, or upon synchronized hardware initiation using an external trigger event provided by the primary master.
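These synchronization rules can be pictured as the slave master simply mirroring a small set of timing parameters owned by the primary master. The structure layout, field names, and helper functions in this C sketch are assumptions made for illustration; SoundWire does not define such an interface.

```c
/* Illustrative timing parameters shared between masters; field names are
 * assumptions for this sketch, not defined by the SoundWire specification. */
struct sw_master_timing {
    unsigned clock_divider; /* synchronous division of common clock source 930 */
    unsigned frame_size;    /* identical frame size on every data channel */
    unsigned frame_start;   /* frame start point controlled by the primary */
    unsigned ssp_value;     /* stream synchronization point event value */
};

extern void sw_master_apply_timing(int master_id, const struct sw_master_timing *t);
extern void sw_master_arm_bank_switch(int master_id); /* fires on BankSwTrig */

/* A slave master mirrors the primary master's timing parameters and arms
 * its bank switch on the shared hardware trigger, so that both masters
 * switch configuration banks in the same frame. */
static void sw_sync_slave_master(int slave_master_id,
                                 const struct sw_master_timing *primary)
{
    sw_master_apply_timing(slave_master_id, primary);
    sw_master_arm_bank_switch(slave_master_id);
}
```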
Additional description of certain aspects
FIG. 10 is a conceptual diagram illustrating a simplified example of a hardware implementation of a device 1000 employing processing circuitry 1002 that may be configured to perform one or more functions disclosed herein. According to aspects of the present disclosure, an element, or any portion of an element, or any combination of elements as disclosed herein may be implemented using the processing circuitry 1002. The processing circuitry 1002 may include one or more processors 1004 that are controlled by some combination of hardware and software modules. Examples of the processors 1004 include microprocessors, microcontrollers, digital signal processors (DSPs), ASICs, field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, sequencers, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. The one or more processors 1004 may include special purpose processors that perform specific functions, and that may be configured, augmented, or controlled by one of the software modules 1016. The one or more processors 1004 may be configured through a combination of software modules 1016 loaded during initialization, and further configured by loading or unloading one or more software modules 1016 during operation. In the illustrated example, the processing circuitry 1002 may be implemented with a bus architecture, represented generally by the bus 1010. The bus 1010 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuitry 1002 and the overall design constraints. The bus 1010 links together various circuits including the one or more processors 1004 and storage 1006. The storage 1006 may include memory devices and mass storage devices, and may be referred to herein as computer-readable media and/or processor-readable media. The bus 1010 may also link various other circuits such as timing sources, timers, peripherals, voltage regulators, and power management circuits. A bus interface 1008 may provide an interface between the bus 1010 and one or more line interface circuits 1012. A line interface circuit 1012 may be provided for each networking technology supported by the processing circuitry. In some examples, multiple networking technologies may share some or all of the circuitry or processing modules found in the line interface circuits 1012. Each line interface circuit 1012 provides a means for communicating with various other devices over a transmission medium. Depending upon the nature of the device, a user interface 1018 (e.g., keypad, display, speaker, microphone, joystick) may also be provided, and may be communicatively coupled to the bus 1010 directly or through the bus interface 1008. The one or more processors 1004 may be responsible for managing the bus 1010 and for general processing, which may include the execution of software stored in a computer-readable medium that may include the storage 1006. In this respect, the processing circuitry 1002, including the processor 1004, may be used to implement any of the methods, functions, and techniques disclosed herein. The storage 1006 may be used for storing data that is manipulated by the processor 1004 when executing software, and the software may be configured to implement any one of the methods disclosed herein. One or more processors 1004 in the processing circuitry 1002 may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, algorithms, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. The software may reside in computer-readable form in the storage 1006 or in an external computer-readable medium. The external computer-readable medium and/or storage 1006 may include a non-transitory computer-readable medium. A non-transitory computer-readable medium includes, by way of example, a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk (e.g., a compact disc (CD) or a digital versatile disc (DVD)), a smart card, a flash memory device (e.g., a "flash drive," a card, a stick, or a key drive), a random access memory (RAM), a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, a removable disk, and any other suitable medium for storing software and/or instructions that may be accessed and read by a computer. The computer-readable medium and/or storage 1006 may also include, by way of example, a carrier wave, a transmission line, and any other suitable medium for transmitting software and/or instructions that may be accessed and read by a computer. The computer-readable medium and/or storage 1006 may reside in the processing circuitry 1002, in the processor 1004, external to the processing circuitry 1002, or be distributed across multiple entities including the processing circuitry 1002. The computer-readable medium and/or storage 1006 may be embodied in a computer program product. By way of example, a computer program product may include a computer-readable medium in packaging materials. Those skilled in the art will recognize how best to implement the described functionality presented throughout this disclosure depending on the particular application and the overall design constraints imposed on the overall system. The storage 1006 may maintain software maintained and/or organized in loadable code segments, modules, applications, programs, etc., which may be referred to herein as software modules 1016. Each of the software modules 1016 may include instructions and data that, when installed or loaded on the processing circuitry 1002 and executed by the one or more processors 1004, contribute to a run-time image 1014 that controls the operation of the one or more processors 1004.
When executed, certain instructions may cause the processing circuitry 1002 to perform functions in accordance with certain methods, algorithms, and processes described herein. Some of the software modules 1016 may be loaded during initialization of the processing circuitry 1002, and these software modules 1016 may configure the processing circuitry 1002 to enable performance of the various functions disclosed herein. For example, some software modules 1016 may configure internal devices and/or logic circuits 1022 of the processor 1004, and may manage access to external devices such as the line interface circuits 1012, the bus interface 1008, the user interface 1018, timers, mathematical coprocessors, and so on. The software modules 1016 may include a control program and/or an operating system that interacts with interrupt handlers and device drivers, and that controls access to various resources provided by the processing circuitry 1002. The resources may include memory, processing time, access to the line interface circuits 1012, the user interface 1018, and so on. One or more processors 1004 of the processing circuitry 1002 may be multifunctional, whereby some of the software modules 1016 are loaded and configured to perform different functions or different instances of the same function. The one or more processors 1004 may additionally be adapted to manage background tasks initiated in response to inputs from, for example, the user interface 1018, the line interface circuits 1012, and device drivers. To support the performance of multiple functions, the one or more processors 1004 may be configured to provide a multitasking environment, whereby each of the multiple functions is implemented as a set of tasks serviced by the one or more processors 1004 as needed or desired. In one example, the multitasking environment may be implemented using a time-sharing program 1020 that passes control of a processor 1004 between different tasks, whereby each task returns control of the one or more processors 1004 to the time-sharing program 1020 upon completion of any outstanding operations and/or in response to an input such as an interrupt. When a task has control of the one or more processors 1004, the processing circuitry is effectively specialized for the purposes addressed by the function associated with the controlling task. The time-sharing program 1020 may include an operating system, a main loop that transfers control on a round-robin basis, a function that allocates control of the one or more processors 1004 in accordance with a prioritization of the functions, and/or an interrupt-driven main loop that responds to external events by providing control of the one or more processors 1004 to a handling function.
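As a concrete and deliberately minimal picture of the round-robin variant of such a time-sharing program, consider the following C sketch; the task table and task functions are hypothetical, and a real implementation would add the prioritization and interrupt-driven dispatch described above.

```c
#define NUM_TASKS 3

typedef void (*task_fn)(void);

/* Hypothetical task bodies; each completes its outstanding work and then
 * returns control to the main loop (cooperative multitasking). */
extern void audio_task(void);
extern void line_interface_task(void);
extern void housekeeping_task(void);

static const task_fn tasks[NUM_TASKS] = {
    audio_task, line_interface_task, housekeeping_task,
};

/* Round-robin main loop of a time-sharing program such as 1020: control
 * of the processor is passed to each task in turn, and each task returns
 * control to the loop when it has no more outstanding operations. */
static void time_sharing_main_loop(void)
{
    for (;;)
        for (int i = 0; i < NUM_TASKS; i++)
            tasks[i]();
}
```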
FIG. 11 is a flowchart 1100 of a method operational in one or more master devices coupled to a SoundWire bus. At block 1102, the first master device may provide a clock signal on a clock channel of the SoundWire bus to a first slave device and a second slave device coupled to the SoundWire bus. The first data channel may be coupled to a primary data terminal of the first master device, and the second data channel may be coupled to a secondary data terminal of the first master device. The first control information may be transmitted in a first frame directed to one or more slave devices coupled to a first conductor of the SoundWire bus. The second control information may be communicated in a second frame directed to one or more slave devices coupled to a second conductor of the SoundWire bus. At block 1104, the first master device may transmit first control information from the first master device to the first slave device on the first data channel of the SoundWire bus. At block 1106, the first master device may transmit second control information from the first master device to the second slave device on a second data channel of the SoundWire bus. The first control information may be different from the second control information and transmitted concurrently with the second control information. The first master device may communicate additional control information from the first master device to other slave devices on one or more other data lanes of the SoundWire bus. The first control information may be different from the additional control information and may be transmitted concurrently with the additional control information. In some examples, the first master device may be configured to drive the clock channel, the first data channel, and the second data channel of the SoundWire bus. The first master device may include SoundWire bus interface circuitry operable to drive three or more conductors of the SoundWire bus. The second slave device may include SoundWire bus interface circuitry configured to support a single data channel. In some examples, the first master device may send a ping command in the first data channel and the second data channel, and may enumerate multiple devices coupled to the SoundWire bus based on responses to the ping command received from the first slave device and the second slave device. The multiple devices can be enumerated by assigning a device number to each of the devices. Each device number may be unique to the data conductor coupling the corresponding device to the first master device. The plurality of devices may include at least twelve slave devices. The first master device may associate a field of a frame transmitted on the SoundWire bus with a number representing the data conductor to which the target of the frame is coupled. The first data channel may be a master data channel associated with the first master device, and the second data channel may be a master data channel associated with the second master device. The first master device and the second master device may be provided in an application processor or a codec. The first master device may include circuitry or modules operable to synchronize the frame timing of the second master device with the frame timing of the first master device. The first master device may include circuitry or modules operable to synchronize the SSP defined for the second master device with the SSP defined for the first master device. The first master device may include circuitry or a module operable to synchronize the timing of a bank switch signal transmitted by the second master device with the timing of a bank switch signal transmitted by the first master device. The bank switch signal transmitted by the second master device may include a broadcast write command.
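A skeletal C rendering of the method 1100 may help make the per-lane concurrency concrete. The frame-queuing helpers below are hypothetical abstractions of the master's hardware interface, not functions defined by SoundWire.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hardware-access stubs; SoundWire does not define these. */
extern void sw_drive_clock_channel(void);                 /* block 1102 */
extern void sw_queue_control_frame(int lane, const uint8_t *ctl, size_t len);
extern void sw_start_frame(void);  /* launches all queued lanes together */

/* Sketch of method 1100: one master drives the shared clock and sends
 * different control information concurrently, one payload per data lane. */
static void sw_send_per_lane_control(const uint8_t *ctl0, size_t len0,
                                     const uint8_t *ctl1, size_t len1)
{
    sw_drive_clock_channel();                /* block 1102 */
    sw_queue_control_frame(0, ctl0, len0);   /* block 1104: first slave  */
    sw_queue_control_frame(1, ctl1, len1);   /* block 1106: second slave */

    /* Both payloads go out in the same frame interval, so the first
     * control information is transmitted concurrently with the second. */
    sw_start_frame();
}
```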
FIG. 12 illustrates an example of a hardware implementation of an apparatus 1200 employing processing circuitry 1202. The processing circuitry typically has a processor 1216, which may include one or more of a microprocessor, a microcontroller, a digital signal processor, a sequencer, and a state machine. The processing circuitry 1202 may be implemented with a bus architecture, represented generally by the bus 1220. The bus 1220 may include any number of interconnecting buses and bridges depending on the specific application of the processing circuitry 1202 and the overall design constraints. The bus 1220 links together various circuits, including one or more processors and/or hardware modules represented by the processor 1216, the modules or circuits 1204, 1206, and 1208, a PHY 1212 that may be configured to communicate over the connectors or conductors of a multi-wire communication link 1214, and the computer-readable storage medium 1218. The bus 1220 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits. The processor 1216 is responsible for general processing, including the execution of software stored on the computer-readable storage medium 1218. The software, when executed by the processor 1216, causes the processing circuitry 1202 to perform the various functions described above for any particular apparatus. The computer-readable storage medium 1218 may also be used for storing data that is manipulated by the processor 1216 when executing software, including data decoded from symbols transmitted over the multi-wire communication link 1214, which may be configured to include data channels and a clock channel. The processing circuitry 1202 further includes at least one of the modules 1204, 1206, and 1208. Each of the modules 1204, 1206, and 1208 may be a software module running in the processor 1216, a software module resident/stored in the computer-readable storage medium 1218, one or more hardware modules coupled to the processor 1216, or some combination thereof. The modules 1204, 1206, and/or 1208 may include microcontroller instructions, state machine configuration parameters, or some combination thereof. In one configuration, the multi-wire communication link 1214 operates according to the SoundWire protocol. The apparatus 1200 may include a module or circuit 1204 configured to provide a clock signal on a clock channel of the multi-wire communication link 1214 to a first slave device and a second slave device that are coupled to the multi-wire communication link 1214 and that operate in accordance with the SoundWire protocol. The apparatus 1200 may include a module or circuit 1206 configured to transmit first control information from a first master device to the first slave device on a first data channel of the SoundWire bus, and to transmit second control information from the first master device to the second slave device on a second data channel of the SoundWire bus. The apparatus 1200 may include a module or circuit 1208 adapted to configure the first control information and the second control information. The first control information may be different from the second control information, and the first control information may be transmitted concurrently with the second control information. It is to be understood that the specific order or hierarchy of steps in the disclosed processes is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in these processes may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein.
Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase "means for."
For fabricating a field effect transistor having dual gates, on a buried insulating layer in SOI (semiconductor on insulator) technology, a first layer of first semiconductor material is deposited on the buried insulating material. The first layer of first semiconductor material is patterned to form a first semiconductor island having a first top surface and a second semiconductor island having a second top surface. The first and second semiconductor islands are comprised of the first semiconductor material. An insulating material is deposited to surround the first and second semiconductor islands, and the insulating material is polished down until the first and second top surfaces of the first and second semiconductor islands are exposed such that sidewalls of the first and second semiconductor islands are surrounded by the insulating material. A gate dopant is implanted into the second semiconductor island. A layer of back gate dielectric material is deposited on the first and second top surfaces of the first and second semiconductor islands. An opening is patterned through the layer of back gate dielectric material above the first semiconductor island such that a bottom wall of the opening is formed by the first top surface of the first semiconductor island. A second layer of second semiconductor material is grown from the exposed first top surface of the first semiconductor island and onto the layer of back gate dielectric material. A front gate dielectric is formed over a portion of the second layer of second semiconductor material disposed over the second semiconductor island. A front gate electrode is formed over the front gate dielectric. The second semiconductor island forms a back gate electrode, and a portion of the layer of back gate dielectric material under the front gate dielectric forms a back gate dielectric.
I claim: 1. A method for fabricating a field effect transistor having dual gates, on a buried insulating layer in SOI (semiconductor on insulator) technology, the method including the steps of:A. depositing a first layer of first semiconductor material on said buried insulating material; B. patterning said first layer of first semiconductor material to form a first semiconductor island having a first top surface and a second semiconductor island having a second top surface, wherein said first and second semiconductor islands are comprised of said first semiconductor material; C. depositing an insulating material to surround said first and second semiconductor islands; D. polishing down said insulating material until said first and second top surfaces of said first and second semiconductor islands are exposed, and such that sidewalls of said first and second semiconductor islands are surrounded by said insulating material; E. implanting a gate dopant into said second semiconductor island; F. depositing a layer of back gate dielectric material on said first and second top surfaces of said first and second semiconductor islands; G. patterning an opening through said layer of back gate dielectric material above said first semiconductor island such that a bottom wall of said opening is formed by said first top surface of said first semiconductor island; H. growing a second layer of second semiconductor material from said exposed first top surface of said first semiconductor island and onto said layer of back gate dielectric material; I. forming a front gate dielectric over a portion of said second layer of second semiconductor material disposed over said second semiconductor island; and J. forming a front gate electrode over said front gate dielectric, wherein said second semiconductor island forms a back gate electrode, and wherein a portion of said layer of back gate dielectric material under said front gate dielectric forms a back gate dielectric. 2. The method of claim 1, further including the step of:implanting a drain and source dopant into exposed portions of said second layer of second semiconductor material to form a drain region and a source region of said field effect transistor; and forming spacers comprised of silicon dioxide (SiO2) on sidewalls of said front gate dielectric and said front gate electrode. 3. The method of claim 1, wherein said buried insulating material is comprised of silicon dioxide (SiO2) formed on a silicon substrate, and wherein said first layer of first semiconductor material is comprised of silicon having a thickness in a range of from about 500 angstroms to about 1000 angstroms.4. The method of claim 3, wherein said second layer of second semiconductor material is comprised of silicon epitaxially grown from said exposed first top surface of said first semiconductor island.5. The method of claim 1, wherein said back dielectric material has a dielectric constant that is higher than that of silicon dioxide (SiO2).6. The method of claim 5, wherein said back dielectric material is comprised of silicon nitride (Si3N4).7. The method of claim 6, wherein a length of said front gate electrode is in a range of from about 20 nanometers to about 100 nanometers, and wherein said layer of back gate dielectric material has a thickness in a range of from about 10 angstroms to about 30 angstroms.8. The method of claim 1, wherein said insulating material deposited in said step C is comprised of silicon dioxide (SiO2).9. 
The method of claim 1, wherein said first semiconductor island is covered with a masking structure comprised of photoresist material during implantation of said gate dopant into said second semiconductor island in said step E.10. The method of claim 1, wherein said gate dopant implanted into said second semiconductor island during said step E is comprised of an N-type dopant for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor).11. The method of claim 1, wherein said gate dopant implanted into said second semiconductor island during said step E is comprised of a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor).12. The method of claim 1, wherein said front gate dielectric is comprised of a dielectric material having a dielectric constant that is higher than that of silicon dioxide (SiO2), and wherein said front gate electrode is comprised of polysilicon.13. A method for fabricating a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) having dual gates, on a buried insulating layer comprised of silicon dioxide (SiO2) formed on a silicon substrate, in SOI (semiconductor on insulator) technology, the method including the steps of:A. depositing a first layer of first semiconductor material on said buried insulating material; wherein said first layer of first semiconductor material is comprised of silicon having a thickness in a range of from about 500 angstroms to about 1000 angstroms; B. patterning said first layer of first semiconductor material to form a first semiconductor island having a first top surface and a second semiconductor island having a second top surface, wherein said first and second semiconductor islands are comprised of said first semiconductor material; C. depositing an insulating material comprised of silicon dioxide (SiO2) to surround said first and second semiconductor islands; D. polishing down said insulating material until said first and second top surfaces of said first and second semiconductor islands are exposed, and such that sidewalls of said first and second semiconductor islands are surrounded by said insulating material; E. implanting a gate dopant into said second semiconductor island; wherein said first semiconductor island is covered with a masking structure comprised of photoresist material during implantation of said gate dopant into said second semiconductor island; and wherein said gate dopant implanted into said second semiconductor island is comprised of an N-type dopant for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor); or wherein said gate dopant implanted into said second semiconductor island is comprised of a P-type dopant for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor); F. depositing a layer of back gate dielectric material on said first and second top surfaces of said first and second semiconductor islands; wherein said back dielectric material is comprised of silicon nitride (Si3N4) having a thickness in a range of from about 10 angstroms to about 30 angstroms; G. patterning an opening through said layer of back gate dielectric material above said first semiconductor island such that a bottom wall of said opening is formed by said first top surface of said first semiconductor island; H. 
growing a second layer of second semiconductor material from said exposed first top surface of said first semiconductor island and onto said layer of back gate dielectric material; wherein said second layer of second semiconductor material is comprised of silicon epitaxially grown from said exposed first top surface of said first semiconductor island; I. forming a front gate dielectric over a portion of said second layer of second semiconductor material disposed over said second semiconductor island; J. forming a front gate electrode over said front gate dielectric, wherein said second semiconductor island forms a back gate electrode, and wherein a portion of said layer of back gate dielectric material under said front gate dielectric forms a back gate dielectric; and wherein a length of said front gate electrode is in a range of from about 20 nanometers to about 100 nanometers; and wherein said front gate dielectric is comprised of a dielectric material having a dielectric constant that is higher than that of silicon dioxide (SiO2), and wherein said front gate electrode is comprised of polysilicon; K. implanting a drain and source dopant into exposed portions of said second layer of second semiconductor material to form a drain region and a source region of said MOSFET; and L. forming spacers comprised of silicon dioxide (SiO2) on sidewalls of said front gate dielectric and said front gate electrode.
TECHNICAL FIELD
The present invention relates generally to fabrication of field effect transistors having scaled-down dimensions, and more particularly, to fabrication of a field effect transistor having dual gates in SOI (semiconductor on insulator) technology, for minimizing short-channel effects in the field effect transistor.
BACKGROUND OF THE INVENTION
Referring to FIG. 1, a common component of a monolithic IC is a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) 100 which is fabricated within a semiconductor substrate 102. The scaled down MOSFET 100 having submicron or nanometer dimensions includes a drain extension junction 104 and a source extension junction 106 formed within an active device area 126 of the semiconductor substrate 102. The drain extension junction 104 and the source extension junction 106 are shallow junctions to minimize short-channel effects in the MOSFET 100 having submicron or nanometer dimensions, as known to one of ordinary skill in the art of integrated circuit fabrication. The MOSFET 100 further includes a drain contact junction 108 with a drain silicide 110 for providing contact to the drain of the MOSFET 100 and includes a source contact junction 112 with a source silicide 114 for providing contact to the source of the MOSFET 100. The drain contact junction 108 and the source contact junction 112 are fabricated as deeper junctions such that a relatively large size of the drain silicide 110 and the source silicide 114 respectively may be fabricated therein to provide low resistance contact to the drain and the source respectively of the MOSFET 100. The MOSFET 100 further includes a gate dielectric 116 and a gate electrode 118 which may be comprised of polysilicon. A gate silicide 120 is formed on the polysilicon gate electrode 118 for providing contact to the gate of the MOSFET 100. The MOSFET 100 is electrically isolated from other integrated circuit devices within the semiconductor substrate 102 by shallow trench isolation structures 121. The shallow trench isolation structures 121 define the active device area 126, within the semiconductor substrate 102, where a MOSFET is fabricated therein. The MOSFET 100 also includes a spacer 122 disposed on the sidewalls of the gate electrode 118 and the gate dielectric 116. When the spacer 122 is comprised of silicon nitride (Si3N4), then a spacer liner oxide 124 is deposited as a buffer layer between the spacer 122 and the sidewalls of the gate electrode 118 and the gate dielectric 116. A long-recognized important objective in the constant advancement of monolithic IC (Integrated Circuit) technology is the scaling-down of IC dimensions. Such scaling-down of IC dimensions reduces area capacitance and is critical to obtaining higher speed performance of integrated circuits. Moreover, reducing the area of an IC die leads to higher yield in IC fabrication. Such advantages are a driving force to constantly scale down IC dimensions. As the dimensions of the MOSFET 100 are scaled down further, the junction capacitances formed by the drain and source extension junctions 104 and 106 and by the drain and source contact junctions 108 and 112 may limit the speed performance of the MOSFET 100. Thus, referring to FIG. 2, a MOSFET 150 is formed with SOI (semiconductor on insulator) technology. In that case, a layer of buried insulating material 152 is formed on the semiconductor substrate 102, and a layer of semiconductor material 154 is formed on the layer of buried insulating material 152.
A drain 156 and a source 158 of the MOSFET 150 are formed in the layer of semiconductor material 154. Elements such as the gate dielectric 116 and the gate electrode 118 having the same reference number in FIGS. 1 and 2 refer to elements having similar structure and function. Processes for formation of such elements 116, 118, 152, 154, 156, and 158 of the MOSFET 150 are known to one of ordinary skill in the art of integrated circuit fabrication. In FIG. 2, the drain 156 and the source 158 are formed to extend down to contact the layer of buried insulating material 152. Thus, because the drain 156, the source 158, and a channel region 160 of the MOSFET 150 do not form a junction with the semiconductor substrate 102, junction capacitance is minimized for the MOSFET 150 to enhance the speed performance of the MOSFET 150 formed with SOI (semiconductor on insulator) technology. In addition, referring to FIGS. 1 and 2, as the dimensions of the MOSFETs 100 and 150 are scaled down further, the occurrence of undesired short-channel effects increases, as known to one of ordinary skill in the art of integrated circuit fabrication. With short-channel effects, the threshold voltage of the MOSFET changes such that the electrical characteristics of such a MOSFET become uncontrollable, as known to one of ordinary skill in the art of integrated circuit fabrication. In the prior art MOSFETs 100 and 150 of FIGS. 1 and 2, the gate dielectric 116 and the gate electrode 118 are formed on one surface of the channel region of the MOSFET. However, for controlling the electrical characteristics of the MOSFET, forming a gate dielectric and a gate electrode on a plurality of surfaces of the channel region of the MOSFET is desired to minimize undesired short channel effects.
SUMMARY OF THE INVENTION
Accordingly, in a general aspect of the present invention, a field effect transistor is fabricated to have dual gates on two surfaces of the channel region of the field effect transistor formed in SOI (semiconductor on insulator) technology, to minimize undesired short channel effects. In one embodiment of the present invention, in a method for fabricating a field effect transistor having dual gates, on a buried insulating layer in SOI (semiconductor on insulator) technology, a first layer of first semiconductor material is deposited on the buried insulating material. The first layer of first semiconductor material is patterned to form a first semiconductor island having a first top surface and a second semiconductor island having a second top surface. The first and second semiconductor islands are comprised of the first semiconductor material. An insulating material is deposited to surround the first and second semiconductor islands, and the insulating material is polished down until the first and second top surfaces of the first and second semiconductor islands are exposed such that sidewalls of the first and second semiconductor islands are surrounded by the insulating material. In addition, a gate dopant is implanted into the second semiconductor island. A layer of back gate dielectric material is deposited on the first and second top surfaces of the first and second semiconductor islands. An opening is patterned through the layer of back gate dielectric material above the first semiconductor island such that a bottom wall of the opening is formed by the first top surface of the first semiconductor island.
A second layer of second semiconductor material is grown from the exposed first top surface of the first semiconductor island and onto the layer of back gate dielectric material. A front gate dielectric is formed over a portion of the second layer of second semiconductor material disposed over the second semiconductor island. A front gate electrode is formed over the front gate dielectric. The second semiconductor island forms a back gate electrode, and a portion of the layer of back gate dielectric material under the front gate dielectric forms a back gate dielectric. The present invention may be used to particular advantage when the first semiconductor material forming the first and second semiconductor islands is comprised of silicon and when the second layer of second semiconductor material is silicon epitaxially grown from the top surface of the first semiconductor island through the opening in the layer of back gate dielectric material. In this manner, the back gate dielectric and the back gate electrode are formed on a bottom surface of the channel region of the field effect transistor, and the front gate dielectric and the front gate electrode are formed on a top surface of the channel region of the field effect transistor. With formation of such gate dielectrics and gate electrodes on a plurality of surfaces of the channel region of the field effect transistor, the electrical characteristics of the field effect transistor are better controlled to minimize undesired short channel effects. In addition, because the field effect transistor is formed in SOI (semiconductor on insulator) technology, junction capacitance is minimized to enhance the speed performance of the field effect transistor. These and other features and advantages of the present invention will be better understood by considering the following detailed description of the invention which is presented with the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a cross-sectional view of a conventional MOSFET (Metal Oxide Semiconductor Field Effect Transistor) fabricated within a semiconductor substrate, without dual gate dielectrics and gate electrodes formed on a plurality of surfaces of the channel region, according to the prior art; FIG. 2 shows a cross-sectional view of a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) fabricated with SOI (semiconductor on insulator) technology for minimizing junction capacitance, without dual gate dielectrics and gate electrodes formed on a plurality of surfaces of the channel region, according to the prior art; and FIGS. 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 show cross-sectional views for illustrating the steps for fabricating a field effect transistor to have dual gates on two surfaces of the channel region of the field effect transistor formed in SOI (semiconductor on insulator) technology, to minimize undesired short channel effects according to an embodiment of the present invention. The figures referred to herein are drawn for clarity of illustration and are not necessarily drawn to scale. Elements having the same reference number in FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, and 14 refer to elements having similar structure and function.
DETAILED DESCRIPTION
In the cross-sectional view of FIG. 3, for fabricating a MOSFET (Metal Oxide Semiconductor Field Effect Transistor) with SOI (semiconductor on insulator) technology, a layer of buried insulating material 204 is deposited on a semiconductor substrate 202.
In one embodiment of the present invention, the layer of buried insulating material 204 is comprised of silicon dioxide (SiO2) 204 deposited on the semiconductor substrate 202 comprised of silicon. Processes for deposition of the layer of buried insulating material 204 on the semiconductor substrate 202 are known to one of ordinary skill in the art of integrated circuit fabrication.Further referring to FIG. 3, a first layer of first semiconductor material 206 is deposited on the layer of buried insulating material 204. In one embodiment of the present invention, the first layer of first semiconductor material 206 is comprised of silicon having a thickness in a range of from about 500 angstroms to about 1000 angstroms. Processes for deposition of the first layer of first semiconductor material 206 on the layer of buried insulating material 204 are known to one of ordinary skill in the art of integrated circuit fabrication.Referring to FIG. 4, a first masking structure 208 and a second masking structure 210 are formed on the first layer of first semiconductor material 206. The first and second masking structures 208 and 210 are comprised of photoresist material according to one embodiment of the present invention. Processes for patterning photoresist material to form the first and second masking structures 208 and 210 are known to one of ordinary skill in the art of integrated circuit fabrication.Referring to FIG. 5, any exposed regions of the first layer of first semiconductor material 206 not under the first and second masking structures 208 and 210 are etched away to form a first semiconductor island 212 and a second semiconductor island 214. The first semiconductor island 212 is comprised of the first semiconductor material 206 remaining under the first masking structure 208, and the second semiconductor island 214 is comprised of the first semiconductor material 206 remaining under the second masking structure 210. Processes for etching away the exposed regions of the first layer of first semiconductor material 206 which is comprised of silicon for example are known to one of ordinary skill in the art of integrated circuit fabrication.Referring to FIG. 6, the first and second masking structures 208 and 210 are etched away from a first top surface 216 of the first semiconductor island 212 and from a second top surface 218 of the second semiconductor island 214. Processes for etching away the first and second masking structures 208 and 210 which are comprised of photoresist material for example are known to one of ordinary skill in the art of integrated circuit fabrication.Further referring to FIG. 6, an insulating material 220 is conformally deposited to surround the first and second semiconductor islands 212 and 214. The insulating material 220 is conformally deposited to surround the top surfaces 216 and 218 and the sidewalls of the first and second semiconductor islands 212 and 214. The insulating material 220 is comprised of silicon dioxide (SiO2) having a thickness in a range of from about 2,000 angstroms to about 3,000 angstroms according to one example embodiment of the present invention. Processes for conformally depositing such an insulating material 220 are known to one of ordinary skill in the art of integrated circuit fabrication.Referring to FIG. 7, the insulating material 220 is polished down until the top surfaces 216 and 218 of the first and second semiconductor islands 212 and 214 are exposed. 
Processes such as CMP (chemical mechanical polishing) processes for polishing down the insulating material 220 are known to one of ordinary skill in the art of integrated circuit fabrication. Referring to FIG. 8, a gate dopant is implanted into the second semiconductor island 214 while a masking structure 221 is patterned to cover the first semiconductor island 212. The masking structure 221 is comprised of photoresist material according to one embodiment of the present invention, and processes for patterning the masking structure 221 are known to one of ordinary skill in the art of integrated circuit fabrication. The masking structure 221 blocks the gate dopant from being implanted into the first semiconductor island 212. Referring to FIG. 8, the gate dopant is an N-type dopant, such as phosphorous or arsenic for example, for fabrication of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor). Alternatively, the gate dopant is a P-type dopant, such as boron for example, for fabrication of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor). Processes for implantation of such a gate dopant are known to one of ordinary skill in the art of integrated circuit fabrication. Referring to FIG. 9, a layer of back gate dielectric material 222 is deposited on the exposed top surfaces 216 and 218 of the first and second semiconductor islands 212 and 214. The layer of back gate dielectric material 222 is comprised of a dielectric material having a dielectric constant that is higher than that of silicon dioxide (SiO2). In one embodiment of the present invention, the layer of back gate dielectric material 222 is comprised of silicon nitride (Si3N4). Because the layer of back gate dielectric material 222 has a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2), the layer of back gate dielectric material 222 may be formed with a greater thickness than an electrically equivalent layer of silicon dioxide (SiO2), to minimize undesired tunneling current through the layer of back gate dielectric material 222. Processes for depositing such a layer of back gate dielectric material 222 are known to one of ordinary skill in the art of integrated circuit fabrication.
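That thickness advantage can be quantified with the standard equivalent oxide thickness (EOT) relation, a well-known result added here only for illustration and not stated in the source:

EOT = t_high-k × (ε_SiO2 / ε_high-k)

Taking ε ≈ 3.9 for silicon dioxide and ε ≈ 7.5 for silicon nitride (a typical textbook value), a Si3N4 layer of about 20 angstroms is electrically equivalent to roughly 20 × (3.9 / 7.5) ≈ 10 angstroms of SiO2, while its nearly doubled physical thickness suppresses the direct tunneling current.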
Referring to FIG. 10, a layer of masking material 224, such as photoresist material for example, is patterned to form an opening 226 through the layer of back gate dielectric material 222. The opening 226 through the layer of back gate dielectric material 222 is disposed over the first semiconductor island 212 such that the top surface 216 of the first semiconductor island 212 forms the bottom wall of the opening 226. Processes for patterning the layer of masking material 224, which is comprised of photoresist material for example, and for etching the opening 226 through the layer of back gate dielectric material 222 are known to one of ordinary skill in the art of integrated circuit fabrication. Referring to FIG. 11, a second layer of second semiconductor material 228 is grown from the top surface 216 of the first semiconductor island 212, through the opening 226 of the layer of back gate dielectric material 222, and onto the layer of back gate dielectric material 222. In one embodiment of the present invention, the second layer of second semiconductor material 228 is comprised of silicon that is epitaxially grown from the top surface 216 of the first semiconductor island 212 that is comprised of silicon. Processes for epitaxially growing such a second layer of second semiconductor material 228 from the top surface 216 of the first semiconductor island 212 are known to one of ordinary skill in the art of integrated circuit fabrication. Referring to FIG. 12, a front gate dielectric 230 is formed on the second layer of second semiconductor material 228 over the second semiconductor island 214. A front gate electrode 232 is formed on the front gate dielectric 230. In one embodiment of the present invention, the front gate dielectric 230 is comprised of a dielectric material, such as a metal oxide, having a dielectric constant that is higher than that of silicon dioxide (SiO2). Because the front gate dielectric 230 is comprised of a dielectric material having a dielectric constant that is higher than the dielectric constant of silicon dioxide (SiO2), the front gate dielectric 230 may be formed with a greater thickness than an electrically equivalent layer of silicon dioxide (SiO2), to minimize undesired tunneling current through the front gate dielectric 230. Processes for forming such a front gate dielectric 230 are known to one of ordinary skill in the art of integrated circuit fabrication. In one embodiment of the present invention, the front gate electrode 232 formed on the front gate dielectric 230 is comprised of polysilicon. In an example embodiment of the present invention, the length 233 of the front gate electrode 232 is in a range of from about 20 nanometers to about 100 nanometers. In that case, in the example embodiment of the present invention, the layer of back gate dielectric material 222 has a thickness in a range of from about 10 angstroms to about 30 angstroms. Processes for formation of such a front gate electrode 232 formed on the front gate dielectric 230 are known to one of ordinary skill in the art of integrated circuit fabrication. Referring to FIG. 13, a drain and source dopant is implanted into exposed regions of the second layer of second semiconductor material 228 to form a drain region 234 and a source region 236 that extend down to contact the layer of back gate dielectric material 222. The channel region of the MOSFET is the portion of the second layer of second semiconductor material 228 disposed under the front gate dielectric 230 between the drain region 234 and the source region 236. When the front gate electrode 232 is a semiconductor material, such as polysilicon for example, the drain and source dopant is also implanted into the front gate electrode 232. The drain and source dopant is an N-type dopant for forming the drain region 234 and the source region 236 of an NMOSFET (N-channel Metal Oxide Semiconductor Field Effect Transistor). Alternatively, the drain and source dopant is a P-type dopant for forming the drain region 234 and the source region 236 of a PMOSFET (P-channel Metal Oxide Semiconductor Field Effect Transistor). Processes for implantation of such a dopant are known to one of ordinary skill in the art of integrated circuit fabrication. After implantation of the drain and source dopant, a thermal anneal is performed to activate the drain and source dopant in the drain region 234, the source region 236, and the front gate electrode 232, and the gate dopant in the second semiconductor island 214. Thermal anneal processes for activating dopant are known to one of ordinary skill in the art of integrated circuit fabrication.
Referring to FIG. 14, spacers 238 are formed on the sidewalls of the front gate dielectric 230 and the front gate electrode 232. The spacers 238 are comprised of silicon dioxide (SiO2) according to one embodiment of the present invention, and processes for formation of such spacers 238 are known to one of ordinary skill in the art of integrated circuit fabrication. In this manner, the second semiconductor island 214 forms a back gate electrode, and a portion of the layer of back gate dielectric material 222 under the front gate dielectric 230 forms a back gate dielectric of the MOSFET. The back gate dielectric and the back gate electrode 214 are formed on a bottom surface of the channel region of the MOSFET, and the front gate dielectric 230 and the front gate electrode 232 are formed on a top surface of the channel region of the MOSFET. With formation of such gate dielectrics and gate electrodes on a plurality of surfaces of the channel region of the MOSFET, the electrical characteristics of the MOSFET are better controlled to minimize undesired short channel effects. In addition, because the MOSFET is formed in SOI (semiconductor on insulator) technology, junction capacitance is minimized to enhance the speed performance of the MOSFET. The foregoing is by way of example only and is not intended to be limiting. For example, any specified material or any specified dimension of any structure described herein is by way of example only. In addition, as will be understood by those skilled in the art, the structures described herein may be made or used in the same way regardless of their position and orientation. Accordingly, it is to be understood that terms and phrases such as "over," "sidewall," "below," "top," "bottom," and "on" as used herein refer to the relative location and orientation of various portions of the structures with respect to one another, and are not intended to suggest that any particular absolute orientation with respect to external objects is necessary or required. The present invention is limited only as defined in the following claims and equivalents thereof.
Systems and methods relate to a mixed-width single instruction multiple data (SIMD) instruction which has at least a source vector operand comprising data elements of a first bit-width and a destination vector operand comprising data elements of a second bit-width, wherein the second bit-width is either half of or twice the first bit-width. Correspondingly, one of the source or destination vector operands is expressed as a pair of registers, a first register and a second register. The other vector operand is expressed as a single register. Data elements of the first register correspond to even-numbered data elements of the other vector operand expressed as a single register, and data elements of the second register correspond to odd-numbered data elements of the other vector operand expressed as a single register.
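Before the claims, a scalar reference model may help fix the register pairing in mind. The C sketch below illustrates the widening case using the square function named in claim 2; the element count and the 16/32-bit widths are assumptions made for this example, and no particular instruction set is implied.

```c
#include <stdint.h>

#define VLEN 8   /* source elements per vector register (illustrative) */

/* Widening squares: 16-bit sources, 32-bit destinations held in a register
 * pair. Even-numbered source elements produce the first destination
 * register (dst_lo); odd-numbered source elements produce the second
 * (dst_hi), matching the pairing described in the abstract. */
static void simd_square_widen(const int16_t src[VLEN],
                              int32_t dst_lo[VLEN / 2],
                              int32_t dst_hi[VLEN / 2])
{
    for (int i = 0; i < VLEN / 2; i++) {
        dst_lo[i] = (int32_t)src[2 * i]     * src[2 * i];     /* even lanes */
        dst_hi[i] = (int32_t)src[2 * i + 1] * src[2 * i + 1]; /* odd lanes  */
    }
}
```

Note that the 32-bit result for source lane 2i occupies the physical lanes 2i and 2i+1 of the first register, so each result stays in its own SIMD lane or an adjacent lane, as claims 3 and 12 recite; no cross-lane shuffle is needed.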
CLAIMS
WHAT IS CLAIMED IS:
1. A method of performing a mixed-width single instruction multiple data (SIMD) operation, the method comprising: receiving, by a processor, a SIMD instruction comprising: at least a first source vector operand comprising a first set of source data elements of a first bit-width; and at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is twice the first bit-width, wherein the destination vector operand comprises a pair of registers including a first register comprising a first subset of the destination data elements and a second register comprising a second subset of the destination data elements; and based on a sequential order of the first set of source data elements, executing the SIMD instruction in the processor, comprising: generating the first subset of destination data elements in the first register from even-numbered source data elements of the first set; and generating the second subset of destination data elements in the second register from odd-numbered source data elements of the first set. 2. The method of claim 1, wherein the SIMD instruction is one of a square function, left-shift function, increment, or addition by a constant value of the source data elements of the first set. 3. The method of claim 1, wherein the first set of source data elements are in respective SIMD lanes, and generating from each one of the source data elements, a destination data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane. 4. The method of claim 1, wherein the SIMD instruction further comprises a second source vector operand comprising a second set of source data elements of the first bit-width, and the sequential order of the first set of source data elements corresponds to a sequential order of the second set of source data elements, wherein executing the SIMD instruction in the processor comprises: generating the first subset of destination data elements in the first register from even-numbered source data elements of the first set and even-numbered source data elements of the second set; and generating the second subset of destination data elements in the second register from odd-numbered source data elements of the first set and odd-numbered source data elements of the second set. 5. The method of claim 4, wherein the SIMD instruction is a multiplication or addition of the source data elements of the first set with corresponding source data elements of the second set. 6. The method of claim 4, wherein the first set of source data elements and second set of source data elements are in respective SIMD lanes, and generating from each one of the source data elements of the first set and corresponding one of the source data elements of the second set, a destination data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane. 7.
7. A method of performing a mixed-width single instruction multiple data (SIMD) operation, the method comprising:
receiving, by a processor, a SIMD instruction comprising:
at least a source vector operand comprising source data elements of a first bit-width; and
at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is half of the first bit-width,
wherein the source vector operand comprises a pair of registers including a first register comprising a first subset of the source data elements and a second register comprising a second subset of the source data elements; and
based on a sequential order of the destination data elements, executing the SIMD instruction in the processor, comprising:
generating even-numbered destination data elements from the corresponding first subset of source data elements in the first register; and
generating odd-numbered destination data elements from the corresponding second subset of source data elements in the second register.
8. The method of claim 7, wherein the SIMD instruction is a right-shift function of the source data elements.
9. The method of claim 7, wherein the destination data elements are in respective SIMD lanes, and wherein executing the SIMD instruction comprises generating each one of the destination data elements from a source data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane.
10. A non-transitory computer-readable storage medium comprising instructions executable by a processor, which when executed by the processor cause the processor to perform a mixed-width single instruction multiple data (SIMD) operation, the non-transitory computer-readable storage medium comprising:
a SIMD instruction comprising:
at least a first source vector operand comprising a first set of source data elements of a first bit-width; and
at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is twice the first bit-width,
wherein the destination vector operand comprises a pair of registers including a first register comprising a first subset of the destination data elements and a second register comprising a second subset of the destination data elements; and
based on a sequential order of the first set of source data elements:
code for generating the first subset of destination data elements in the first register from even-numbered source data elements of the first set; and
code for generating the second subset of destination data elements in the second register from odd-numbered source data elements of the first set.
11. The non-transitory computer-readable storage medium of claim 10, wherein the SIMD instruction is one of a square function, left-shift function, increment, or addition by a constant value of the source data elements of the first set.
12. The non-transitory computer-readable storage medium of claim 10, wherein the first set of source data elements are in respective SIMD lanes, and comprising code for generating, from each one of the source data elements, a destination data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane.
13. The non-transitory computer-readable storage medium of claim 10, wherein the SIMD instruction further comprises a second source vector operand comprising a second set of source data elements of the first bit-width, and the sequential order of the first set of source data elements corresponds to a sequential order of the second set of source data elements, the non-transitory computer-readable storage medium comprising:
code for generating the first subset of destination data elements in the first register from even-numbered source data elements of the first set and even-numbered source data elements of the second set; and
code for generating the second subset of destination data elements in the second register from odd-numbered source data elements of the first set and odd-numbered source data elements of the second set.
14. The non-transitory computer-readable storage medium of claim 13, wherein the SIMD instruction is a multiplication or addition of the source data elements of the first set with corresponding source data elements of the second set.
15. The non-transitory computer-readable storage medium of claim 13, wherein the first set of source data elements and second set of source data elements are in respective SIMD lanes, comprising code for generating, from each one of the source data elements of the first set and a corresponding one of the source data elements of the second set, a destination data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane.
16. A non-transitory computer-readable storage medium comprising instructions executable by a processor, which when executed by the processor cause the processor to perform a mixed-width single instruction multiple data (SIMD) operation, the non-transitory computer-readable storage medium comprising:
a SIMD instruction comprising:
at least a source vector operand comprising source data elements of a first bit-width; and
at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is half of the first bit-width,
wherein the source vector operand comprises a pair of registers including a first register comprising a first subset of the source data elements and a second register comprising a second subset of the source data elements; and
based on a sequential order of the destination data elements:
code for generating even-numbered destination data elements from the corresponding first subset of source data elements in the first register; and
code for generating odd-numbered destination data elements from the corresponding second subset of source data elements in the second register.
17. The non-transitory computer-readable storage medium of claim 16, wherein the SIMD instruction is a right-shift function of the source data elements.
18. The non-transitory computer-readable storage medium of claim 16, wherein the destination data elements are in respective SIMD lanes, and comprising code for generating each one of the destination data elements from a source data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane.
MIXED-WIDTH SIMD OPERATIONS HAVING EVEN-ELEMENT AND ODD-ELEMENT OPERATIONS USING REGISTER PAIR FOR WIDE DATA ELEMENTS
Field of Disclosure
[0001] Aspects of this disclosure pertain to operations involving two or more vectors where data elements of at least one vector are of a different bit-width than data elements of at least one other vector. Such operations are referred to as mixed-width operations. More specifically, some aspects relate to mixed-width single instruction multiple data (SIMD) operations involving at least a first vector operand and a second vector operand, where at least one of the first or second vector operands has data elements which may be stored in even or odd register pairs.
Background
[0002] Single instruction multiple data (SIMD) instructions may be used in processing systems for exploiting data parallelism. Data parallelism exists when a same or common task needs to be performed on two or more data elements of a data vector, for example. Rather than use multiple instructions, the common task may be performed on the two or more data elements in parallel by using a single SIMD instruction which defines the same instruction to be performed on multiple data elements in corresponding multiple SIMD lanes.
[0003] SIMD instructions may include one or more vector operands such as source and destination vector operands. Each vector operand includes two or more data elements. For SIMD instructions, all data elements belonging to the same vector operand may generally be of the same bit-width. However, some SIMD instructions may specify mixed-width operands, where data elements of a first vector operand may be of a first bit-width and data elements of a second vector operand may be of a second bit-width, where the first and second bit-widths differ from each other. Execution of SIMD instructions with mixed-width operands may involve several challenges.
[0004] FIGS. 1A-C illustrate examples of challenges involved in conventional implementations for executing SIMD instructions with mixed-width operands. With reference to FIG. 1A, a first conventional implementation for executing SIMD instruction 100 is illustrated. It is assumed that SIMD instruction 100 may be executed by a conventional processor (not shown) which supports a 64-bit instruction set architecture (ISA). This means that instructions such as SIMD instruction 100 may specify operands with bit-widths up to 64 bits. The 64-bit operands may be specified in terms of 64-bit registers or a pair of 32-bit registers.
[0005] The object of SIMD instruction 100 is to execute the same instruction on each data element of source operand 102. Source operand 102 is a 64-bit vector comprising eight 8-bit data elements labeled 0-7. Source operand 102 may be stored in a single 64-bit register or a pair of 32-bit registers. The same instruction or common operation to be executed on each of the eight data elements 0-7 may be, for example, multiplication, a square function, a left-shift function, an increment function, addition (e.g., with a constant value or immediate fields in the instruction, or with values provided by another vector operand), etc., the result of which may consume more than 8 bits, and up to 16 bits of storage for each of the eight resulting data elements.
This means that the result of SIMD instruction 100 may consume twice the storage space that source operand 102 consumes, i.e., two 64-bit registers or two pairs of 32-bit registers.
[0006] Since the conventional processor configured to implement SIMD instruction 100 does not include instructions which specify operands of bit-widths greater than 64 bits, SIMD instruction 100 may be divided into two component SIMD instructions 100X and 100Y. SIMD instruction 100X specifies the common operation to be performed on data elements labeled with even numbers (or "even-numbered data elements") 0, 2, 4, and 6 of source operand 102. SIMD instruction 100X specifies destination operand 104x, which is 64 bits wide and includes 16-bit data elements labeled A, C, E, and G, each of which is composed of high (H) 8 bits and low (L) 8 bits. The results of the common operation on even-numbered 8-bit data elements 0, 2, 4, and 6 of source operand 102 are correspondingly written to 16-bit data elements A, C, E, and G of destination operand 104x. SIMD instruction 100Y is similar to SIMD instruction 100X, with the difference that SIMD instruction 100Y specifies the common operation on data elements labeled with odd numbers (or "odd-numbered data elements") 1, 3, 5, and 7 of source operand 102, with the results to be written to 16-bit data elements B, D, F, H of destination operand 104y, which is also a 64-bit operand similar to destination operand 104x of SIMD instruction 100X. In this manner, each of the SIMD instructions 100X and 100Y can specify one 64-bit destination operand, and together, SIMD instructions 100X and 100Y can accomplish the execution of the common operation on each of the data elements 0-7 of source operand 102. However, the two separate instructions needed to implement SIMD instruction 100 increase code space.
[0007] FIG. 1B illustrates a second conventional implementation of SIMD instruction 100 using a different set of component SIMD instructions 120X and 120Y. SIMD instructions 120X and 120Y each specify the common operation on each of the 8-bit data elements 0-7 of source operand 102. SIMD instruction 120X specifies destination operand 124x into which the low (L) 8 bits of the results are to be written, to corresponding 8-bit result data elements A-H of destination operand 124x (while the high (H) 8 bits of the results are discarded). Similarly, instruction 120Y specifies destination operand 124y into which the high (H) 8 bits of the results are to be written, to corresponding 8-bit data elements A-H of destination operand 124y (while the low (L) 8 bits of the results are discarded). This second conventional implementation of SIMD instruction 100 also suffers from increased code space for the two component SIMD instructions 120X and 120Y. Moreover, as can be appreciated, the second conventional implementation also wastes power in calculating and discarding either the high (H) 8 bits (e.g., in executing instruction 120X) or the low (L) 8 bits (e.g., in executing instruction 120Y) for each of the data elements 0-7 of source operand 102.
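To make the two-instruction cost of these conventional implementations concrete, the following scalar C sketch models component instructions 100X and 100Y of FIG. 1A as two separate routines. This sketch is not part of the original disclosure; the widening left-shift used as the common operation, the shift amount, and the function names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Models SIMD instruction 100X: widen even-numbered 8-bit source
 * elements 0, 2, 4, 6 into the 16-bit results A, C, E, G of one
 * 64-bit destination. */
static void widen_even(const uint8_t src[8], uint16_t dst[4]) {
    for (int i = 0; i < 4; i++)
        dst[i] = (uint16_t)src[2 * i] << 4;
}

/* Models SIMD instruction 100Y: widen odd-numbered elements
 * 1, 3, 5, 7 into B, D, F, H of a second 64-bit destination. */
static void widen_odd(const uint8_t src[8], uint16_t dst[4]) {
    for (int i = 0; i < 4; i++)
        dst[i] = (uint16_t)src[2 * i + 1] << 4;
}

int main(void) {
    uint8_t src[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint16_t even[4], odd[4];
    widen_even(src, even);  /* first instruction */
    widen_odd(src, odd);    /* second instruction: the doubled code space */
    for (int i = 0; i < 4; i++)
        printf("even[%d]=%u odd[%d]=%u\n", i, even[i], i, odd[i]);
    return 0;
}
```

The point of the sketch is that the conventional approach necessarily issues two instructions for one logical operation, which is the code-space cost the exemplary aspects described below avoid.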
[0008] FIG. 1C illustrates a third conventional implementation of SIMD instruction 100 using yet another set of component SIMD instructions 140X and 140Y, which are similar in some ways to SIMD instructions 100X and 100Y of FIG. 1A. The difference lies in which ones of the data elements of source operand 102 are operated on by each SIMD instruction. In more detail, rather than even-numbered 8-bit data elements, SIMD instruction 140X specifies the common operation to be performed on the lower four data elements 0-3 of source operand 102. The results are written to 16-bit data elements A, B, C, D of destination operand 144x. However, execution of SIMD instruction 140X involves spreading out the results of the operation on the lower four 8-bit data elements (spanning 32 bits) across all 64 bits of destination operand 144x. SIMD instruction 140Y is similar and specifies spreading out the results of the operation on the upper four 8-bit data elements 4-7 of source operand 102 across 16-bit data elements E, F, G, H of 64-bit destination operand 144y. Apart from increased code size as in the first and second conventional implementations, these spreading-out data movements seen in the third conventional implementation may require additional hardware such as a crossbar.
[0009] Accordingly, there is a need for improved implementations of mixed-width SIMD instructions which avoid the aforementioned drawbacks of the conventional implementations.
SUMMARY
[0010] Exemplary aspects include systems and methods related to a mixed-width single instruction multiple data (SIMD) instruction which has at least a source vector operand comprising data elements of a first bit-width and a destination vector operand comprising data elements of a second bit-width, wherein the second bit-width is either half of or twice the first bit-width. Correspondingly, one of the source or destination vector operands is expressed as a pair of registers, a first register and a second register. The other vector operand is expressed as a single register. Data elements of the first register correspond to even-numbered data elements of the other vector operand expressed as a single register, and data elements of the second register correspond to odd-numbered data elements of the other vector operand expressed as a single register.
[0011] For example, an exemplary aspect relates to a method of performing a mixed-width single instruction multiple data (SIMD) operation, the method comprising: receiving, by a processor, a SIMD instruction comprising at least a first source vector operand comprising a first set of source data elements of a first bit-width, and at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is twice the first bit-width. The destination vector operand comprises a pair of registers including a first register comprising a first subset of the destination data elements and a second register comprising a second subset of the destination data elements.
Based on a sequential order of the first set of source data elements, the method includes executing the SIMD instruction in the processor, comprising generating the first subset of destination data elements in the first register from even-numbered source data elements of the first set, and generating the second subset of destination data elements in the second register from odd-numbered source data elements of the first set.
[0012] Another exemplary aspect relates to a method of performing a mixed-width single instruction multiple data (SIMD) operation, the method comprising receiving, by a processor, a SIMD instruction comprising at least a source vector operand comprising source data elements of a first bit-width, and at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is half of the first bit-width. The source vector operand comprises a pair of registers including a first register comprising a first subset of the source data elements and a second register comprising a second subset of the source data elements. Based on a sequential order of the destination data elements, the method includes executing the SIMD instruction in the processor, comprising generating even-numbered destination data elements from the corresponding first subset of source data elements in the first register, and generating odd-numbered destination data elements from the corresponding second subset of source data elements in the second register.
[0013] Another exemplary aspect relates to a non-transitory computer-readable storage medium comprising instructions executable by a processor, which when executed by the processor cause the processor to perform a mixed-width single instruction multiple data (SIMD) operation. The non-transitory computer-readable storage medium comprises a SIMD instruction, which comprises at least a first source vector operand comprising a first set of source data elements of a first bit-width, and at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is twice the first bit-width. The destination vector operand comprises a pair of registers including a first register comprising a first subset of the destination data elements and a second register comprising a second subset of the destination data elements. Based on a sequential order of the first set of source data elements, the non-transitory computer-readable storage medium includes code for generating the first subset of destination data elements in the first register from even-numbered source data elements of the first set, and code for generating the second subset of destination data elements in the second register from odd-numbered source data elements of the first set.
[0014] Yet another exemplary aspect relates to a non-transitory computer-readable storage medium comprising instructions executable by a processor, which when executed by the processor cause the processor to perform a mixed-width single instruction multiple data (SIMD) operation, the non-transitory computer-readable storage medium comprising a SIMD instruction. The SIMD instruction comprises at least a source vector operand comprising source data elements of a first bit-width, and at least a destination vector operand comprising destination data elements of a second bit-width, wherein the second bit-width is half of the first bit-width.
The source vector operand comprises a pair of registers including a first register comprising a first subset of the source data elements and a second register comprising a second subset of the source data elements. Based on a sequential order of the destination data elements, the non-transitory computer-readable storage medium includes code for generating even-numbered destination data elements from the corresponding first subset of source data elements in the first register, and code for generating odd-numbered destination data elements from the corresponding second subset of source data elements in the second register.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings are presented to aid in the description of aspects of the invention and are provided solely for illustration of the aspects and not limitation thereof.
[0016] FIGS. 1A-C illustrate conventional implementations of mixed-width SIMD instructions.
[0017] FIGS. 2A-C illustrate exemplary implementations of mixed-width SIMD instructions according to aspects of this disclosure.
[0018] FIGS. 3A-B illustrate methods of performing mixed-width single instruction multiple data (SIMD) operations.
[0019] FIG. 4 illustrates an exemplary wireless device 400 in which an aspect of the disclosure may be advantageously employed.
DETAILED DESCRIPTION
[0020] Aspects of the invention are disclosed in the following description and related drawings directed to specific aspects of the invention. Alternate aspects may be devised without departing from the scope of the invention. Additionally, well-known elements of the invention will not be described in detail or will be omitted so as not to obscure the relevant details of the invention.
[0021] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term "aspects of the invention" does not require that all aspects of the invention include the discussed feature, advantage or mode of operation.
[0022] The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of aspects of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0023] Further, many aspects are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein.
Thus, the various aspects of the invention may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the aspects described herein, the corresponding form of any such aspects may be described herein as, for example, "logic configured to" perform the described action.
[0024] Exemplary aspects of this disclosure relate to implementations of mixed-width SIMD operations which avoid data movement across SIMD lanes and reduce code size. For example, rather than decompose a SIMD operation into two or more component SIMD instructions (e.g., the conventional execution of SIMD instruction 100 in FIGS. 1A-C), exemplary aspects include a single SIMD instruction which specifies one or more vector operands as a pair of operands, which may be expressed in terms of a pair of registers. By specifying at least one vector operand (either a source or a destination operand) as a pair of registers or a register pair, the single exemplary SIMD instruction can be used in place of two or more component conventional SIMD instructions. Therefore, code size is reduced for mixed-width SIMD operations.
[0025] It is noted that in this disclosure, reference is made to expressing operands in terms of registers, in order to follow the customary instruction formats where an instruction specifies an operation to be performed on one or more registers. Thus, a SIMD instruction may be of a format where a common operation is specified for one or more operands which are expressed in terms of registers. Thus, an exemplary mixed-width SIMD instruction according to this disclosure includes at least one vector operand expressed in terms of a single register and at least one other vector operand expressed in terms of a pair of registers. These references to registers may pertain to logical or architectural registers used by a program comprising exemplary SIMD instructions. They may also pertain to physical registers of a physical register file, without restriction. In general, the references to registers are meant to convey storage elements of a certain size.
[0026] Accordingly, an exemplary method of executing a mixed-width single instruction multiple data (SIMD) operation in a processor coupled to a register file may involve specifying a SIMD instruction with at least a first vector operand comprising data elements of a first bit-width and at least a second vector operand comprising data elements of a second bit-width. The first vector operand can be a source vector operand and the second vector operand can be a destination vector operand. Correspondingly, the data elements of the source vector operand may be referred to as source data elements, and the data elements of the destination vector operand may be referred to as destination data elements.
[0027] A one-to-one correspondence exists between the source data elements and the destination data elements in an exemplary mixed-width SIMD instruction. In general, when the operation specified in the mixed-width SIMD instruction is performed on a source data element, a specific corresponding destination data element is generated. For example, consider a mixed-width SIMD operation for left-shifting the source vector operand to form a destination vector operand. In this example, each source data element generates a specific destination data element when a left-shift of the source data element is performed.
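As a concrete illustration of this one-to-one correspondence, the scalar C sketch below performs a widening left-shift on each 8-bit source element and records which 16-bit destination element it generates. The sketch is not part of the original disclosure; the element values and the shift amount are arbitrary choices.

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Eight 8-bit source elements; under the left-shift operation,
     * source element i generates destination element i and nothing else. */
    uint8_t  src[8] = {0x10, 0x20, 0x30, 0x40, 0x50, 0x60, 0x70, 0x80};
    uint16_t dst[8];

    for (int i = 0; i < 8; i++) {
        dst[i] = (uint16_t)src[i] << 3;
        printf("src[%d]=0x%02x -> dst[%d]=0x%04x\n", i, src[i], i, dst[i]);
    }
    return 0;
}
```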
[0028] In one exemplary aspect of this disclosure, the second bit-width of the destination data elements can be less than, and specifically, half the size of the first bit-width of the source data elements. In this aspect, the source vector operand can be expressed as a pair of registers and the destination vector operand can be expressed as a single register.
[0029] In another exemplary aspect of this disclosure, the second bit-width of the destination data elements can be greater than, and specifically, twice the size of the first bit-width of the source data elements. In this aspect, the source vector operand can be expressed as a single register and the destination vector operand can be expressed as a pair of registers.
[0030] In order to illustrate the specific mapping between source and destination data elements of the source and destination vector operands, respectively, a sequential order is assigned to the data elements of the vector operand whose data elements have the smaller bit-width, i.e., the vector operand which is expressed as a single register. Based on the sequential order, even-numbered data elements (e.g., corresponding to numbers 0, 2, 4, 6, etc.) and odd-numbered data elements (e.g., corresponding to numbers 1, 3, 5, 7, etc.) are identified for the vector operand expressed as a single register. The pair of registers of the other vector operand are referred to as a first register and a second register, which comprise a first subset and a second subset of data elements, respectively. Accordingly, the even-numbered data elements of the vector operand expressed as a single register are assigned a correspondence with data elements of the first subset or first register, and the odd-numbered data elements are assigned a correspondence with data elements of the second subset or second register. In this manner, large data movements across SIMD lanes are avoided for source data elements during execution of the specified SIMD operation to generate corresponding destination data elements.
[0031] Exemplary aspects may also relate to SIMD operations which specify more than two vector operands, such as a third operand of a third bit-width, and beyond. One example is disclosed where two source vector operands, each expressed as a single register, are specified for a mixed-width SIMD instruction to generate a destination vector operand expressed as a pair of registers. Numerous other such instruction formats are possible within the scope of this disclosure. For the sake of simplicity, exemplary aspects for implementing mixed-width SIMD operations will be discussed in relation to some example SIMD instructions and bit-widths of operands, while keeping in mind that these are merely for the sake of explanation. As such, the features discussed herein can be extended to any number of operands and bit-widths of data elements for mixed-width vector operations.
[0032] In FIGS. 2A-C, exemplary aspects pertaining to SIMD instructions 200, 220, and 240 are shown. Each of these SIMD instructions 200, 220, and 240 can be executed by a processor (e.g., processor 402 shown in FIG. 4) configured to execute SIMD instructions. More specifically, each of these SIMD instructions 200, 220, and 240 may specify one or more source vector operands and one or more destination vector operands, where the source and destination vector operands may be expressed in terms of registers (e.g., 64-bit registers).
The source and destination vector operands of SIMD instructions 200, 220, and 240 include corresponding source and destination data elements, each of which falls under one or more SIMD lanes. The number of SIMD lanes in the execution of a SIMD instruction corresponds to the number of parallel operations which are performed in the execution of the SIMD instruction. A processor or execution logic configured to implement the example SIMD instructions 200, 220, and 240 can accordingly include the hardware (e.g., an arithmetic and logic unit (ALU) comprising a number of left/right shifters, adders, multipliers, etc.) required to implement the parallel operations specified by SIMD instructions 200, 220, and 240.
[0033] Accordingly, with reference to FIG. 2A, a first exemplary aspect is illustrated for execution of SIMD instruction 200. In one example, the processor is assumed to be capable of supporting a 64-bit instruction set architecture (ISA). SIMD instruction 200 may specify the same operation or common instruction to be performed on source data elements of a source vector operand expressed in terms of a single 64-bit register.
[0034] The same operation or common instruction specified in SIMD instruction 200 may be, for example, a square function, a left-shift function, an increment function, an addition by a constant value, etc., on eight 8-bit source data elements (which can be implemented with logic elements such as eight 8-bit left-shifters, eight 8-bit adders, etc.), which produces corresponding eight resulting destination data elements that can each consume up to 16 bits of storage. As shown, SIMD instruction 200 may specify source vector operand 202 comprising eight 8-bit data elements. A numerical order may be assigned to these eight 8-bit data elements of source vector operand 202, which is shown by the reference numerals 0-7. The result of SIMD instruction 200 can be expressed using eight 16-bit destination data elements, or 128 bits altogether, which cannot be stored in a single 64-bit register. Rather than decompose SIMD instruction 200 into two or more instructions to handle this problem (e.g., as in the conventional implementations of SIMD instruction 100 shown in FIGS. 1A-C), a destination vector operand is specified as a pair of component vector operands. The pair of component destination vector operands can be expressed as a corresponding pair of registers 204x, 204y. Note that the pair of registers need not be stored in consecutive physical locations in a register file or even have consecutive logical register numbers. As such, SIMD instruction 200 specifies a destination vector operand expressed in terms of a pair of component vector operands or registers 204x, 204y (e.g., a pair of 64-bit registers), and a source vector operand expressed as a single register 202.
[0035] Further, the first component destination vector operand, expressed as first register 204x of the pair, includes a first subset of the results of SIMD instruction 200 performed on even-numbered source data elements 0, 2, 4, and 6 of source vector operand 202. These results are illustrated by destination data elements A, C, E, and G, which have a one-to-one correspondence to even-numbered source data elements 0, 2, 4, and 6, which means that large movements across SIMD lanes are avoided for the results in this exemplary arrangement of destination data elements A, C, E, and G.
Similarly, the second component destination vector operand, expressed as second register 204y of the pair, includes a second subset of the results of SIMD instruction 200 performed on odd-numbered source data elements 1, 3, 5, and 7 of source vector operand 202. These results are illustrated by destination data elements B, D, F, and H, which have a one-to-one correspondence to odd-numbered source data elements 1, 3, 5, and 7, which means that, once again, large movements across SIMD lanes are avoided for the results in this exemplary arrangement of destination data elements B, D, F, and H. Accordingly, in this case, even-numbered source data elements 0, 2, 4, and 6 of source vector operand 202 correspond to or generate destination data elements A, C, E, and G of first register 204x; and odd-numbered source data elements 1, 3, 5, and 7 of source vector operand 202 correspond to or generate destination data elements B, D, F, and H of second register 204y.
[0036] Considering eight 8-bit SIMD lanes, e.g., referred to as SIMD lanes 0-7, with each lane comprising a respective source data element 0-7, it is seen that the amount of movement involved to generate a corresponding destination data element A-H is contained within the same SIMD lane or an adjacent SIMD lane. In other words, a first set of source data elements (e.g., source data elements 0-7) are in respective SIMD lanes, and from each one of the source data elements, a destination data element (e.g., a corresponding destination data element A-H) is generated in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane. For example, even-numbered source data elements 0, 2, 4, and 6 in SIMD lanes 0, 2, 4, and 6, respectively, generate destination data elements A, C, E, and G, which are respectively contained within SIMD lanes 0-1, 2-3, 4-5, and 6-7. Similarly, odd-numbered source data elements 1, 3, 5, and 7 in SIMD lanes 1, 3, 5, and 7, respectively, generate destination data elements B, D, F, and H, which are respectively also contained within SIMD lanes 0-1, 2-3, 4-5, and 6-7.
[0037] Accordingly, in the first exemplary aspect of FIG. 2A, mixed-width SIMD instruction 200 involves efficient use of instruction space or code space (since only one SIMD instruction is used, rather than two or more component SIMD instructions), and its implementation or execution avoids large data movements across SIMD lanes.
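The even/odd mapping of FIG. 2A can be modeled with the scalar C sketch below, in which a single routine stands in for the single SIMD instruction 200. This sketch is not part of the original disclosure; the square function is one of the common operations named above, and the function name is hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar model of exemplary SIMD instruction 200 (FIG. 2A): a single
 * widening operation whose destination is a register pair.
 * Even-numbered 8-bit source elements generate the 16-bit elements of
 * the first destination register; odd-numbered elements generate those
 * of the second. */
static void simd200_square(const uint8_t src[8],
                           uint16_t dst_first[4],   /* models register 204x */
                           uint16_t dst_second[4])  /* models register 204y */
{
    for (int i = 0; i < 4; i++) {
        dst_first[i]  = (uint16_t)(src[2 * i]     * src[2 * i]);      /* 0,2,4,6 -> A,C,E,G */
        dst_second[i] = (uint16_t)(src[2 * i + 1] * src[2 * i + 1]);  /* 1,3,5,7 -> B,D,F,H */
    }
}

int main(void) {
    uint8_t src[8] = {10, 11, 12, 13, 14, 15, 16, 17};
    uint16_t x[4], y[4];
    simd200_square(src, x, y);  /* one logical instruction, one call */
    for (int i = 0; i < 4; i++)
        printf("204x[%d]=%u 204y[%d]=%u\n", i, x[i], i, y[i]);
    return 0;
}
```

Note that, unlike the two-routine conventional sketch earlier, one call covers all eight source elements, mirroring the single-instruction code-space saving described above.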
[0038] With reference now to FIG. 2B, another exemplary aspect is illustrated in relation to mixed-width SIMD instruction 220. SIMD instruction 220 involves two source vector operands: a first source vector operand expressed as a single register 222 and a second source vector operand expressed as a single register 223, which have a first set and a second set, respectively, of four 16-bit source data elements. SIMD instruction 220 may specify a same or common operation such as a multiplication (e.g., with rounding) on the two source vector operands, wherein the four 16-bit source data elements of the first set (in register 222) are multiplied by the corresponding four 16-bit source data elements of the second set (in register 223) to produce four 32-bit results (where implementation of SIMD instruction 220 can involve logic elements such as four 16x16 multipliers). Since 128 bits are needed to store these four 32-bit results, a destination vector operand is specified in terms of a pair of component vector operands: a first component destination vector operand and a second component destination vector operand (these may be expressed as a first 64-bit register 224x and a second 64-bit register 224y, correspondingly). It is noted that SIMD instruction 220 may also be applicable to addition of source data elements of the first set with corresponding source data elements of the second set, where the corresponding results may consume more than 16 bits (even if not all 32 bits) for each destination data element.
[0039] In FIG. 2B, the source data elements of the first and second sets are assigned a sequential order, representatively shown as 0, 1, 2, 3 and 0', 1', 2', 3', respectively. The first component destination vector operand in first register 224x holds a first subset of the results of SIMD instruction 220 (shown as 32-bit destination data elements A and C) corresponding to even-numbered source data elements of source operands 222 and 223; and similarly, the second component destination vector operand in second register 224y holds a second subset of the results of SIMD instruction 220 (shown as 32-bit data elements B and D) corresponding to odd-numbered source data elements of source operands 222 and 223. In this case, it is seen that even-numbered source data elements (0, 0') and (2, 2') of first source vector operand 222 and second source vector operand 223, respectively, generate data elements A and C of first destination vector operand 224x; and odd-numbered data elements (1, 1') and (3, 3') of first source vector operand 222 and second source vector operand 223, respectively, generate data elements B and D of second destination vector operand 224y.
[0040] Once again, it is seen that in the second exemplary aspect of FIG. 2B, mixed-width SIMD instruction 220 accomplishes code space efficiency by utilizing a single mixed-width SIMD instruction rather than two or more component SIMD instructions. Moreover, it is also seen that movement across SIMD lanes is minimized in this aspect as well. In general, the first set of source data elements and the second set of source data elements are in respective SIMD lanes, and each one of the source data elements of the first set, together with the corresponding one of the source data elements of the second set, generates a destination data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane. For example, considering four 16-bit SIMD lanes 0-3 which comprise the first set of source data elements 0-3 (or the second set of source data elements 0'-3'), respectively, the data movement for source data elements of the first and second sets to generate a corresponding destination data element A-D is contained within the same SIMD lane and at most an adjacent SIMD lane (e.g., even-numbered source data elements (0, 0') and (2, 2') in SIMD lanes 0 and 2, respectively, generate destination data elements A and C in SIMD lanes 0-1 and 2-3; and similarly, odd-numbered source data elements (1, 1') and (3, 3') in SIMD lanes 1 and 3, respectively, generate destination data elements B and D in SIMD lanes 0-1 and 2-3).
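Before turning to FIG. 2C, the FIG. 2B semantics can likewise be modeled in scalar C. This sketch is not part of the original disclosure; the function name is hypothetical and the rounding variant mentioned above is omitted for simplicity.

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar model of exemplary SIMD instruction 220 (FIG. 2B): an
 * element-wise widening multiply of two four-element 16-bit source
 * vectors, with the four 32-bit products split across a destination
 * register pair by even/odd source position. */
static void simd220_mul(const uint16_t a[4],    /* models register 222 */
                        const uint16_t b[4],    /* models register 223 */
                        uint32_t dst_first[2],  /* models register 224x: A, C */
                        uint32_t dst_second[2]) /* models register 224y: B, D */
{
    for (int i = 0; i < 2; i++) {
        dst_first[i]  = (uint32_t)a[2 * i]     * b[2 * i];      /* (0,0'),(2,2') -> A,C */
        dst_second[i] = (uint32_t)a[2 * i + 1] * b[2 * i + 1];  /* (1,1'),(3,3') -> B,D */
    }
}

int main(void) {
    uint16_t a[4] = {1000, 2000, 3000, 4000};
    uint16_t b[4] = {5000, 6000, 7000, 8000};
    uint32_t x[2], y[2];
    simd220_mul(a, b, x, y);
    printf("A=%u C=%u B=%u D=%u\n", x[0], x[1], y[0], y[1]);
    return 0;
}
```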
[0041] FIG. 2C represents a third exemplary aspect related to mixed-width SIMD instruction 240. Unlike mixed-width SIMD instructions 200 and 220, a source vector operand of mixed-width SIMD instruction 240 is specified as a pair of component vector operands, or expressed as a register pair. It is noted that mixed-width SIMD instruction 240 is different from mixed-width SIMD instruction 220 because mixed-width SIMD instruction 220 included two separate source vector operands, where data elements of one source vector operand were specified to interact with (e.g., be multiplied with) data elements of another source vector operand. On the other hand, in mixed-width SIMD instruction 240, a pair of component source vector operands is specified because not doing so would have consumed two separate instructions. For example, SIMD instruction 240 may involve a common operation of a right-shift function from 16 bits to 8 bits to be performed on eight 16-bit source data elements in order to obtain a result of eight 8-bit destination data elements (where implementation of SIMD instruction 240 can involve logic elements such as eight 8-bit right-shifters). However, since eight 16-bit source data elements consume 128 bits, conventional implementations would have split this operation up to be performed using two component SIMD instructions. On the other hand, in the exemplary aspect of FIG. 2C, a source vector operand pair comprising a first component source vector operand in first register 242x and a second component source vector operand in second register 242y is specified by SIMD instruction 240. Accordingly, code space is efficiently used.
[0042] The destination vector operand is expressed as a single 64-bit register 244 in this case and comprises eight 8-bit destination data elements which are the results of SIMD instruction 240. Accordingly, a sequential order is assigned to the destination data elements of the destination vector operand in register 244, which are shown with reference numerals 0-7. The source data elements of the pair of component source vector operands (expressed as a pair of registers 242x, 242y) are arranged such that first register 242x, comprising a first subset of source data elements A, C, E, and G, will generate the results corresponding to even-numbered destination data elements 0, 2, 4, and 6 of the destination vector operand in register 244, respectively; and second register 242y, comprising a second subset of source data elements B, D, F, and H, will generate the results corresponding to odd-numbered destination data elements 1, 3, 5, and 7, respectively, of the destination vector operand in register 244.
[0043] Thus, code space can be effectively utilized and data movement across SIMD lanes can be minimized even in cases where the source vector operands are wider than the destination vector operands, by specifying a pair of component source vector operands or expressing the source vector operand as a pair of registers. Movement across SIMD lanes in execution of SIMD instruction 240 is also minimized. In general, it is seen that the destination data elements are in respective SIMD lanes, and each one of the destination data elements is generated from a source data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane.
For example, considering eight 8-bit SIMD lanes corresponding to the eight destination data elements 0-7, it is seen that source data elements A, C, E, and G will move from SIMD lanes 0-1, 2-3, 4-5, and 6-7, respectively, to generate the results corresponding to even-numbered destination data elements in SIMD lanes 0, 2, 4, and 6; and source data elements B, D, F, and H will move from SIMD lanes 0-1, 2-3, 4-5, and 6-7, respectively, to generate the results corresponding to odd-numbered destination data elements in SIMD lanes 1, 3, 5, and 7. In either case, the movement is contained within two SIMD lanes.
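A scalar C model of the FIG. 2C narrowing case follows. This sketch is not part of the original disclosure; the function name and the shift amount of 8 are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar model of exemplary SIMD instruction 240 (FIG. 2C): a narrowing
 * right-shift whose source is a register pair.  Elements of the first
 * source register generate the even-numbered 8-bit destination elements;
 * elements of the second source register generate the odd-numbered ones. */
static void simd240_shr(const uint16_t src_first[4],  /* models register 242x: A,C,E,G */
                        const uint16_t src_second[4], /* models register 242y: B,D,F,H */
                        uint8_t dst[8])               /* models register 244 */
{
    for (int i = 0; i < 4; i++) {
        dst[2 * i]     = (uint8_t)(src_first[i]  >> 8); /* A,C,E,G -> 0,2,4,6 */
        dst[2 * i + 1] = (uint8_t)(src_second[i] >> 8); /* B,D,F,H -> 1,3,5,7 */
    }
}

int main(void) {
    uint16_t first[4]  = {0x1100, 0x3300, 0x5500, 0x7700};
    uint16_t second[4] = {0x2200, 0x4400, 0x6600, 0x8800};
    uint8_t dst[8];
    simd240_shr(first, second, dst);  /* one logical instruction, one call */
    for (int i = 0; i < 8; i++)
        printf("dst[%d]=0x%02x\n", i, dst[i]);
    return 0;
}
```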
[0044] Accordingly, it will be appreciated that aspects include various methods for performing the processes, functions and/or algorithms disclosed herein. For example, as illustrated in FIG. 3A, an aspect can include a method 300 of performing a mixed-width single instruction multiple data (SIMD) operation, in accordance with FIGS. 2A-B, for example.
[0045] In Block 302, method 300 includes receiving, by a processor (e.g., processor 402 of FIG. 4, which will be explained below), and with reference, for example, to FIG. 2A, a SIMD instruction (e.g., SIMD instruction 200) comprising at least a first source vector operand (e.g., in register 202) comprising a first set of source data elements (e.g., source data elements 0-7) of a first bit-width (e.g., 8 bits); and at least a destination vector operand (e.g., in register pair 204x, 204y) comprising destination data elements (e.g., destination data elements A-H) of a second bit-width (e.g., 16 bits), wherein the second bit-width is twice the first bit-width, and wherein the destination vector operand comprises a pair of registers including a first register (e.g., 204x) comprising a first subset of the destination data elements (e.g., destination data elements A, C, E, G) and a second register comprising a second subset of the destination data elements (e.g., destination data elements B, D, F, H).
[0046] In Block 303 (which is shown to include Blocks 304 and 306), method 300 further includes executing the mixed-width SIMD instruction in the processor. Specifically, considering a sequential order (e.g., 0-7) assigned to the source data elements in Block 304, Block 306 includes executing the SIMD instruction in the processor. In further detail, Block 306 is made up of component Blocks 306a and 306b, which may be performed in parallel.
[0047] Block 306a includes generating the first subset of destination data elements (e.g., destination data elements A, C, E, G) in the first register (e.g., first register 204x) from even-numbered source data elements (e.g., source data elements 0, 2, 4, 6) of the first set.
[0048] Block 306b includes generating the second subset of destination data elements (e.g., destination data elements B, D, F, H) in the second register (e.g., second register 204y) from odd-numbered source data elements (e.g., source data elements 1, 3, 5, 7) of the first set.
[0049] In general, the SIMD instruction of method 300 can be one of a square function, left-shift function, increment, or addition by a constant value, of the source data elements of the first set. Code space efficiency is achieved by utilizing a single SIMD instruction in method 300. Movement across SIMD lanes is also minimized in method 300, where the first set of source data elements are in respective SIMD lanes, and method 300 includes generating, from each one of the source data elements (e.g., source data element 0 in SIMD lane 0), a destination data element (e.g., destination data element A) in the respective SIMD lane (e.g., SIMD lane 0) or a SIMD lane adjacent (e.g., SIMD lane 1) to the respective SIMD lane.
[0050] It will also be noted that although not shown separately, method 300 can also include a method for implementing SIMD instruction 220 of FIG. 2B, which further comprises, for example, receiving in Block 302 a second source vector operand comprising a second set of source data elements of the first bit-width (e.g., the first and second source vector operands in registers 222 and 223), where the sequential order of the first set of source data elements corresponds to a sequential order of the second set of source data elements. In this case, based on the sequential order assigned in Block 304, Block 306 includes executing the SIMD instruction in the processor, comprising Block 306a for generating the first subset of destination data elements in the first register from even-numbered source data elements of the first set and even-numbered source data elements of the second set; and Block 306b for generating the second subset of destination data elements in the second register from odd-numbered source data elements of the first set and odd-numbered source data elements of the second set. In this case, the SIMD instruction can be a multiplication or addition of the source data elements of the first set with corresponding source data elements of the second set, wherein the first set of source data elements and the second set of source data elements are in respective SIMD lanes, and each one of the source data elements of the first set, together with the corresponding one of the source data elements of the second set, generates a destination data element in the respective SIMD lane or a SIMD lane adjacent to the respective SIMD lane.
[0051] With reference to FIG. 3B, another method for performing the processes, functions and/or algorithms disclosed herein is illustrated. For example, as illustrated in FIG. 3B, method 350 is another method of performing a mixed-width single instruction multiple data (SIMD) operation, in accordance with FIG. 2C, for example.
[0052] In Block 352, method 350 includes receiving, by a processor (e.g., processor 402), a SIMD instruction (e.g., SIMD instruction 240) comprising: at least a source vector operand (e.g., in registers 242x, 242y) comprising source data elements (e.g., source data elements A-H) of a first bit-width (e.g., 16 bits); and at least a destination vector operand (e.g., in register 244) comprising destination data elements (e.g., destination data elements 0-7) of a second bit-width (e.g., 8 bits), wherein the second bit-width is half of the first bit-width, and wherein the source vector operand comprises a pair of registers including a first register (e.g., first register 242x) comprising a first subset of the source data elements (e.g., source data elements A, C, E, G) and a second register (e.g., second register 242y) comprising a second subset of the source data elements (e.g., source data elements B, D, F, H).
[0053] In Block 354, a sequential order is assigned to the destination data elements, and in Block 356, the SIMD instruction is executed.
Block 356 includes sub-blocks 356a and 356b, which can also be performed in parallel.
[0054] Block 356a includes generating even-numbered destination data elements (e.g., destination data elements 0, 2, 4, 6) from the corresponding first subset of source data elements in the first register (e.g., source data elements A, C, E, G).
[0055] Block 356b includes generating odd-numbered destination data elements (e.g., destination data elements 1, 3, 5, 7) from the corresponding second subset of source data elements in the second register (e.g., source data elements B, D, F, H).
[0056] In exemplary aspects, the SIMD instruction of method 350 may be a right-shift function of the source data elements, wherein the destination data elements are in respective SIMD lanes (e.g., SIMD lanes 0-7), and each one of the destination data elements (e.g., destination data element 0) is generated from a source data element (e.g., source data element A) in the respective SIMD lane (e.g., SIMD lane 0) or a SIMD lane adjacent (e.g., SIMD lane 1) to the respective SIMD lane.
[0057] Referring to FIG. 4, a block diagram of a particular illustrative aspect of wireless device 400 is shown, according to exemplary aspects. Wireless device 400 includes processor 402, which may be configured (e.g., include execution logic) to support and implement the execution of exemplary mixed-width SIMD instructions, for example, according to methods 300 and 350 of FIG. 3A and FIG. 3B, respectively. As shown in FIG. 4, processor 402 may be in communication with memory 432. Processor 402 may include a register file (not shown) which holds physical registers corresponding to the registers (e.g., logical registers) in terms of which operands of the exemplary SIMD instructions are expressed. The register file may be supplied with data from memory 432 in some aspects. Although not shown, one or more caches or other memory structures may also be included in wireless device 400.
[0058] FIG. 4 also shows display controller 426 that is coupled to processor 402 and to display 428. Coder/decoder (CODEC) 434 (e.g., an audio and/or voice CODEC) can be coupled to processor 402. Other components, such as wireless controller 440 (which may include a modem), are also illustrated. Speaker 436 and microphone 438 can be coupled to CODEC 434. FIG. 4 also indicates that wireless controller 440 can be coupled to wireless antenna 442. In a particular aspect, processor 402, display controller 426, memory 432, CODEC 434, and wireless controller 440 are included in a system-in-package or system-on-chip device 422.
[0059] In a particular aspect, input device 430 and power supply 444 are coupled to the system-on-chip device 422. Moreover, in a particular aspect, as illustrated in FIG. 4, display 428, input device 430, speaker 436, microphone 438, wireless antenna 442, and power supply 444 are external to the system-on-chip device 422. However, each of display 428, input device 430, speaker 436, microphone 438, wireless antenna 442, and power supply 444 can be coupled to a component of the system-on-chip device 422, such as an interface or a controller.
[0060] It should be noted that although FIG. 4 depicts a wireless communications device, processor 402 and memory 432 may also be integrated into a set top box, a music player, a video player, an entertainment unit, a navigation device, a personal digital assistant (PDA), a fixed location data unit, a communications device, or a computer.
Further, one or more exemplary aspects of wireless device 400 may be integrated in at least one semiconductor die.
[0061] Those of skill in the art will appreciate that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[0062] Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
[0063] The methods, sequences and/or algorithms described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
[0064] Accordingly, an aspect of the invention can include computer readable media (e.g., a non-transitory computer readable storage medium) embodying a method for implementing mixed-width SIMD instructions (e.g., according to methods 300 and 350 described above, for implementing SIMD instructions of FIGS. 2A-C). Accordingly, the invention is not limited to illustrated examples, and any means for performing the functionality described herein are included in aspects of the invention.
[0065] While the foregoing disclosure shows illustrative aspects of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the aspects of the invention described herein need not be performed in any particular order. Furthermore, although elements of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
Implementations describe providing isolation in virtualized systems using trust domains. In one implementation, a processing device includes a memory ownership table (MOT) that is access-controlled against software access. The processing device further includes a processing core to execute a trust domain resource manager (TDRM) to manage a trust domain (TD), maintain a trust domain control structure (TDCS) for managing global metadata for each TD, maintain an execution state of the TD in at least one trust domain thread control structure (TD-TCS) that is access-controlled against software accesses, and reference the MOT to obtain at least one key identifier (key ID) corresponding to an encryption key assigned to the TD, the key ID to allow the processing device to decrypt memory pages assigned to the TD responsive to the processing device executing in the context of the TD, the memory pages assigned to the TD encrypted with the encryption key.
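As a rough, source-independent sketch of the ownership-table concept described in this abstract (all field and function names below are hypothetical, not from the source), an MOT entry and the access check it enables can be pictured as follows:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical shape of one memory ownership table (MOT) entry: the
 * owning trust domain, the key ID of that TD's encryption key, and the
 * guest physical address expected to map to this host physical page. */
struct mot_entry {
    uint64_t td_id;     /* trust domain that owns the page */
    uint16_t key_id;    /* key ID for the TD's memory encryption key */
    uint64_t guest_pa;  /* expected guest physical address */
    bool     valid;     /* entry is in use */
};

/* Sketch of the access check implied by the claims below: an access
 * succeeds only if the entry is valid, owned by the executing TD, and
 * the accessed guest physical address matches the recorded one. */
static bool mot_check(const struct mot_entry *e,
                      uint64_t current_td, uint64_t accessed_gpa)
{
    return e->valid && e->td_id == current_td && e->guest_pa == accessed_gpa;
}

int main(void) {
    struct mot_entry e = { .td_id = 1, .key_id = 7,
                           .guest_pa = 0x1000, .valid = true };
    return mot_check(&e, 1, 0x1000) ? 0 : 1;  /* succeeds for the owning TD */
}
```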
1. A processing device, comprising:

a memory ownership table (MOT) that is access-controlled against software access; and

a processing core to:

execute a trust domain resource manager (TDRM) to manage a trust domain (TD);

maintain a trust domain control structure (TDCS) for managing global metadata for one or more of the TD or other TDs executed by the processing device;

maintain an execution state of the TD in one or more trust domain thread control structures (TD-TCS) that are referenced by the TDCS and are access-controlled against software accesses from at least one of the TDRM, a virtual machine manager (VMM), or the other TDs;

reference the MOT to obtain at least one key identifier (key ID) corresponding to an encryption key assigned to the TD, the key ID to allow the processing device to decrypt memory pages assigned to the TD responsive to the processing device executing in the context of the TD, the memory pages assigned to the TD encrypted with the encryption key; and

reference the MOT to obtain a guest physical address corresponding to a host physical memory page assigned to the TD, wherein a match between the guest physical address obtained from the MOT and an accessed guest physical address allows the processing device to access the memory page assigned to the TD responsive to the processing device executing in the context of the TD.

2. The processing device of claim 1, wherein the VMM comprises a TDRM component to provide memory management, via an extended page table (EPT), for at least one of: the TD, the other TDs, or one or more virtual machines (VMs).

3. The processing device of claim 1, wherein the TD-TCS references the TDCS, wherein the TDCS maintains a count of the one or more TD-TCS corresponding to logical processors of the TD, and wherein the TD-TCS stores a user execution state and a hypervisor execution state of the TD.

4. The processing device of claim 1, wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device.

5. The processing device of claim 4, wherein the MK-TME engine generates a plurality of encryption keys, accessed via key IDs assigned to the TD, for encrypting and decrypting the memory pages of the TD and for encrypting and decrypting memory pages corresponding to persistent memory assigned to the TD, and wherein the MOT tracks the plurality of key IDs via a key ID associated with each entry in the MOT.

6. The processing device of claim 2, wherein the processing core references the MOT for a host physical memory page accessed as part of a page walk operation to access a guest physical memory page mapped by the EPT.

7. The processing device of claim 1, wherein the TD comprises at least one of: an operating system (OS) to manage one or more applications or the VMM to manage one or more virtual machines (VMs), and wherein a TD enter operation transfers an operational context of the processing core from the TDRM to at least one of the OS of the TD or the VMM of the TD.

8. The processing device of claim 1, wherein the TDRM is not included in a trusted computing base (TCB) of the TD.

9. The processing device of claim 1, wherein the TDCS comprises a signature structure capturing a cryptographic measurement of the TD, the cryptographic measurement signed by a hardware root of trust of the processing device, and
wherein the signature structure is provided to an attester for verification of the cryptographic measurement.

10. The processing device of claim 1, wherein the processing core is further to maintain a measurement state of the TD in the TDCS, the TDCS access-controlled against software accesses from at least one of the TDRM, the VMM, or the other TDs executed by the processing device.

11. The processing device of claim 1, wherein the TDRM manages the TD and the other TDs.

12. A method, comprising:

identifying, by a trust domain resource manager (TDRM) executing on a processing device to manage a trust domain (TD), a TD exit event;

responsive to identifying the TD exit event, saving, using a first key identifier (key ID) corresponding to a first encryption key assigned to the TD, a user execution state and a hypervisor execution state of the TD in a trust domain thread control structure (TD-TCS) corresponding to a logical processor assigned to the TD, the execution state encrypted with the first encryption key, wherein the TD-TCS is access-controlled against software accesses from at least one of the TDRM, a virtual machine manager (VMM), or other TDs executed by the processing device;

modifying a key ID state of the processing device from the first key ID to a second key ID corresponding to at least one of the TDRM or the VMM; and

loading a TDRM execution and control state and exit information of the TDRM to cause the processing device to operate in the context of the TDRM.

13. The method of claim 12, further comprising:

executing a TD enter event in the context of the TDRM;

loading, using the second key ID corresponding to a second encryption key assigned to the TDRM, TDRM execution controls specified by the TDRM from a trust domain resource manager control structure (TD-RCS) corresponding to the logical processor assigned to the TD, the execution state encrypted with the second encryption key, wherein the TD-RCS is access-controlled via an extended page table (EPT) against accesses from at least one of the TD or other TDs executed by the processing device;

modifying the key ID state of the processing device from the second key ID to the first key ID corresponding to the TD; and

loading the user execution state and the hypervisor execution state from the TD-TCS to cause the processing device to operate in the context of the TD.

14. The method of claim 13, wherein the TDCS and the TD-TCS are confidentiality-protected and access-controlled via a memory ownership table (MOT) of the processing device, the MOT comprising a first entry for the TDCS, the first entry associating the first key ID with the TD, and wherein the MOT, using the first key ID, enforces memory confidentiality for memory accesses to memory pages corresponding to the TD.

15. The method of claim 12, wherein the MOT is access-controlled via a range register.

16. The method of claim 14, wherein the TDRM execution and control state is loaded from the TD-RCS structure, which is access-controlled via the EPT and the MOT, wherein the MOT comprises a second entry for the TD-RCS structure, the second entry associating the second key ID with a physical memory page containing the TD-RCS, and wherein the MOT, using the second key ID, enforces memory confidentiality for memory accesses to memory pages corresponding to the TDRM.

17. The method of claim 12, wherein the VMM is a root VMM comprising the TDRM to manage one or more TDs, wherein the TD comprises a non-root VMM to manage one or more virtual machines (VMs), and
wherein the TD exit transfers an operational context of the processing core from the non-root VMM or the one or more VMs of the TD to the root VMM and the TDRM.

18. The method of claim 12, wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device, wherein the MK-TME engine generates, via key IDs assigned to the TD, a plurality of encryption keys for encrypting ephemeral memory pages or persistent memory pages of the TD, and wherein the MOT tracks the plurality of encryption key IDs, each host physical page referenced in the MOT having a key ID.

19. A system, comprising:

a memory device to store one or more instructions; and

a processing device operatively coupled to the memory device, the processing device to execute the one or more instructions to:

execute a trust domain resource manager (TDRM) to manage a trust domain (TD), wherein the TDRM is not included in a trusted computing base (TCB) of the TD;

maintain a hypervisor execution state and a user execution state of the TD in a trust domain thread control structure (TD-TCS) that is access-controlled against software accesses from at least one of the TDRM, a virtual machine manager (VMM), or other TDs executed by the processing device;

reference a memory ownership table (MOT) to obtain at least one encryption key identifier (key ID) corresponding to an encryption key assigned to the TD, the key ID to allow the processing device to decrypt memory pages assigned to the TD responsive to the processing device executing in the context of the TD, the memory pages assigned to the TD encrypted with the encryption key identified via the encryption key ID; and

reference the MOT to obtain a guest physical address corresponding to a host physical memory page assigned to the TD, wherein a match between the guest physical address and an accessed guest physical address allows the processing device to access the memory page assigned to the TD responsive to the processing device executing in the context of the TD.

20. The system of claim 19, wherein the VMM comprises a TDRM component to provide memory management, via an extended page table (EPT), for one or more of: the TD, the other TDs, or one or more virtual machines (VMs).

21. The system of claim 19, wherein the TD-TCS corresponds to a logical processor of the TD, the TD-TCS to store the hypervisor execution state and the user execution state of the TD on a TD exit operation and to load the user and hypervisor execution states of the TD on a TD enter operation, wherein the TD-TCS is access-controlled against software accesses from at least one of the TDRM, the VMM, or the other TDs executed by the processing device.

22. The system of claim 19, wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device, wherein the MK-TME engine generates, via key IDs assigned to the TD, a plurality of encryption keys for encrypting ephemeral memory pages or persistent memory pages of the TD, and wherein the MOT tracks the plurality of encryption key IDs via a key ID associated with each entry in the MOT.

23. The system of claim 19, wherein the VMM comprises the TDRM to manage the TD, wherein the TD comprises an operating system (OS) or a non-root VMM to manage one or more virtual machines (VMs), and wherein a TD enter operation transfers an operational context of the processing core from the TDRM to the non-root VMM of the TD.

24. An apparatus comprising: means
for performing the method of any one of claims 12 to 18.

25. At least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method of any one of claims 12 to 18.
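As an informal illustration of the MOT-mediated access check recited in claims 1 and 19, the following C sketch models how a processor might consult a MOT entry before allowing a TD's memory access. The entry layout, field names, and function are illustrative assumptions for this sketch only; the actual MOT is a micro-architectural structure with an unspecified encoding.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative MOT entry; layout and names are assumptions. */
struct mot_entry {
    uint64_t tdid;    /* TD that owns this host physical page          */
    uint64_t gpa;     /* guest physical address expected for the page  */
    unsigned key_id;  /* key ID of the encryption key assigned to TD   */
};

/* Hypothetical check performed during a page walk: the access is
 * allowed only if the executing TD owns the page and the accessed
 * guest physical address matches the one recorded in the MOT. The
 * returned key ID selects the key used to decrypt the page. */
static bool mot_check_access(const struct mot_entry *e,
                             uint64_t current_tdid,
                             uint64_t accessed_gpa,
                             unsigned *key_id_out)
{
    if (e->tdid != current_tdid)   /* page not assigned to this TD    */
        return false;
    if (e->gpa != accessed_gpa)    /* remapped GPA: fault to the TDRM */
        return false;
    *key_id_out = e->key_id;       /* decrypt with the TD's own key   */
    return true;
}

int main(void)
{
    struct mot_entry e = { .tdid = 1, .gpa = 0x1000, .key_id = 3 };
    unsigned kid;
    return mot_check_access(&e, 1, 0x1000, &kid) ? 0 : 1;
}
```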
Providing Isolation in Virtualized Systems Using Trust Domains

The present disclosure relates to computer systems; and more particularly, to providing isolation in virtualized systems using trust domains.

Background

Modern processing devices employ disk encryption to protect data at rest. However, data in memory is in plaintext and vulnerable to attack. Attackers can use a variety of techniques, including software- and hardware-based bus scanning, memory scanning, hardware probing, and the like, to retrieve data from memory. This data from memory may include sensitive data, such as privacy-sensitive data, IP-sensitive data, and keys used for file encryption or communication. The exposure of data is further exacerbated by the current trend of moving data and enterprise workloads into the cloud using virtualization-based hosting services provided by cloud service providers.

Brief Description of the Drawings

FIG. 1A is a block diagram illustrating an example computing system that provides isolation in virtualized systems using trust domains, according to one implementation.

FIG. 1B is a block diagram illustrating another example computing system that provides isolation in virtualized systems using trust domains, according to one implementation.

FIG. 2A is a block diagram of an example of a trust domain architecture, according to one implementation.

FIG. 2B is a block diagram of another example of a trust domain architecture, according to one implementation.

FIG. 3 is a block diagram of another example of a trust domain architecture, according to one implementation.

FIG. 4 is a flow diagram of an example method for providing isolation in virtualized systems using trust domains, according to one implementation.

FIG. 5 is a flow diagram of an example method for executing a trust domain exit routine when providing isolation in virtualized systems using trust domains, according to one implementation.

FIG. 6 is a flow diagram of an example method for executing a trust domain enter routine when providing isolation in virtualized systems using trust domains, according to one implementation.

FIG. 7A is a block diagram illustrating a micro-architecture for a processor in which one implementation of the disclosure may be used.

FIG. 7B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented according to at least one implementation of the disclosure.

FIG. 8 illustrates a block diagram of a micro-architecture for a processing device that includes logic to provide isolation in virtualized systems using trust domains, according to one implementation.

FIG. 9 is a block diagram of a computer system according to one implementation.

FIG. 10 is a block diagram of a computer system according to another implementation.

FIG. 11 is a block diagram of a system-on-a-chip according to one implementation.

FIG. 12 illustrates another implementation of a block diagram of a computing system.

FIG. 13 illustrates another implementation of a block diagram of a computing system.

Detailed Description

An architecture to provide isolation in virtualized systems using trust domains (TDs) is described. The current trend in computing is the placement of data and enterprise workloads in the cloud by utilizing hosting services provided by cloud service providers (CSPs). As a result of the hosting of the data and enterprise workloads in the cloud, customers of the CSPs (referred to herein as tenants) are requesting better security and isolation solutions for their workloads.
Specifically, customers are seeking solutions that enable the operation of CSP-provided software outside of the trusted computing base (TCB) of the tenant's software. The TCB of a system refers to a set of hardware, firmware, and/or software components that have an ability to influence the integrity of the overall operation of the system.

In implementations of the present disclosure, a TD architecture and instruction set architecture (ISA) extensions for the TD architecture (referred to herein as TD extensions (TDX)) are provided to deliver confidentiality (and integrity) for customer (tenant) software executing in an untrusted CSP infrastructure. The TD architecture, which can be a system-on-chip (SoC) capability, provides isolation between TD workloads and CSP software, such as the virtual machine manager (VMM) of the CSP. Components of the TD architecture can include 1) memory encryption via a multi-key total memory encryption (MK-TME) engine, 2) a resource management capability referred to herein as the trust domain resource manager (TDRM) (the TDRM may be a software extension of the virtual machine monitor (VMM)), and 3) execution state and memory isolation capabilities in the processor, provided via a CPU-managed memory ownership table (MOT) and via CPU access-controlled TD control structures. The TD architecture provides an ability of the processor to deploy TDs that leverage the MK-TME engine, the MOT, and the access-controlled TD control structures for secure operation of TD workloads.

In one implementation, the tenant's software is executed in an architectural concept known as a TD. A TD (also referred to as a tenant TD) refers to a tenant workload, which can comprise, for example, an operating system (OS) alone along with other ring-3 applications running on top of the OS, or a virtual machine (VM) running on top of a VMM along with other ring-3 applications. Each TD operates independently of other TDs in the system and uses logical processor(s), memory, and I/O assigned by the TDRM on the platform. Each TD is cryptographically isolated in memory using at least one exclusive encryption key of the MK-TME engine for encrypting the memory (holding code and/or data) associated with the trust domain.

In implementations of the present disclosure, the TDRM in the TD architecture acts as a host for the TDs and has full control of the cores and other platform hardware. The TDRM assigns software in a TD with logical processor(s). The TDRM, however, cannot access the TD's execution state on the assigned logical processor(s). Similarly, the TDRM assigns physical memory and I/O resources to the TDs, but is not privy to the memory state of a TD, due to the use of separate encryption keys enforced by the CPU per TD and other integrity and replay controls on memory. Software executing in a TD operates with reduced privileges so that the TDRM can retain control of platform resources. However, the TDRM cannot affect the confidentiality or integrity of the TD state in memory or in the CPU structures under defined circumstances.

Conventional systems for providing isolation in virtualized systems do not extract the CSP software out of the tenant's TCB completely. Furthermore, conventional systems may increase the TCB significantly using separate chipset subsystems that implementations of the present disclosure avoid.
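To make the per-TD cryptographic isolation described above more concrete, the following toy model in C illustrates only the key-ID indirection: a memory word written under one key ID does not decrypt sensibly under another. The XOR transform stands in for the MK-TME engine's real cipher, and the key table is invented for this sketch; nothing here reflects the engine's actual implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of MK-TME key-ID indirection: each memory access is
 * transformed under the key selected by a key ID, so software holding
 * the wrong key ID cannot recover another TD's plaintext. XOR stands
 * in for the real cipher; the key table is illustrative only. */
#define NKEYS 4
static const uint64_t key_table[NKEYS] = {
    0x0000000000000000ULL, /* key ID 0: default/shared               */
    0xA5A5A5A5A5A5A5A5ULL, /* key ID 1: e.g., exclusive to one TD    */
    0x3C3C3C3C3C3C3C3CULL, /* key ID 2: e.g., exclusive to another TD */
    0x9696969696969696ULL, /* key ID 3: e.g., an ephemeral key       */
};

static uint64_t mem_transform(uint64_t data, unsigned key_id)
{
    return data ^ key_table[key_id % NKEYS];
}

int main(void)
{
    uint64_t plaintext = 0xDEADBEEFULL;
    uint64_t in_dram   = mem_transform(plaintext, 1); /* written by the TD */
    /* A read under the wrong key ID (e.g., by the TDRM) sees garbage: */
    printf("TD read:   %#llx\n",
           (unsigned long long)mem_transform(in_dram, 1));
    printf("TDRM read: %#llx\n",
           (unsigned long long)mem_transform(in_dram, 0));
    return 0;
}
```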
The TD architecture of implementations of the present disclosure provides isolation between customer (tenant) workloads and CSP software by explicitly reducing the TCB, that is, by removing the CSP software from the TCB. By providing secure isolation for CSP customer workloads (tenant TDs), implementations provide a technical improvement over conventional systems, removing the CSP software from the customer's TCB while meeting the security and functionality requirements of the CSP. In addition, the TD architecture is scalable to multiple TDs, which can support multiple tenant workloads. Furthermore, the TD architecture described herein is generic and can be applied to any dynamic random access memory (DRAM) or storage class memory (SCM)-based memory (e.g., non-volatile dual in-line memory modules (NVDIMMs)). As such, implementations of the present disclosure allow software to take advantage of performance benefits, such as the NVDIMM direct access storage (DAS) mode for SCM, without compromising platform security requirements.

FIG. 1A is a schematic block diagram of a computing system 100 that provides isolation in virtualized systems using TDs, according to implementations of the present disclosure. The virtualization system 100 includes a virtualization server 110 that supports a number of client devices 101A-101C. The virtualization server 110 includes at least one processor 112 (also referred to as a processing device) that executes a TDRM 180. The TDRM 180 may include a VMM (which may also be referred to as a hypervisor) that may instantiate one or more TDs 190A-190C accessible by the client devices 101A-101C via a network interface 170. The client devices 101A-101C may include, but are not limited to, a desktop computer, a tablet computer, a laptop computer, a netbook, a notebook computer, a personal digital assistant (PDA), a server, a workstation, a cellular telephone, a mobile computing device, a smart phone, an Internet appliance, or any other type of computing device.

A TD may refer to a tenant (e.g., customer) workload. The tenant workload can include, for example, an OS alone along with other ring-3 applications running on top of the OS, or a VM running on top of a VMM along with other ring-3 applications. In implementations of the present disclosure, each TD may be cryptographically isolated in memory using a separate exclusive key for encrypting the memory (holding code and data) associated with the TD.

The processor 112 may include one or more cores 120 (also referred to as processing cores 120), range registers 130, a memory management unit (MMU) 140, and one or more output ports 150. FIG. 1B is a schematic block diagram of a detailed view of a processor core 120 executing a TDRM 180 in communication with a MOT 160, one or more trust domain control structures (TDCSs) 124, and one or more trust domain thread control structures (TDTCSs) 128, as shown in FIG. 1A. TDTCS and TD-TCS may be used interchangeably herein. The processor 112 may be used in a system that includes, but is not limited to, a desktop computer, a tablet computer, a laptop computer, a netbook, a notebook computer, a PDA, a server, a workstation, a cellular telephone, a mobile computing device, a smart phone, an Internet appliance, or any other type of computing device.
In another implementation, the processor 112 may be used in an SoC system.

The computing system 100 is representative of processing systems based on the PENTIUM III™, PENTIUM 4™, Xeon™, Itanium, XScale™, and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessing devices, engineering workstations, set-top boxes, and the like) may also be used. In one implementation, the sample system 100 executes a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux, for example), embedded software, and/or graphical user interfaces may also be used. Thus, implementations of the present disclosure are not limited to any specific combination of hardware circuitry and software.

The one or more processing cores 120 execute instructions of the system. The processing core 120 includes, but is not limited to, pre-fetch logic to fetch instructions, decode logic to decode the instructions, execution logic to execute instructions, and the like. In an implementation, the computing system 100 includes a component, such as the processor 112, to employ execution units including logic to perform algorithms for processing data.

The virtualization server 110 includes a main memory 114 and a secondary storage 118 to store program binaries and OS driver files. The data in the secondary storage 118 may be stored in blocks referred to as pages, and each page may correspond to a set of physical memory addresses. The virtualization server 110 may employ virtual memory management, in which applications run by the core(s) 120, such as the TDs 190A-190C, use virtual memory addresses that are mapped to guest physical memory addresses, and the guest physical memory addresses are mapped to host/system physical addresses by the MMU 140.

The core 120 may execute the MMU 140 to load pages from the secondary storage 118 into the main memory 114 (which includes a volatile memory and/or a non-volatile memory) for faster access by software running on the processor 112 (e.g., on the core). When one of the TDs 190A-190C attempts to access a virtual memory address that corresponds to a physical memory address of a page loaded into the main memory 114, the MMU 140 returns the requested data. The core 120 may execute the VMM portion of the TDRM 180 to translate guest physical addresses to host physical addresses of main memory, and to provide parameters for a protocol that allows the core 120 to read, walk, and interpret these mappings.

In one implementation, the processor 112 implements a TD architecture and ISA extensions (TDX) for the TD architecture. The TD architecture provides isolation between TD workloads 190A-190C and CSP software (e.g., the TDRM 180 and/or a CSP VMM (e.g., root VMM 180)) executing on the processor 112. Components of the TD architecture can include 1) memory encryption via the MK-TME engine 145, 2) a resource management capability referred to herein as the TDRM 180, and 3) execution state and memory isolation capabilities in the processor 112 provided via the MOT 160 and via access-controlled TD control structures (i.e., TDCS 124 and TDTCS 128). The TDX architecture provides an ability of the processor 112 to deploy TDs 190A-190C that leverage the MK-TME engine 145, the MOT 160, and the access-controlled TD control structures (i.e., TDCS 124 and TDTCS 128) for secure operation of TD workloads
190A-190C.

In implementations of the present disclosure, the TDRM 180 acts as a host and has full control of the cores 120 and other platform hardware. The TDRM 180 assigns software in a TD 190A-190C with logical processor(s). The TDRM 180, however, cannot access the execution state of the TDs 190A-190C on the assigned logical processor(s). Similarly, the TDRM 180 assigns physical memory and I/O resources to the TDs 190A-190C, but is not privy to the memory state of a TD 190A, due to separate encryption keys and other integrity and replay controls on memory.

The processor may utilize the MK-TME engine 145 to encrypt (and decrypt) memory used during execution with separate encryption keys. With total memory encryption (TME), any memory accesses by software executing on the core 120 can be encrypted in memory with an encryption key. MK-TME is an enhancement to TME that allows the use of multiple encryption keys (the number of supported keys is implementation dependent). The processor 112 may utilize the MK-TME engine 145 to cause different pages to be encrypted using different MK-TME keys. The MK-TME engine 145 may be utilized in the TD architecture described herein to support one or more encryption keys per TD 190A-190C to help achieve cryptographic isolation between different CSP customer workloads. For example, when the MK-TME engine 145 is used in the TD architecture, the CPU enforces by default that all pages of a TD are encrypted using a TD-specific key. Furthermore, a TD may choose specific TD pages to be plaintext, or to be encrypted using different ephemeral keys that are opaque to the CSP software.

Each TD 190A-190C is a software environment that supports a software stack consisting of VMMs (e.g., using Virtual Machine Extensions (VMX)), OSes, and/or application software (hosted by the OS). Each TD 190A-190C operates independently of the other TDs 190A-190C and uses logical processor(s), memory, and I/O assigned by the TDRM 180 on the platform. Software executing in the TDs 190A-190C operates with reduced privileges so that the TDRM 180 can retain control of platform resources; however, the TDRM cannot affect the confidentiality or integrity of the TDs 190A-190C under defined circumstances. Additional details of the TD architecture and TDX are described in more detail below with respect to FIG. 1B.

Implementations of the present disclosure are not limited to computer systems. Alternative implementations of the present disclosure can be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a microcontroller, a digital signal processing device (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one implementation.

One implementation may be described in the context of a single processing device desktop or server system, but alternative implementations may be included in a multi-processing device system. The computing system 100 may be an example of a "hub" system architecture. The computing system 100 includes a processor 112 to process data signals.
As one illustrative example, the processor 112 may include a complex instruction set computer (CISC) microprocessing device, a reduced instruction set computing (RISC) microprocessing device, a very long instruction word (VLIW) microprocessing device, a processing device implementing a combination of instruction sets, or any other processing device, such as a digital signal processing device, for example. The processor 112 is coupled to a processing device bus that transmits data signals between the processor 112 and other components in the computing system 100, such as the main memory 114 and/or the secondary storage 118, storing instructions, data, or any combination thereof. The other components of the computing system 100 may include a graphics accelerator, a memory controller hub, an I/O controller hub, a wireless transceiver, a flash BIOS, a network controller, an audio controller, a serial expansion port, an I/O controller, and the like. These elements perform their conventional functions that are well known to those familiar with the art.

In one implementation, the processor 112 includes a Level 1 (L1) internal cache memory. Depending on the architecture, the processor 112 may have a single internal cache or multiple levels of internal caches. Other implementations include a combination of both internal and external caches, depending on the particular implementation and needs. A register file is to store different types of data in various registers, including integer registers, floating point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, configuration registers, and instruction pointer registers.

It should be noted that the execution unit may or may not have a floating point unit. In one implementation, the processor 112 includes a microcode (ucode) ROM to store microcode, which, when executed, is to perform algorithms for certain macroinstructions or to handle complex scenarios. Here, microcode is potentially updatable to handle logic bugs/fixes for the processor 112.

Alternate implementations of an execution unit may also be used in microcontrollers, embedded processing devices, graphics devices, DSPs, and other types of logic circuits. The system 100 includes a main memory 114 (which may also be referred to as memory 114). The main memory 114 includes a DRAM device, a static random access memory (SRAM) device, a flash memory device, or another memory device. The main memory 114 stores instructions and/or data represented by data signals that are to be executed by the processor 112. The processor 112 is coupled to the main memory 114 via a processing device bus. A system logic chip, such as a memory controller hub (MCH), may be coupled to the processing device bus and the main memory 114. The MCH can provide a high bandwidth memory path to the main memory 114 for instruction and data storage, as well as for storage of graphics commands, data, and textures. The MCH can be used to direct data signals between the processor 112, the main memory 114, and other components in the system 100, and to bridge the data signals between the processing device bus, the memory 114, and system I/O, for example. The MCH may be coupled to the memory 114 through a memory interface. In some implementations, the system logic chip can provide a graphics port for coupling to a graphics controller through an accelerated graphics port (AGP) interconnect.

The computing system 100 may also include an I/O controller hub (ICH). The ICH can provide direct connections to some I/O devices via a local I/O bus.
The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 114, the chipset, and the processor 112. Some examples are the audio controller, a firmware hub (flash BIOS), a wireless transceiver, a data storage device, a legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller. The data storage device can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.

For another implementation of a system, the instructions executed by the processing device core 120 described above can be used with a system on a chip. One implementation of a system on a chip comprises a processing device and a memory. The memory for one such system is a flash memory. The flash memory can be located on the same die as the processing device and other system components. Additionally, other logic blocks, such as a memory controller or a graphics controller, can also be located on a system on a chip.

Referring to FIG. 1B, this figure depicts a block diagram of the processor 112 of FIG. 1A, according to one implementation of the present disclosure. In one implementation, the processor 112 may execute an application stack 101 via a single core 120 or across several cores 120. As discussed above, the processor 112 may provide a TD architecture and TDX to provide confidentiality (and integrity) for customer software running in the customer/tenant (i.e., TD 190A) in an untrusted cloud service provider (CSP) infrastructure. The TD architecture provides: memory isolation via the MOT 160; CPU state isolation, which incorporates CPU key management, via the TDCS 124 and/or the TDTCS 128; and a CPU measurement infrastructure for TD 190A software.

In one implementation, the TD architecture provides ISA extensions (referred to as TDX) that support confidential operation of OS and OS-managed applications (virtualized and non-virtualized). A platform with TDX enabled, such as a platform including the processor 112, can function as multiple encrypted contexts referred to as TDs. For ease of explanation, a single TD 190A is depicted in FIG. 1B. Each TD 190A can run VMMs, VMs, OSes, and/or applications. For example, TD 190A is depicted as hosting VM 195A.

In one implementation, the TDRM 180 may be included as part of VMM functionality (e.g., a root VMM). A VMM may refer to software, firmware, or hardware to create, run, and manage virtual machines (VMs), such as VM 195A. It should be noted that the VMM may create, run, and manage one or more VMs. As depicted, the VMM 110 is included as a component of one or more processing cores 120 of a processing device 122. The VMM 110 may create and run the VM 195A and allocate one or more virtual processors (e.g., vCPUs) to the VM 195A. The VM 195A may be referred to as guest 195A herein. The VMM may allow the VM 195A to access hardware of the underlying computing system, such as the computing system 100 of FIG. 1A. The VM 195A may execute a guest operating system (OS). The VMM may manage the execution of the guest OS. The guest OS may function to control access of virtual processors of the VM 195A to underlying hardware and software resources of the computing system 100. It should be noted that, when there are numerous VMs 195A operating on the processing device 112, the VMM may manage each of the guest OSes executing on the numerous guests. In some implementations, the VMM may be implemented with the TD 190A to manage the VM 195A.
Such a VMM may be referred to as a tenant VMM and/or a non-root VMM, and is discussed in further detail below.

TDX also provides a programming interface for a TD management layer of the TD architecture referred to as the TDRM 180. The TDRM may be implemented as part of the CSP/root VMM. The TDRM 180 manages the operation of TDs 190A. While the TDRM 180 can assign and manage resources, such as CPU, memory, and input/output (I/O), to the TDs 190A, the TDRM 180 is designed to operate outside of the TCB of the TDs 190A. The TCB of a system refers to a set of hardware, firmware, and/or software components that have an ability to influence the trust for the overall operation of the system.

In one implementation, the TD architecture is thus a capability to protect software running in a TD 190A. As discussed above, components of the TD architecture may include 1) memory encryption via a TME engine having multi-key extensions to TME (e.g., the MK-TME engine 145 of FIG. 1A), 2) a software resource management layer (TDRM 180), and 3) execution state and memory isolation capabilities in the TD architecture.

FIG. 2A is a block diagram depicting an example computing system implementing a TD architecture 200. The TD architecture 200 supports two types of TDs. The first type of TD is one where the tenant trusts the CSP to enforce confidentiality and does not implement the TD architecture of implementations of the present disclosure. This type of legacy TD is depicted as TD1 210. TD1 210 is a CSP TD having a TCB 202 managed by the CSP VMM. TD1 210 may include a CSP VMM 212 that manages a CSP VM 214 and/or one or more tenant VMs 216A, 216B. In this case, the tenant VMs 216A, 216B are managed by the CSP VMM 212, which is in the TCB 202 of the VMs 216A, 216B. In implementations of the present disclosure, the tenant VMs 216A, 216B may still leverage memory encryption via TME or MK-TME in this model (described further below).

The other type of TD is one where the tenant does not trust the CSP to enforce confidentiality and therefore relies on the CPU with the TD architecture of implementations of the present disclosure. This type of TD is shown in two variants as TD2 220 and TD3 230. TD2 220 is shown with a virtualization mode (e.g., VMX) being utilized by the tenant VMM (non-root) 222 running in TD2 220 to manage tenant VMs 225A, 225B. TD3 230 does not include software using a virtualization mode, but instead runs an enlightened OS 235 directly in TD3 230. TD2 220 and TD3 230 are tenant TDs having a hardware-enforced TCB 204, as described in implementations of the present disclosure. In one implementation, TD2 220 or TD3 230 may be the same as the TD 190A described with respect to FIGS. 1A and/or 1B.

The TDRM 180 manages the life cycle of all three types of TDs 210, 220, 230, including the allocation of resources. However, the TDRM 180 is not in the TCB for TDs of types TD2 220 and TD3 230. The TD architecture 200 does not place any architectural restrictions on the number or mix of TDs active on the system. However, software and certain hardware limitations in a specific implementation may limit the number of TDs running concurrently on the system, due to other constraints.

FIG. 2B is a block diagram depicting an example of a TD architecture 250 and the interactions between a TD 220 and a TDRM 280. In one implementation, the TD 220 and the TDRM 280 are the same as their counterparts described with respect to FIG. 2A. The TD architecture 250 may be the same as the TD architecture provided by the computing device 100 of FIGS. 1A and 1B, and/or the TD architecture 200 of FIG.
2A. The TD architecture 250 provides a layer that manages the life cycle of TDs active on the system. Processor support for TDs is provided by a form of processor operation called a TDX operation. There are two kinds of TDX operations: a resource-manager operation and a tenant operation. In general, the TDRM 180 runs in the TDX resource-manager operation, and TDs (e.g., TD2 220) run in the TDX tenant operation. Transitions between the resource-manager operation and the tenant operation are called TDX transitions.

There are two kinds of TDX transitions: TD entry 270 and TD exit 260. Transitions from the TDX resource-manager operation into the TDX tenant operation are called TD entries 270. Transitions from the TDX tenant operation to the TDX resource-manager operation are called TD exits 260.

Processor behavior in the TDX resource-manager operation is similar to the processor behavior outside of TDX operation. The principal differences are that a set of TDX operations (TDX instructions) is available and that the values that can be loaded into certain control registers are limited to restrict the modes and abilities of the TDRM 180.

Processor behavior in the TDX tenant operation is similarly restricted to facilitate isolation. For example, instead of ordinary operation, certain events cause TD exits 260 to the TDRM 180. These TD exits 260 do not allow the TDRM 180 to modify the behavior or state of the TD 220. The TDRM 180 uses platform capabilities to retain control of platform resources. Software running in a TD 220 may use software-visible information to determine that it is running in a TD 220, and may enforce local measurement policies on additional software loaded into the TD 220. However, verifying the security state of the TD 220 is performed by a remote attestation party to ensure confidentiality.

The TD architecture 250 is designed to minimize the compatibility impact on software that relies on virtualization when running in a TD 220, and therefore leaves most interactions between a VM 225A, 225B running in the tenant operation and a tenant VMM 222 running in the tenant operation unchanged. If there is no VMM 222 present in a TD 220, the VM OS may be modified to work with the TDRM 180 as the root VMM.

In one implementation, the TDRM 180 may explicitly decide to cause a TD exit 260, for example, to terminate a TD 220 or to manage memory resources (e.g., yield assigned memory resources, request free memory resources, etc.). The TD architecture 250 also provides the TDRM 180 with the ability to force TD exits 260 for preemption. On TD exits 260, the TD architecture enforces that the execution state of the TD 220 is saved in a CPU access-controlled memory assigned to the TD 220, and that the execution state is encrypted with a unique encryption key (discussed further below) of the TD 220 that is not visible to the TDRM 180 or other TDs, to protect the confidentiality of the TD state from the TDRM 180 and the other TDs. The TD execution state may similarly be protected against spoofing, remapping, and/or replay via integrity controls on the memory.

TD entry 270 is a complementary event to TD exit 260. For example, a TD entry 270 may occur when the TDRM 180 schedules a TD 220 to run on a logical processor and transfers execution to the software running in the TD 220.
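The pair of transitions just described can be summarized with the following hedged C sketch: on a TD exit, the tenant state is saved under the TD's key, the key ID is switched, and the TDRM state and exit information are loaded; a TD entry reverses the process. All structures, fields, and ordering here are hypothetical placeholders for processor-internal behavior, since the real transitions are performed by hardware rather than by software.

```c
#include <stdint.h>

/* Hypothetical, simplified views of the control structures. */
struct td_tcs { uint64_t rip, rsp; };              /* tenant state      */
struct td_rcs { uint64_t rip, rsp, exit_reason; }; /* TDRM state + info */
struct cpu    { uint64_t rip, rsp; unsigned key_id; };

/* TD exit 260: tenant operation -> resource-manager operation. */
static void td_exit(struct cpu *cpu, struct td_tcs *tcs,
                    struct td_rcs *rcs, unsigned tdrm_key_id,
                    uint64_t reason)
{
    tcs->rip = cpu->rip;          /* saved encrypted under the TD key */
    tcs->rsp = cpu->rsp;
    rcs->exit_reason = reason;    /* exit information for the TDRM    */
    cpu->key_id = tdrm_key_id;    /* TD pages no longer decrypt       */
    cpu->rip = rcs->rip;          /* resume the TDRM                  */
    cpu->rsp = rcs->rsp;
}

/* TD entry 270: the complementary transition back into the TD. */
static void td_enter(struct cpu *cpu, const struct td_tcs *tcs,
                     unsigned td_key_id)
{
    cpu->key_id = td_key_id;      /* re-enable TD key-ID enforcement  */
    cpu->rip = tcs->rip;          /* restart the tenant VMM or OS     */
    cpu->rsp = tcs->rsp;
}

int main(void)
{
    struct cpu c = {0};
    struct td_tcs tcs = {0};
    struct td_rcs rcs = {0};
    td_exit(&c, &tcs, &rcs, /*tdrm_key_id=*/0, /*reason=*/1);
    td_enter(&c, &tcs, /*td_key_id=*/5);
    return 0;
}
```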
During TD entries 270, the TD architecture 250 enforces that the execution state of the TDRM 180 is saved in a memory owned by the TDRM, which is encrypted using a unique encryption key assigned for sole use by the TDRM 180.

TDs, such as TD 220, can be set up by the TDRM 180 using the TDCREATE (to create the TDCS), TDTCREATE (to create the TD-TCS), and TDADDPAGE instructions, which cause memory belonging to a TD 220 to be encrypted using the TD's unique encryption key that is not visible or accessible to the TDRM 180 or other TDs. Before executing any instructions belonging to the TD, all TD memory is encrypted using the TD's unique key. Although specific instruction names are referenced herein, other names for the instructions may be utilized in implementations of the present disclosure and are not limited to the specific names provided herein.

In one implementation, the TDRM 180 can launch each TD 220 with a small software image (similar to an IBB, or initial boot block) after signature verification, and record the IBB measurement (for subsequent attestation) using the platform root of trust. It is the IBB software executing in the TD 220 that is responsible for completing the measured launch of the TD 220 and for requesting additional resources from the TDRM 180. The TD 220 has the option to use a single encryption key for the entire TD 220, or to use additional encryption keys for different tenant VMs 225A, 225B (and/or containers, or different memory resources such as NVRAM) when running inside the TD 220. Thus, when the TD 220 is first set up, the TD 220 is using an exclusive CPU-generated MK-TME key. Thereafter, the TD 220 may optionally set up additional MK-TME encryption keys for each tenant-software-managed context operating inside the TD 220 (e.g., tenant VMs 225A and 225B, containers, or other memory types).

In order to minimize the software compatibility impact on both of the CSP's VMMs (e.g., the TDRM root VMM 180 and the tenant VMM 222), virtualization (e.g., VMX) operation may remain unmodified inside a TD 220 in the TD architecture 250. Similarly, the operation of VMM software, such as extended page table (EPT) management, can remain under the control of the tenant VMM 222 (if one is active in the TD 220 and is not managed by the TDRM 180). As the TDRM 180 assigns physical memory for each TD 220, the TD architecture 250 includes the MOT (i.e., the MOT 160 described with respect to FIGS. 1A and 1B). The processor 112 consults the TDRM-managed MOT to allocate assignments of memory to the TDs 220. This allows the TDRM 180 the full ability to manage memory as a resource without having any visibility into data resident in assigned TD memory. In some implementations, as discussed above, the platform (e.g., root) VMM and the TDRM 180 may be in the same encryption-key domain, thus sharing the memory management and scheduler functions (but still remaining outside of the tenant's TCB).

FIG. 3 is a block diagram depicting another example of a TD architecture 300. The TD architecture 300 depicts an I/O concept for TDs. In one implementation, the TD architecture 300 may allow all I/O devices (e.g., NIC 320, storage device 330, single-root input/output virtualization (SR-IOV) NIC 240, etc.) to be attached to a TD1 210 that trusts the CSP and the TDRM (for example, legacy TD1 210). In one implementation, the TD architecture 300 may not allow direct assignment of devices (including SR-IOV and scalable I/O) to a tenant TD that does not trust the CSP software (e.g., tenant TD2 220).
In contrast, the TDRM 180 may provide the ability to share memory 310 between a CSP TD (e.g., TD1 210) and other TDs (e.g., tenant TD2 220) to implement synthetic ("syn") devices (e.g., syn NIC 325, syn storage device 335) in the non-CSP TDs (e.g., tenant TD2 220). In some implementations, a tenant TD that does not trust the CSP software (e.g., tenant TD2 220) may be responsible for protecting its I/O data. The TD architecture 300 may not protect I/O data exposed via the shared memory 310. In some implementations, the I/O data may be protected by using existing security protocols between the communicating endpoints.

Referring back to FIG. 1B, the MOT 160 (which may be referred to as a TD-MOT) is a structure, such as a table, managed by the processor 112 to enforce the assignment of physical memory pages to executing TDs, such as the TD 190A. The processor 112 also uses the MOT 160 to enforce that physical addresses referenced by software operating as a tenant TD 190A or the TDRM 180 cannot access memory not explicitly assigned to it.

The MOT 160 enforces the following properties. First, software outside a TD 190A should not be able to access (read/write/execute) in plaintext any memory belonging to a different TD (this includes the TDRM 180). Second, memory pages assigned via the MOT 160 to a specific TD (e.g., TD 190A) should be accessible from any processor in the system (where the processor is executing the TD that the memory is assigned to).

The MOT 160 structure is used to hold meta-data attributes for each 4 KB page of memory. Additional structures may be defined for additional page sizes (2 MB, 1 GB). The meta-data for each 4 KB page of memory is directly indexed by the physical page address. In other implementations, other page sizes may be supported by a hierarchical structure (like a page table).

A 4 KB page referenced in the MOT 160 can belong to one running instance of a TD 190A. A 4 KB page referenced in the MOT 160 can either be valid memory or marked as invalid (and hence could be IO, for example). In one implementation, each TD instance 190A includes one page holding the TDCS 124 for that TD 190A.

In one implementation, the MOT 160 is aligned on a 4 KB boundary of memory and occupies a physically contiguous region of memory protected from software access after platform initialization. In an implementation, the MOT is a micro-architectural structure and cannot be directly accessed by software. Architecturally, the MOT 160 holds the following security attributes for each 4 KB page of host physical memory:

- Page Status 162 - valid/invalid bit (whether the page is valid memory or not)

- Page Category - DRAM, NVRAM, IO, Reserved

- Page State 163 - (4-bit vector) specifies whether the page is:

- Bit 1 - Free (page is not assigned to a TD and is not used by the TDRM)

- Bit 2 - Assigned (page is assigned to a TD or the TDRM)

- Bit 3 - Blocked (page is blocked while being freed/(re)assigned)

- Bit 4 - Pending (dynamic page assigned to a TD but not yet accepted by the TD)

- TDID 164 - (40-bit) TD identifier that assigns the page to a specific unique TD (the address of the TDCS)

In some implementations, an extended MOT 160 entry may be supported, which further includes:

- Page Key ID 165 - (8 bits; size is implementation specific) specifies the per-page encryption key expected to match the key ID obtained during a processor page walk for physical memory referenced by the TD. If the MOT 160 entry is not an extended entry, the page key ID is derived from the TDCS 124.
One of the key ID values specified in the MOT can be used to share memory contents with the TDRM (or root VMM). A shared page may hold input and output buffers for transfer to hardware devices managed by the TDRM. Similarly, shared pages may be used to emulate virtual devices exposed to the TD by the TDRM.

- Guest Physical Address 166 - (52 bits) specifies the expected guest physical address used by software executing in the TD. (This field is used when the TDRM 180 is expected to perform memory remapping and to implement a capability to swap memory.)

- Guest Permissions 167 - asserted on the final page (for user and supervisor execute, read, and write). There may be multiple sets of these permission bits to support a VMM executing inside a TD.

The MOT 160 may be enabled when TDX is enabled in the processor 112 (e.g., via a CR4 enable bit, after CPUID-based enumeration). Once the MOT 160 is enabled, the MOT 160 can be used by the processor 112 to enforce memory access control for all physical memory accesses initiated by software, including the TDRM 180. In one implementation, the access control is enforced during the page walk for memory accesses made by software. Physical memory accesses performed by the processor 112 to memory that is not assigned to a tenant TD 190A or to the TDRM 180 fail with abort-page semantics.

In implementations of the present disclosure, the TDRM 180 manages memory resources via the MOT 160 using a MOT operation instruction (TDMOTOP) with the following instruction leaves:

Add page to MOT (TDMOTADDPAGE) - Marks a free MOT 160 entry corresponding to a host physical address (HPA) as (exclusively) assigned to the TD 190A specified by a TDID. Any other prior page state causes a fault. This instruction forces a cross-thread TLB shootdown to confirm that no other TD 190A is caching a mapping to this HPA. This instruction leaf can be invoked by the TDRM 180. If the TDRM 180 has enabled the extended MOT, the instruction may specify the initial guest physical address (GPA) that is mapped to the specified HPA. The processor 112 verifies that the GPA is mapped to the HPA by walking the EPT structure managed by the TDRM 180. A variant of the add page instruction (TDMOTAUGPAGE) may be implemented that assigns a page to a TD but does not capture a measurement of the page.

Revoke page from MOT (TDMOTREVOKEPAGE) - Marks the specified page as a free page. This instruction forces a cross-thread TLB shootdown to confirm that subsequent TD 190A accesses check for HPA ownership, and the page contents are cleared by the processor 112. A TD 190A access that experiences a MOT 160 page fault during TLB fill causes the processor 112 to invalidate the TDCS 124, which prevents further TD entries into the TD 190A. This instruction leaf can be invoked by the TDRM 180.

Block page in MOT (TDMOTBLOCKPAGE) - Marks a free or assigned MOT 160 entry corresponding to an HPA as blocked for software usage. Any other prior page state causes a TDRM 180 fault. This instruction forces a cross-thread TLB shootdown to confirm that subsequent TD 190A accesses check for HPA ownership. This instruction leaf can be invoked by the TDRM 180.

Unblock page in MOT (TDMOTUNBLOCKPAGE) - Marks a blocked MOT 160 entry corresponding to an HPA as valid for software usage/assignment. Any other prior page state causes a fault. This instruction leaf can be invoked by the TDRM 180.

Memory assigned to a TD 190A may be returned to the TDRM 180 via an explicit TDCALL, after the TD software has cleared any secrets in that memory.
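One way to picture the per-page security attributes and states described above is the following C sketch of a MOT entry. The field widths follow the text, but the packing, names, and enumeration values are assumptions made for illustration; as the text notes, the MOT is micro-architectural and not directly accessible to software, so no software-visible layout is defined.

```c
#include <stdint.h>

/* Page State 163 bits, as enumerated above. */
enum page_state_bits {
    PS_FREE     = 1 << 0, /* not assigned to a TD, unused by the TDRM */
    PS_ASSIGNED = 1 << 1, /* assigned to a TD or to the TDRM          */
    PS_BLOCKED  = 1 << 2, /* blocked while being freed/(re)assigned   */
    PS_PENDING  = 1 << 3, /* assigned to a TD, not yet accepted       */
};

/* One entry per 4 KB host physical page; the bitfield packing below
 * is illustrative only (bitfield layout is implementation-defined). */
struct mot_entry {
    uint64_t valid    : 1;  /* Page Status 162: valid memory?         */
    uint64_t category : 2;  /* DRAM, NVRAM, IO, Reserved              */
    uint64_t state    : 4;  /* Page State 163 (enum page_state_bits)  */
    uint64_t tdid     : 40; /* TDID 164: owning TD's identifier       */
    /* Extended-entry fields: */
    uint64_t key_id   : 8;  /* Page Key ID 165 (size impl.-specific)  */
    uint64_t gpa;           /* Guest Physical Address 166 (52 bits
                               significant)                           */
    uint8_t  guest_perm;    /* Guest Permissions 167: user/supervisor
                               execute, read, write                   */
};

int main(void)
{
    struct mot_entry e = {0};
    e.state = PS_ASSIGNED;  /* e.g., after a TDMOTADDPAGE leaf        */
    e.tdid  = 42;
    return (int)e.state;
}
```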
Extended operation of the MOT 160 is used in the following cases: (1) a VMM in the TD 190A may have remapped GPAs used within the TD, and/or (2) the TDRM 180 may want to swap memory assigned to the TD 190A. In both cases, a TDRM 180 EPT violation would be generated due to the mismatched GPA used during the page walk. The following extended MOT instruction leaves address the above cases:

Modify PMA in MOT (TDMOTMODPMA) - To address the first case above, the TDRM 180 uses this extended MOT 160 instruction to update the MOT 160 security attributes for the page used by the TD 190A. The TDRM 180 provides the GPA, which is used by the CPU to walk the EPT structure managed by the TD VMM and to retrieve the new GPA referenced by the TD VMM. The processor 112 then performs a walk of the TDRM 180 EPT to find the referenced HPA and, if the page is assigned to the active TD 190A, updates the expected GPA attribute to match the mismatched GPA reported during the faulting walk. The TDRM 180 can then restart the TD 190A.

For the second case above, where the TDRM 180 has unmapped a GPA from its EPT structure, on the fault, the page should be marked as not available to software using the block page in MOT instruction (TDMOTBLOCKPAGE), and the extended MOT 160 instructions TDEXTRACT and TDINJECT should be used to create a cryptographically protected, swappable version of the page contents that can be restored to a newly assigned HPA. The TDEXTRACT (and TDINJECT) instructions capture (and verify, respectively) cryptographically signed integrity information for the swapped TD pages so that they can be verified when restored. The cryptographic information may include counters to ensure that a malicious TDRM cannot replay stale pages.

In one implementation, initialization of the TDRM 180 begins with enabling TDX in the processor 112 (e.g., by setting the CR4.TDXE bit or via a VMX MSR control bit during VMXON). TDX support can be enumerated via CPUID. Once TDX is enabled, the TDRM 180 executes (i.e., runs) a TDX-mode instruction (TDXON) to enable a TDX mode of the processor; alternately, the mode may be enabled as part of VMXON. TDXON registers a naturally aligned 4 KB region of memory that a logical processor uses as the TDRM 180 state area. In one implementation, the TDRM 180 state area is stored as TDRM state 185 in a TDRM control structure (TDRCS) 182; the TD-RCS may also be implemented as a new type of VMCS that contains only host state, controls, and TD exit information. In one implementation, the TDCS and TD-TCS are access-controlled via the MOT 160 (e.g., an encryption key ID stored in the MOT 160 is used to enforce memory access controls). In another implementation, the TDCS and TD-TCS are access-controlled via storage in one or more defined range registers of the processor 112 (e.g., range registers 130) that are inaccessible to software accesses. The TDRM state 185 is described in further detail below. The physical address of the 4 KB page used for the TDRCS 182 is provided in an operand to TDXON. The TDRM 180 makes this page inaccessible to all TDs 190A via the MOT 160. The TDRM 180 should initialize and access the TDRCS 185.
The TDRM 180 should use a separate TDRCS 185 for each logical processor. In one implementation, the example TDRM state 185 initialized by the TDRM 180 and loaded by the processor 112 on a TD exit may include, but is not limited to, the following state depicted in Table 1 below:

Table 1: Processor State (64-bit) Loaded from the TDRCS on TD Exit

- RIP - Linear address in the TDRM address space where execution starts, in TD root mode, on a TD exit
- RSP - TDRM stack pointer (linear address)
- ES Selector - Segment information
- CS Selector - Segment information
- SS Selector - Segment information
- DS Selector - Segment information
- FS Selector - Segment information
- GS Selector - Segment information
- TR Selector - Segment information
- FS Base - Segment base
- GS Base - Segment base
- TR Base - Segment base
- GDTR Base - Segment base
- IDTR Base - Segment base
- CR0 - Force PG/NE/PE=1, ignore CD/NW
- CR3 - Allows the TDRM to specify
- CR4 - Force VMXE/PAE=1
- IA32_PAT - Allows the TDRM to specify

The following processor state is automatically set up/fixed on a TD exit (and hence is not specified in the TD-RCS):

- CR0, CR4 in 64-bit mode (additional CR4 mask values may be required)
- DR7 and DRs: cleared (the impact of the PDR bit needs to be considered)
- IA32_DEBUGCTL, IA32_PERF_GLOBAL_CTRL, IA32_PAT, IA32_BNDCFGS
- IA32_EFER (to ensure 64-bit mode)
- Segment registers (base-limit access): same as VM exit
- RFLAGS: same as VM exit - set to 0x2
- LDTR: same as VM exit - null

The following processor state is automatically cleared on a TD exit (and hence is not specified in the TD-RCS):

- IA32_SYSENTER_CS/EIP/ESP
- IA32_KERNEL_GS_BASE
- IA32_STAR/FMASK/LSTAR
- GPRs (except RSP)
- XSAVE state
- Extended state (x87/SSE, CET, etc.) - may be considered optional, along with other conditional state

The TD-RCS also holds the control fields and an exit information structure (used to report TD exit information), as provided in Table 2 below:

Table 2: TD-RCS Structure

- MSR Access Control Bitmap Address - 64-bit physical address of the 4 KB page holding the MSR access control bitmap
- XSAVES Access Control Bitmap - 64-bit XSAVES access control bitmap
- Extended Page Table Pointer - 64-bit EPTP
- TD Preemption Timer - 64-bit TD preemption timer
- TD-TCS Slot ID - Links this TD-RCS to a specific TD-TCS for the duration of the TD entry

Table 3, depicted below, details the exit information fields in the TD-RCS:

Table 3: TD-RCS Exit Information Fields

- TDEXIT_REASON - 64-bit value (n bits valid, 64-n bits reserved); see the table below for values
- TDEXIT_QUAL - See the table below

In one implementation, a TD 190A may be created and launched by the TDRM 180. The TDRM 180 uses the TD create instructions (TDCREATE and TDTCREATE) to create the TD 190A. The TDRM 180 selects a 4 KB-aligned region of physical memory and provides this as a parameter to the TD create instruction. This region of memory is used as the TDCS 124 for the TD 190A. When executed, the TDCREATE instruction causes the processor 112 to verify that the destination 4 KB page is assigned to the TD (using the MOT 160). The TDCREATE instruction also causes the processor 112 to generate an ephemeral memory encryption key and key ID for the TD 190A, and to store the key ID in the TDCS 124. The processor 112 then initializes the page contents on the destination page using the encryption key assigned to the TD. In one implementation, initializing the page contents includes initializing the TD state of the TD, which is described further below with respect to the TDTCS 128.
In one implementation, the TD 190A can be created and launched by the TDRM 180. The TDRM 180 uses the TD creation instructions (TDCREATE and TDTCREATE) to create the TD 190A. The TDRM 180 selects a 4KB-aligned region of physical memory and provides it as a parameter to the TD creation instruction. This memory region is used as the TDCS 124 of the TD 190A. When executed, the TDCREATE instruction causes the processor 112 to verify that the destination 4KB page is assigned to the TD (using the MOT 160). The TDCREATE instruction also causes processor 112 to generate a transient memory encryption key and key ID for TD 190A and store the key ID in the TDCS 124. Processor 112 then initializes the page content on the destination page using the encryption key assigned to the TD. In one implementation, initializing the page content includes initializing the TD state of the TD, which is described further below with respect to the TDTCS 128. The TDCREATE instruction then causes the processor 112 to initialize a hash for the TD measurement in the TDCS 124.

In one implementation, the TDRM 180 installs the IBB code/data for the TD 190A using the TDADDPAGE instruction (discussed above), which specifies, as parameters, the address of the TDCS 124 page of the TD 190A, the address of the TD image code/data page in the TDRM address space, and the physical page assigned to the TD 190A. Processor 112 then verifies that the destination 4KB page is assigned to TD 190A. Once verified, the processor 112 extends the measurement hash for the TD 190A in the TDCS 124. The processor then copies the page content from the source page to the destination page using the unique encryption key assigned to the TD 190A.

The TDRM 180 provides the TD boot configuration via a data page that contains the physical memory map (and an identity page table). The TDRM 180 initializes the physical memory, and the processor 112 verifies that the pages, including the identity page table, are assigned to the TD 190A. The TDRM 180 then uses the TDINIT instruction to finalize the TD 190A measurement. The TDRM 180 can then begin executing the TD 190A using the TDENTER instruction (which uses the TDTCS 128, as described further below).
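The following C sketch shows the ordering of this build-and-launch sequence. The tdx_* helpers are stand-ins for the TDCREATE, TDADDPAGE, TDINIT, and TDENTER instruction leaves; they are not a real API and exist only to make the sequence concrete.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-ins for the TD creation instruction leaves. */
    extern int tdx_create(uint64_t tdcs_hpa);
    extern int tdx_add_page(uint64_t tdcs_hpa, uint64_t src_va, uint64_t dst_hpa);
    extern int tdx_init(uint64_t tdcs_hpa);
    extern int tdx_enter(uint64_t tdcs_hpa, unsigned cpu_index);

    /* Sketch of the TD build-and-launch sequence described above. */
    int build_and_launch_td(uint64_t tdcs_hpa,
                            const uint64_t ibb_src[], const uint64_t ibb_dst[],
                            size_t n_ibb)
    {
        if (tdx_create(tdcs_hpa) != 0)      /* verify page via MOT, generate
                                               key + key ID, init measurement */
            return -1;
        for (size_t i = 0; i < n_ibb; i++)  /* per IBB page: verify ownership,
                                               extend hash, copy + encrypt    */
            if (tdx_add_page(tdcs_hpa, ibb_src[i], ibb_dst[i]) != 0)
                return -1;
        if (tdx_init(tdcs_hpa) != 0)        /* finalize the TD measurement */
            return -1;
        return tdx_enter(tdcs_hpa, 0);      /* enter via TD-TCS for CPU 0  */
    }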
Referring now to the TDCS 124, this control structure specifies the controls that processor 112 initializes when TD 190A is successfully created. The TDCS 124 is available when the TD 190A is enabled. In one implementation, the TDCS occupies a 4KB naturally aligned memory region. After the TDCREATE instruction is successfully executed, the page identified as the TDCS 124 in the MOT 160 is blocked against software read/write. In one implementation, the TDCS 124 is accessed via the MOT 160 (e.g., as described above, the key ID assigned to the TDCS 124 and stored in the MOT 160 is used during page walks of processor 112 to prevent unauthorized software reads/writes). In another implementation, the TDCS 124 is accessed via storage in one or more defined range registers of the processor 112 that is inaccessible to software access. The TDCS 124 may include, but is not limited to, the fields depicted in Table 4 below:

    Field           Size (bytes)                           Description
    REVISION        4                                      Revision identifier 126
    TDID            8 (40 bits valid, remainder reserved)  TD identifier 190A
    COUNT_TCS       4 (16 bits valid, remainder reserved)  Number of TD-TCS 142 associated with this TDCS
    COUNT_BUSY_TCS  4 (16 bits valid, remainder reserved)  Number of busy TD-TCS associated with this TDCS
    KID_ENTRY_0     8 (8 bits valid, remainder reserved)   Transient key ID assigned to the key of TD 190A during TDCREATE
    KID_ENTRY_1     8 (8 bits valid, remainder reserved)   Key ID 1 assigned to the TD during TDCREATE; the TD can assign a key via PCONFIG
    KID_ENTRY_2     8 (8 bits valid, remainder reserved)   Key ID 2 assigned to the TD during TDCREATE; the TD can assign a key via PCONFIG
    KID_ENTRY_3     8 (8 bits valid, remainder reserved)   Key ID 3 assigned to the TD during TDCREATE; the TD can assign a key via PCONFIG
    ATTRIBUTES      16                                     Attributes of the trusted domain (see Table 5 below)
    MRTD            48                                     SHA-384 measurement of the initial contents of the TD 138
    RESERVED        16 (must be 0)                         Reserved for MRTD growth to SHA-512
    MRSWID          48                                     Software-defined identifier for additional logic loaded after initial build
    MRCONFIGID      48                                     Software-defined identifier for additional TD software configuration
    MROWNER         48                                     Software-defined identifier for the owner of the VM
    MROWNERCONFIG   48                                     Software-defined identifier for additional image configuration from the owner
    XCR0            8                                      Initial value of XCR0
    OWNERID         8                                      Owner ID
    MRTDBLOCKS      4                                      Number of blocks updated into the MRTD (pre-TDINIT only)
    COUNT_TCS_MAX                                          Maximum number of logical processors that may be assigned to this TD (maximum possible 4095)
    RESERVED                                               Reserved (other TD metadata) 143

Table 4: TDCS structure

The TDCS.ATTRIBUTES field has the bit structure depicted in Table 5 below:

Table 5: TDCS.ATTRIBUTES field bit structure

The TD 190A may request that the TDRM 180 assign N logical processors (CPUs) to the TD 190A. For each requested CPU, the TDRM 180 adds a TDTCS 128 page to TD 190A using TDADDPAGE (parameters <op, TDCS, TD CPU index, HPA>). Processor 112 verifies that the destination 4KB page is assigned to TD 190A. The processor 112 updates TCSList[index] 142 in the TDCS 124 for use by the TD 190A. The TDTCS 128 refers back to its parent TDCS 124 (which is specified in the TDADDPAGE instruction parameters).

The TDRM 180 uses the TDENTER instruction (parameters <TDCS, CPU index>) to enter the TD 190A. This activates the TDTCS 128 (and the referenced TDCS 124). The TDENTER instruction checks that the TDTCS 128 is not already active. On TDENTER, processor 112 activates TD 190A key ID enforcement by the page miss handler (PMH)/TLB. Processor 112 then loads the TD state from the TDTCS 128 and begins TD 190A execution. (A sketch of these checks is given below.)

The TDTCS 128 holds the execution state of the logical processor assigned to the TD 190A. If a TD exit condition occurs while the processor 112 is in TD tenant mode, the TD exit saves the tenant's execution state in the TDTCS 128. In one implementation, the TDTCS 128 is accessed via the MOT 160 (e.g., as described above, the key ID is used during page walks of processor 112 to prevent unauthorized software reads/writes). In another implementation, the TDTCS 128 is accessed via storage in one or more defined range registers of the processor 112 that is inaccessible to software access.

If a TD exit occurs while the processor 112 is operating in the context of a non-root VMM within the TD 190A, the TD exit does not (yet) report a VM exit (e.g., VM exit 280 of FIG. 2B) to the TD VMM (e.g., TD VMM 222); it saves the tenant VMM state in the TDTCS 128 and performs the TD exit (switching key ID enforcement). A subsequent TDENTER invoked by the TDRM 180 switches key ID enforcement back, restoring the tenant state from the TDTCS 128 (within the TD 190A) to resume the tenant VMM or OS. Accordingly, if the processor 112 was operating in the context of a non-root VMM during the previous TD exit, the TD entry reports the VM exit (on the TD entry) to the tenant VMM.
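The sketch below models the TDENTER checks just described. The struct layout and the pmh_set_key_id/load_td_state helpers are hypothetical stand-ins for internal processor state and microcode actions, shown only to make the control flow concrete.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative TD-TCS header; the real saved state is listed in
     * Tables 6 through 9 below. */
    typedef struct {
        uint8_t  state;        /* 0 = available for TDENTER, 1 = active */
        uint64_t parent_tdcs;  /* 64b HPA link back to the owning TDCS  */
        /* ... saved supervisor and user execution state ... */
    } td_tcs_t;

    extern void pmh_set_key_id(uint8_t key_id); /* stand-in: PMH/TLB key ID */
    extern void load_td_state(td_tcs_t *tcs);   /* stand-in: restore state  */

    /* Sketch of TDENTER: refuse re-entry through an active TD-TCS,
     * switch key ID enforcement, then load the tenant state. */
    bool tdenter(td_tcs_t *tcs, uint8_t td_key_id)
    {
        if (tcs->state != 0)
            return false;          /* TD-TCS already bound to a logical CPU */
        tcs->state = 1;
        pmh_set_key_id(td_key_id); /* enforce the TD's key ID in PMH/TLB    */
        load_td_state(tcs);        /* begin TD execution from saved state   */
        return true;
    }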
As discussed above, the TDTCS 128 holds the execution state of the TD 190A. The TDTCS can be non-architectural and can hold the fields detailed in Tables 6 through 9 below:

    Field       Description
    STATE       TD virtual processor execution status. A value of 0 indicates that this TD-TCS is available for TDENTER; a value of 1 indicates that the TD-TCS is active on a logical processor (currently executing a TD using this TD-TCS).
    TDCS        Link back to the "parent" TDCS (64b HPA)
    FLAGS       TD-TCS execution flags (see Table 7 below)
    TD_STATE_S  TD state corresponding to supervisor mode; see Table 8 below
    TD_STATE_U  TD state corresponding to user mode; see Table 9 below

Table 6: TD-TCS fields

    Field     Bit position  Description
    DEBUG     0             Debug opt-in flag for this TD-TCS
    RESERVED  63:1          N/A

Table 7: TD-TCS execution flags

    Field              Description
    CR0                Initial state set by TDCREATE; thereafter saved/loaded with a mask applied
    CR2                Saved/loaded, initialized to 0
    CR3                Saved/loaded, initialized by the TD OS
    CR4                Initial state set by TDCREATE; thereafter saved/loaded with a mask applied
    DR0                Saved/loaded, initialized clear
    DR1                Saved/loaded, initialized clear
    DR2                Saved/loaded, initialized clear
    DR3                Saved/loaded, initialized clear
    DR6                Saved/loaded, initialized clear
    DR7                Saved/loaded, initialized to debug disabled
    IA32_SYSENTER_CS   Saved/loaded, initialized by the TD OS
    IA32_SYSENTER_ESP  Saved/loaded, initialized by the TD OS
    IA32_SYSENTER_EIP  Saved/loaded, initialized by the TD OS
    SYSCALL MSRs       Saved/loaded, initialized by the TD OS
    IA32_EFER          Saved/loaded, initialized by the TD OS
    IA32_PAT           Saved/loaded, initialized by the TD OS
    IA32_BNDCFGS       Saved/loaded, initialized by the TD OS
    ES segment info    Selector, base, limit, AR byte
    CS segment info    Selector, base, limit, AR byte
    SS segment info    Selector, base, limit, AR byte
    DS segment info    Selector, base, limit, AR byte
    FS segment info    Selector, base, limit, AR byte
    GS segment info    Selector, base, limit, AR byte
    LDTR segment info  Selector, base, limit, AR byte
    TR segment info    Selector, base, limit, AR byte
    GDTR base          Saved/loaded, initialized by the TD OS
    GDTR limit         Saved/loaded, initialized by the TD OS
    IDTR base          Saved/loaded, initialized by the TD OS
    IDTR limit         Saved/loaded, initialized by the TD OS
    RIP                Saved/loaded, initialized by TDCREATE for the IBB
    RSP                Saved/loaded, initialized by TDCREATE for the IBB
    RFLAGS             Saved/loaded, initialized by TDCREATE for the IBB
    PDPTEs (32-bit PAE) Saved/loaded, initialized by the TD OS
    IA32_XSS           Saved/loaded, initialized by the TD OS
    XCR0               Saved/loaded, initialized by the TD OS
    Kernel_GS_BASE     Saved/loaded, initialized by the TD OS
    TSC_AUX            Saved/loaded, initialized by the TD OS

Table 8: TD-TCS supervisor execution state

    Field        Description
    RAX          Saved/loaded, initialized by the TD OS
    RBX          Saved/loaded, initialized by the TD OS
    RCX          Saved/loaded, initialized by the TD OS
    RDX          Saved/loaded, initialized by the TD OS
    RBP          Saved/loaded, initialized by the TD OS
    RSI          Saved/loaded, initialized by the TD OS
    RDI          Saved/loaded, initialized by the TD OS
    R8           Saved/loaded, initialized by the TD OS
    R9           Saved/loaded, initialized by the TD OS
    R10          Saved/loaded, initialized by the TD OS
    R11          Saved/loaded, initialized by the TD OS
    R12          Saved/loaded, initialized by the TD OS
    R13          Saved/loaded, initialized by the TD OS
    R14          Saved/loaded, initialized by the TD OS
    R15          Saved/loaded, initialized by the TD OS
    XSAVE state  Saved/loaded, initialized by the TD OS

Table 9: TD-TCS additional fields

In one implementation, the TD 190A can be destroyed by the TDRM 180. The TDRM 180 uses the TD destruction instructions (TDDESTROY and TDTDESTROY) to destroy the TD 190A. The CPU verifies that all memory assigned to the TD has been reclaimed and that all TD-TCS have been destroyed before it allows the TDCS itself to be destroyed.
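A minimal sketch of this destruction precondition follows, assuming a hypothetical software view of MOT entries; the names are illustrative only, and TD-TCS pages are assumed to have already been destroyed with TDTDESTROY.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Minimal hypothetical MOT entry for this check. */
    typedef struct {
        bool     valid;   /* page currently assigned to some TD */
        uint64_t td_id;   /* owning TD                          */
    } mot_entry_t;

    /* Sketch of the TDDESTROY precondition: no MOT entry may still
     * assign memory to the TD before its TDCS may be torn down. */
    bool tddestroy_allowed(const mot_entry_t mot[], size_t n, uint64_t td_id)
    {
        for (size_t i = 0; i < n; i++)
            if (mot[i].valid && mot[i].td_id == td_id)
                return false;   /* memory still assigned to this TD */
        return true;
    }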
FIG. 4 is a flow diagram of an example method 400 for providing isolation in a virtualized system using TDs, in accordance with one implementation. Method 400 can be performed by processing logic, which can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as operations performed by an MCU), firmware, or a combination thereof. In one implementation, method 400 is performed by processing device 112 of FIG. 1A or FIG. 1B. In another implementation, method 400 is performed by any of the processing devices described with respect to FIGS. 7-12. Alternatively, other components of computing system 100 (or software executing on processing device 112) may perform some or all of the operations of method 400.

Referring to FIG. 4, method 400 begins at block 410, where processing logic executes a TDRM to manage a TD that includes a VM executed by the processing device. At block 420, the processing logic maintains a TDCS for managing global metadata for one or more of the TD or other TDs executed by the processing logic. Then, at block 430, the processing logic maintains the execution state of the TD in a TD-TCS that is access-controlled against software access from at least one of the TDRM, a VMM, or the other TDs executed by the processing device.

Subsequently, at block 440, the processing logic references the MOT to obtain at least one key ID corresponding to the encryption key assigned to the TD. In one implementation, the key ID allows the processing logic to decrypt a memory page assigned to the TD in response to the processing device executing in the context of the TD, where the memory page assigned to the TD is encrypted with the encryption key. Finally, at block 450, the processing logic references the MOT to obtain the guest physical address corresponding to a host physical memory page assigned to the TD. In one implementation, a match between the guest physical address obtained from the MOT and the accessed guest physical address allows the processing device, in response to executing in the context of the TD, to access the memory page assigned to the TD.
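The access check of blocks 440 and 450 can be pictured with the following C sketch. The MOT entry layout and field names are assumptions for illustration; the real lookup happens in hardware during the page walk.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical MOT entry; field names are illustrative only. */
    typedef struct {
        bool     valid;
        uint64_t td_id;
        uint64_t expected_gpa;  /* GPA recorded for this host physical page */
        uint8_t  key_id;        /* key ID of the TD's encryption key        */
    } mot_entry_t;

    /* Sketch of blocks 440-450 of method 400: in TD context the MOT
     * supplies the key ID used to decrypt the page, and access is
     * allowed only if the GPA used by the TD matches the GPA the MOT
     * records for that host physical page. */
    bool mot_allows_access(const mot_entry_t *e, uint64_t td_id,
                           uint64_t accessed_gpa, uint8_t *key_id_out)
    {
        if (!e->valid || e->td_id != td_id)
            return false;            /* page not assigned to this TD   */
        if (e->expected_gpa != accessed_gpa)
            return false;            /* mismatched GPA: EPT violation  */
        *key_id_out = e->key_id;     /* used to decrypt the TD's page  */
        return true;
    }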
FIG. 5 is a flow diagram of an example method 500 for performing a TD exit when providing isolation in a virtualized system using TDs, in accordance with one implementation. Method 500 can be performed by processing logic, which can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as operations performed by an MCU), firmware, or a combination thereof. In one implementation, method 500 is performed by processing device 112 of FIG. 1A or FIG. 1B. In another implementation, method 500 is performed by any of the processing devices described with respect to FIGS. 7-12. Alternatively, other components of computing system 100 (or software executing on processing device 112) may perform some or all of the operations of method 500.

Referring to FIG. 5, method 500 begins at block 510, where processing logic identifies a TD exit event. In one implementation, a TDRM is managing the TD associated with the TD exit event, and the processing logic is executing in the context of the TD when the TD exit event is identified.

At block 520, in response to identifying the TD exit event, the processing logic saves the TD supervisor execution state and the user execution state of the TD, using a first key identifier (ID) corresponding to the first encryption key assigned to the TD, into the TD-TCS corresponding to the TD. In one implementation, the execution state is encrypted with the first encryption key, and the TD-TCS is access-controlled against software access from at least one of the TDRM, a VMM, or other TDs executed by the processing device.

Subsequently, at block 530, the processing logic modifies the key ID state of the processing device from the first key ID to a second key ID corresponding to at least one of the TDRM or the VMM. Finally, at block 540, the processing logic loads the TDRM execution and control state and the exit information of the TDRM to cause the processing device to operate in the context of the TDRM.

FIG. 6 is a flow diagram of an example method 600 for performing a TD entry when providing isolation in a virtualized system using TDs, in accordance with one implementation. Method 600 can be performed by processing logic, which can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as operations performed by an MCU), firmware, or a combination thereof. In one implementation, method 600 is performed by processing device 112 of FIG. 1A or FIG. 1B. In another implementation, method 600 is performed by any of the processing devices described with respect to FIGS. 7-12. Alternatively, other components of computing system 100 (or software executing on processing device 112) may perform some or all of the operations of method 600.

Referring to FIG. 6, method 600 begins at block 610, where the processing device identifies a TD entry event while the processing logic is executing in the context of the TDRM. In one implementation, the processing logic executes the TDRM to manage the TD.

At block 620, in response to identifying the TD entry event, the processing logic loads the TDRM control state of the TDRM from the TD-RCS corresponding to the TDRM, using a first key ID corresponding to the first encryption key assigned to the TDRM. In one implementation, this state is encrypted with the first encryption key. Moreover, the TD-RCS is access-controlled against software access from at least one of the TD or other TDs executed by the processing device.

Subsequently, at block 630, the processing logic modifies the key ID state of the processing device from the first key ID to a second key ID corresponding to the second encryption key assigned to the TD. Finally, at block 640, the processing logic loads the supervisor execution state and the TD user execution state of the TD from the TD-TCS to cause the processing device to operate in the context of the TD. In one implementation, the TD-TCS is access-controlled against software access from at least one of the TDRM or other TDs executed by the processing device.
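The following C sketch lays out the ordering of the TD exit sequence of method 500 (blocks 520-540). The structure types and helper names are assumptions used only to make the sequence concrete; the actual transitions are performed by the processor.

    #include <stdint.h>

    typedef struct td_tcs td_tcs_t;  /* tenant state, Tables 6-9 */
    typedef struct td_rcs td_rcs_t;  /* TDRM state, Tables 1-3   */

    /* Stand-ins for microcode actions; not a real API. */
    extern void save_td_state(td_tcs_t *tcs, uint8_t key_id);
    extern void set_active_key_id(uint8_t key_id);
    extern void load_tdrm_state(const td_rcs_t *rcs);

    /* Sketch of the TD exit sequence of method 500. */
    void td_exit(td_tcs_t *tcs, const td_rcs_t *rcs,
                 uint8_t td_key_id, uint8_t tdrm_key_id)
    {
        save_td_state(tcs, td_key_id);  /* block 520: save supervisor and
                                           user state, encrypted with the
                                           TD's key                        */
        set_active_key_id(tdrm_key_id); /* block 530: switch key ID state
                                           to the TDRM/VMM key ID          */
        load_tdrm_state(rcs);           /* block 540: load TDRM execution,
                                           control state, and exit info    */
    }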
FIG. 7A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline of a processor monitoring performance of a processing device to provide isolation in a virtualized system using trusted domains, in accordance with at least one implementation of the disclosure. FIG. 7B is a block diagram illustrating out-of-order issue/execution logic, register renaming logic, and an in-order architecture core to be included in a processor in accordance with at least one implementation of the disclosure. The solid-lined boxes in FIG. 7A illustrate the in-order pipeline, while the dashed boxes illustrate the register renaming, out-of-order issue/execution pipeline. Similarly, the solid-lined boxes in FIG. 7B illustrate the in-order architecture logic, while the dashed boxes illustrate the register renaming and out-of-order issue/execution logic.

In FIG. 7A, processor pipeline 700 includes a fetch stage 702, a length decode stage 704, a decode stage 706, an allocation stage 708, a renaming stage 710, a scheduling (also known as dispatch or issue) stage 712, a register read/memory read stage 714, an execute stage 716, a write back/memory write stage 718, an exception handling stage 722, and a commit stage 724. In some implementations, the stages are provided in a different order, and different stages may be considered in-order and out-of-order.

In FIG. 7B, arrows denote a coupling between two or more units, and the direction of the arrows indicates the direction of data flow between those units. FIG. 7B shows a processor core (core) 790 including a front end unit 730 coupled to an execution engine unit 750, both of which are coupled to a memory unit 770.

The core 790 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 790 may be a special-purpose core, such as, for example, a network or communication core, a compression engine, a graphics core, or the like.

The front end unit 730 includes a branch prediction unit 732 coupled to an instruction cache unit 734, which is coupled to an instruction translation lookaside buffer (TLB) 736, which is coupled to an instruction fetch unit 738, which is coupled to a decode unit 740. The decode unit or decoder may decode instructions and generate as an output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decoder may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, lookup tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), and the like. The instruction cache unit 734 is further coupled to a level 2 (L2) cache unit 776 in the memory unit 770. The decode unit 740 is coupled to a rename/allocator unit 752 in the execution engine unit 750.

The execution engine unit 750 includes the rename/allocator unit 752 coupled to a retirement unit 754 and a set of one or more scheduler units 756. The scheduler unit(s) 756 represent any number of different schedulers, including reservation stations, a central instruction window, and the like. The scheduler unit(s) 756 are coupled to one or more physical register file units 758.
Each of the physical register file units 758 represents one or more physical register files, different ones of which store one or more different data types (such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc.), status (e.g., an instruction pointer that is the address of the next instruction to be executed), and so on. The physical register file units 758 are overlapped by the retirement unit 754 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using one or more reorder buffers and one or more retirement register files; using one or more future files, one or more history buffers, and one or more retirement register files; using register maps and a pool of registers; etc.).

Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective. The registers are not limited to any known particular type of circuit. Various different types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. The retirement unit 754 and the physical register file units 758 are coupled to one or more execution clusters 760. The execution cluster(s) 760 include a set of one or more execution units 762 and a set of one or more memory access units 764. The execution units 762 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and may operate on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).

While some implementations may include a number of execution units dedicated to specific functions or sets of functions, other implementations may include one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 756, physical register file unit(s) 758, and execution cluster(s) 760 are shown as being possibly plural because certain implementations create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit(s), and/or execution cluster; and in the case of a separate memory access pipeline, certain implementations in which only the execution cluster of this pipeline has the memory access unit(s) 764). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.
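A minimal sketch of the register renaming idea described above follows: a register alias table maps each architectural register onto a physical register, and every new producing instruction is given a fresh physical register from a free list, which removes false write-after-read and write-after-write dependences. The sketch is entirely illustrative; real renaming hardware is considerably more involved.

    #include <stdint.h>

    enum { NUM_ARCH_REGS = 16, NUM_PHYS_REGS = 128 };

    /* Hypothetical register alias table for illustration. */
    typedef struct {
        uint8_t alias[NUM_ARCH_REGS];     /* architectural -> physical */
        uint8_t free_list[NUM_PHYS_REGS];
        int     free_top;                 /* index one past last free  */
    } rename_table_t;

    /* Allocate a new physical register for an instruction's destination;
     * assumes a free physical register is available. */
    uint8_t rename_dest(rename_table_t *rt, uint8_t arch_reg)
    {
        uint8_t phys = rt->free_list[--rt->free_top];
        rt->alias[arch_reg] = phys;   /* later readers of arch_reg now
                                         source this physical register */
        return phys;
    }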
The set of memory access units 764 is coupled to the memory unit 770, which includes a data TLB unit 772 coupled to a data cache unit 774 that is coupled to a level 2 (L2) cache unit 776. In one exemplary implementation, the memory access units 764 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 772 in the memory unit 770. The L2 cache unit 776 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 700 of FIG. 7A as follows: 1) the instruction fetch unit 738 performs the fetch and length decode stages 702 and 704, respectively; 2) the decode unit 740 performs the decode stage 706; 3) the rename/allocator unit 752 performs the allocation stage 708 and the renaming stage 710; 4) the scheduler unit(s) 756 perform the scheduling stage 712; 5) the physical register file unit(s) 758 and the memory unit 770 perform the register read/memory read stage 714, and the execution cluster 760 performs the execute stage 716; 6) the memory unit 770 and the physical register file unit(s) 758 perform the write back/memory write stage 718; 7) various units may be involved in the exception handling stage 722; and 8) the retirement unit 754 and the physical register file unit(s) 758 perform the commit stage 724.

The core 790 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions), the MIPS instruction set of MIPS Technologies of Sunnyvale, California, and the ARM instruction set (with additional extensions such as NEON) of ARM Holdings of Sunnyvale, California).

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads) and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in Intel® Hyper-Threading Technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated implementation of the processor also includes separate instruction and data cache units 734/774 and a shared L2 cache unit 776, alternative implementations may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some implementations, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

FIG. 8 illustrates a block diagram of the micro-architecture of a processing device 800 that includes logic circuits for providing isolation in a virtualization system using trusted domains, in accordance with one implementation. In some implementations, an instruction may be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as data types such as single and double precision integer and floating point data types. In one implementation, the in-order front end 801 is the part of the processing device 800 that fetches instructions to be executed and prepares them to be used later in the processing device pipeline. Implementations providing isolation in a virtualized system using trusted domains can be implemented in processing device 800.

The front end 801 may include several units.
In one implementation, the instruction prefetcher 816 fetches instructions from memory and feeds them to an instruction decoder 818, which in turn decodes or interprets them. For example, in one implementation, the decoder decodes a received instruction into one or more operations called "microinstructions" or "micro-operations" (also called micro-ops or uops) that the machine can execute. In other implementations, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one implementation. In one implementation, the trace cache 830 takes decoded uops and assembles them into program-ordered sequences or traces in the uop queue 834 for execution. When the trace cache 830 encounters a complex instruction, the microcode ROM 832 provides the uops needed to complete the operation.

Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one implementation, if more than four micro-ops are needed to complete an instruction, the decoder 818 accesses the microcode ROM 832 to do the instruction. For one implementation, an instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 818. In another implementation, an instruction can be stored within the microcode ROM 832 should a number of micro-ops be needed to accomplish the operation. The trace cache 830 refers to an entry point programmable logic array (PLA) to determine a correct microinstruction pointer for reading the microcode sequences to complete one or more instructions from the microcode ROM 832, in accordance with one implementation. After the microcode ROM 832 finishes sequencing micro-ops for an instruction, the front end 801 of the machine resumes fetching micro-ops from the trace cache 830.

The out-of-order execution engine 803 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and reorder the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logical registers onto entries in a register file. The allocator also allocates an entry for each uop in one of two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: the memory scheduler, the fast scheduler 802, the slow/general floating point scheduler 804, and the simple floating point scheduler 806. The uop schedulers 802, 804, 806 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 802 of one implementation can schedule on each half of the main clock cycle, while the other schedulers can only schedule once per main processing device clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

Register files 808, 810 sit between the schedulers 802, 804, 806 and the execution units 812, 814, 816, 818, 810, 812, 814 in the execution block 811. There are separate register files 808, 810 for integer and floating point operations, respectively.
Each register file 808, 810 of one implementation also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register file to new dependent uops. The integer register file 808 and the floating point register file 810 are also capable of communicating data with each other. For one implementation, the integer register file 808 is split into two separate register files: one register file for the low-order 32 bits of data and a second register file for the high-order 32 bits of data. The floating point register file 810 of one implementation has 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.

The execution block 811 contains the execution units 812, 814, 816, 818, 810, 812, 814, where the instructions are actually executed. This section includes the register files 808, 810 that store the integer and floating point data operand values that the microinstructions need to execute. The processing device 800 of one implementation comprises a number of execution units: an address generation unit (AGU) 812, an AGU 814, a fast ALU 816, a fast ALU 818, a slow ALU 810, a floating point ALU 812, and a floating point move unit 814. For one implementation, the floating point execution blocks 812, 814 execute floating point, MMX, SIMD, SSE, or other operations. The floating point ALU 812 of one implementation includes a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. For implementations of the disclosure, instructions involving a floating point value may be handled by the floating point hardware.

In one implementation, the ALU operations go to the high-speed ALU execution units 816, 818. The fast ALUs 816, 818 of one implementation can execute fast operations with an effective latency of half a clock cycle. For one implementation, most complex integer operations go to the slow ALU 810, as the slow ALU 810 includes integer execution hardware for long-latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 812, 814. For one implementation, the integer ALUs 816, 818, 810 are described in the context of performing integer operations on 64-bit data operands. In alternative implementations, the ALUs 816, 818, 810 can be implemented to support a variety of data bit widths, including 16, 32, 128, 256, etc. Similarly, the floating point units 812, 814 can be implemented to support a range of operands having bits of various widths. For one implementation, the floating point units 812, 814 can operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.

In one implementation, the uop schedulers 802, 804, 806 dispatch dependent operations before the parent load has finished executing. Because uops are speculatively scheduled and executed in the processing device 800, the processing device 800 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations are replayed; the independent ones are allowed to complete.
The schedulers and replay mechanism of one implementation of a processing device are also designed to catch instruction sequences for text string comparison operations.

The processing device 800 also includes logic to provide isolation in a virtualization system using trusted domains in accordance with one implementation. In one implementation, the execution block 811 of the processing device 800 may include TDRM 180, MOT 160, TDCS 124, and TDTCS 128 to provide isolation in the virtualization system using trusted domains, in accordance with the description herein.

The term "registers" may refer to the on-board processing device storage locations that are used as part of instructions to identify operands. In other words, the registers may be those that are usable from the outside of the processing device (from a programmer's perspective). However, the registers of an implementation should not be limited in meaning to a particular type of circuit. Rather, a register of an implementation is capable of storing and providing data, and of performing the functions described herein. The registers described herein can be implemented by circuitry within a processing device using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one implementation, integer registers store 32-bit integer data. A register file of one implementation also contains eight multimedia SIMD registers for packed data.

For the discussions herein, the registers are understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as "mm" registers in some instances) in microprocessor devices enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one implementation, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one implementation, integer and floating point data are either contained in the same register file or in different register files. Furthermore, in one implementation, floating point and integer data may be stored in different registers or in the same registers.

Implementations may be implemented in many different system types. Referring now to FIG. 9, shown is a block diagram of a multi-processing device system 900 in accordance with an implementation. As shown in FIG. 9, the multi-processing device system 900 is a point-to-point interconnect system and includes a first processing device 970 and a second processing device 980 coupled via a point-to-point interconnect 950. As shown in FIG. 9, each of processing devices 970 and 980 may be a multicore processing device, including first and second processing device cores (not shown), although potentially many more cores may be present in the processing devices. In accordance with implementations of the disclosure, the processing devices each may include hybrid write mode logic.
Implementations providing isolation in a virtualized system using trusted domains can be implemented in the processing device 970, the processing device 980, or both.

While shown with two processing devices 970, 980, it is to be understood that the scope of the disclosure is not so limited. In other implementations, one or more additional processing devices may be present in a given processing device.

Processing devices 970 and 980 are shown including integrated memory controller units 972 and 982, respectively. Processing device 970 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 976 and 978; similarly, the second processing device 980 includes P-P interfaces 986 and 988. Processing devices 970, 980 may exchange information via a point-to-point (P-P) interface 950 using P-P interface circuits 978, 988. As shown in FIG. 9, IMCs 972 and 982 couple the processing devices to respective memories, namely a memory 932 and a memory 934, which may be portions of main memory locally attached to the respective processing devices.

Processing devices 970, 980 may each exchange information with a chipset 990 via individual P-P interfaces 952, 954 using point-to-point interface circuits 976, 994, 986, 998. Chipset 990 may also exchange information with a high-performance graphics circuit 938 via a high-performance graphics interface 939.

A shared cache (not shown) may be included in either processing device, or outside of both processing devices yet connected with the processing devices via a P-P interconnect, such that either or both processing devices' local cache information may be stored in the shared cache if a processing device is placed into a low power mode.

Chipset 990 may be coupled to a first bus 916 via an interface 996. In one implementation, the first bus 916 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the disclosure is not so limited.

As shown in FIG. 9, various I/O devices 914 may be coupled to the first bus 916, along with a bus bridge 918 that couples the first bus 916 to a second bus 920. In one implementation, the second bus 920 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 920 including, for example, a keyboard and/or mouse 922, communication devices 927, and a storage unit 928 such as a disk drive or other mass storage device, which may include instructions/code and data 930, in one implementation. Further, an audio I/O 924 may be coupled to the second bus 920. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 9, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 10, shown is a block diagram of a third system 1000 in accordance with an implementation of the disclosure. Like elements in FIGS. 9 and 10 bear like reference numerals, and certain aspects of FIG. 9 have been omitted from FIG. 10 in order to avoid obscuring other aspects of FIG. 10.

FIG. 10 illustrates that the processing devices 970, 980 may include integrated memory and I/O control logic ("CL") 972 and 982, respectively. For at least one implementation, the CL 972, 982 may include integrated memory controller units such as those described herein. In addition, the CL 972, 982 may also include I/O control logic. FIG. 10 illustrates that the memories 932, 934 are coupled to the CL 972, 982, and that I/O devices 1014 are also coupled to the control logic 972, 982.
Legacy I/O devices 1015 are coupled to the chipset 990. Implementations providing isolation in a virtualized system using trusted domains can be implemented in the processing device 970, the processing device 980, or both.

FIG. 11 is an example system on a chip (SoC) that may include one or more of the cores 1102. Other system designs and configurations known in the art for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processing devices, digital signal processing devices (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processing device and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 11, shown is a block diagram of a SoC 1100 in accordance with an implementation of the disclosure. Dashed-lined boxes are features found on more advanced SoCs. In FIG. 11, one or more interconnect units 1102 are coupled to: an application processing device 1110, which includes a set of one or more cores 1102A-N and one or more shared cache units 1106; a system agent unit 1112; one or more bus controller units 1116; one or more integrated memory controller units 1114; a set of one or more media processing devices 1120, which may include integrated graphics logic 1108, an image processing device 1124 for providing still and/or video camera functionality, an audio processing device 1126 for providing hardware audio acceleration, and a video processing device 1128 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1130; a direct memory access (DMA) unit 1132; and a display unit 1140 for coupling to one or more external displays. Implementations providing isolation in a virtualized system using trusted domains can be implemented in the SoC 1100.

Turning next to FIG. 12, an implementation of a SoC design in accordance with implementations of the disclosure is depicted. As an illustrative example, SoC 1200 is included in user equipment (UE). In one implementation, UE refers to any device to be used by an end user to communicate, such as a hand-held phone, a smartphone, a tablet, an ultra-thin notebook, a notebook with a broadband adapter, or any other similar communication device. A UE may connect to a base station or node, which can correspond in nature to a mobile station (MS) in a GSM network. Implementations providing isolation in a virtualized system using trusted domains can be implemented in the SoC 1200.

Here, the SoC 1200 includes two cores, 1206 and 1207. Similar to the discussion above, the cores 1206 and 1207 may conform to an instruction set architecture, such as a processing device having the Intel® Architecture Core™, an Advanced Micro Devices, Inc. (AMD) processing device, a MIPS-based processing device, or an ARM-based processing device design, or a customer thereof, as well as their licensees or adopters. The cores 1206 and 1207 are coupled to a cache control 1208 that is associated with a bus interface unit 1209 and an L2 cache 1210 to communicate with other parts of the system 1200.
The interconnect 1211 includes an on-chip interconnect, such as an IOSF, AMBA, or other interconnect discussed above, which can implement one or more aspects of the described disclosure.

The interconnect 1211 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1230 to interface with a SIM card, a boot ROM 1235 to hold boot code for execution by the cores 1206 and 1207 to initialize and boot the SoC 1200, an SDRAM controller 1240 to interface with external memory (e.g., DRAM 1260), a flash controller 1245 to interface with non-volatile memory (e.g., flash 1265), a peripheral control 1250 (e.g., a serial peripheral interface) to interface with peripherals, a video codec 1220 and video interface 1225 to display and receive input (e.g., touch-enabled input), a GPU 1215 to perform graphics-related computations, etc. Any of these interfaces may incorporate aspects of the implementations described herein.

In addition, the system illustrates peripherals for communication, such as a Bluetooth module 1270, a 3G modem 1275, a GPS 1280, and a Wi-Fi 1285. Note that, as stated above, a UE includes a radio for communication. As a result, these peripheral communication modules may not all be included. However, in a UE, some form of a radio for external communication should be included.

FIG. 13 illustrates a diagrammatic representation of a machine in the example form of a computing system 1300 within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In alternative implementations, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Implementations providing isolation in a virtualized system using trusted domains can be implemented in the computing system 1300.

The computing system 1300 includes a processing device 1302, a main memory 1304 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1306 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1318, which communicate with each other via a bus 1330.

The processing device 1302 represents one or more general-purpose processing devices such as a microprocessing device, a central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessing device, a reduced instruction set computer (RISC) microprocessing device, a very long instruction word (VLIW) microprocessing device, a processing device implementing other instruction sets, or a processing device implementing a combination of instruction sets.
The processing device 1302 may also be one or more special-purpose processing devices such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processing device (DSP), a network processing device, or the like. In one implementation, the processing device 1302 may include one or more processing device cores. The processing device 1302 is configured to execute the processing logic 1326 for performing the operations discussed herein. In one implementation, the processing device 1302 can be part of the computing system 100 of FIG. 1. Alternatively, the computing system 1300 can include other components as described herein. It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads) and may do so in a variety of ways, including time-sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetching and decoding and simultaneous multithreading thereafter, such as in Intel® Hyper-Threading Technology).

The computing system 1300 may further include a network interface device 1308 communicably coupled to a network 1320. The computing system 1300 also may include a video display unit 1310 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1312 (e.g., a keyboard), a cursor control device 1314 (e.g., a mouse), a signal generation device 1316 (e.g., a speaker), or other peripheral devices. Furthermore, the computing system 1300 may include a graphics processing unit 1322, a video processing unit 1328, and an audio processing unit 1332. In another implementation, the computing system 1300 may include a chipset (not shown), which refers to a group of integrated circuits, or chips, that are designed to work with the processing device 1302 and to control communications between the processing device 1302 and external devices. For example, the chipset may be a set of chips on a motherboard that links the processing device 1302 to very high-speed devices, such as the main memory 1304 and graphics controllers, as well as linking the processing device 1302 to lower-speed peripheral buses of peripherals, such as USB, PCI, or ISA buses.

The data storage device 1318 may include a computer-readable storage medium 1324 on which is stored software 1326 embodying any one or more of the methodologies of functions described herein. The software 1326 may also reside, completely or at least partially, within the main memory 1304 as instructions 1326 and/or within the processing device 1302 as processing logic 1326 during execution thereof by the computing system 1300; the main memory 1304 and the processing device 1302 also constitute computer-readable storage media.

The computer-readable storage medium 1324 may also be used to store instructions 1326 utilizing the processing device 1302, such as described with respect to FIG. 1, and/or a software library containing methods that call the above applications. While the computer-readable storage medium 1324 is shown in an example implementation to be a single medium, the term "computer-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
The term "computer readable storage medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying any set of instructions for execution by a machine and for causing a machine to perform an implementation. Accordingly, the term "computer readable storage medium" shall be taken to include, without limitation, solid state memory as well as optical and magnetic media.The following examples involve further implementation. Example 1 is a processing device for providing isolation in a virtualized system using a trusted domain. Further referring to Example 1, the processing device includes: a Memory Ownership Table (MOT), the MOT is accessed for software access; and a processing core, with further reference to Example 1, the processing core: performing Trusted Domain (TD) and management a trusted domain resource manager (TDRM) of the TD; maintaining a trusted domain control structure (TDCS) for managing global metadata of one or more of the TD or other TDs executed by the processing device One or more trusted domain thread control structures (TD-) that are accessed by the TDCS reference and for software access from at least one of the TDRM, virtual machine manager (VMM), or the other TD Maintaining an execution state of the TD in TCS); referring to the MOT to obtain at least one key identifier (ID) corresponding to an encryption key assigned to the TD, the key ID allowing the processing device Decrypting the memory page assigned to the TD in response to the processing device executing in the context of the TD, the memory page assigned to the TD being encrypted by the encryption key; MOT Corresponding to a guest physical address assigned to a host physical memory page of the TD, wherein a match between the guest physical address obtained from the MOT and the accessed guest physical address is allowed to be responsive to the processing device The processing device access performed in the context of the TD and assigned to the memory page of the TD.In Example 2, the subject matter of Example 1 can optionally include wherein the VMM includes a TDRM component to provide memory management for at least one of: the TD, the other TD, or one or via an Extended Page Table (EPT) Multiple virtual machines (VMs). In Example 3, the subject matter of any one of Examples 1-2 can optionally include wherein the TD-TCS references the TDCS, wherein the TDCS maintains one or more TDs of a logical processor corresponding to the TD a count of TCS, and wherein the TD-TCS stores a user execution state and a hypervisor execution state of the TD. In Example 4, the subject matter of any of Examples 1-3 can optionally include wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device.In Example 5, the subject matter of any of Examples 1-4 can optionally include wherein the MK-TME engine generates a plurality of encryption keys accessed via a key ID assigned to the TD for encryption And decrypting the memory page of the TD, and encrypting and decrypting a memory page corresponding to a persistent memory assigned to the TD, and wherein the MOT is via a key associated with each entry in the MOT The ID is used to track the plurality of key IDs. In Example 6, the subject matter of any of Examples 1-5 can optionally include the MOT in which the processing core references a host physical memory page accessed as part of a page traversal operation to access the EPT mapping Guest physical memory page. 
In Example 7, the subject matter of any one of Examples 1-6 can optionally include wherein the TD comprises at least one of an operating system (OS) to manage one or more applications or a VMM to manage one or more virtual machines (VMs), and wherein a TD entry operation transfers the operational context of the processing core from at least one of the VMM to the OS of the TD or from the TDRM to the VMM of the TD.

In Example 8, the subject matter of any one of Examples 1-7 can optionally include wherein the TDRM is not comprised in a trusted computing base (TCB) of the TD. In Example 9, the subject matter of any one of Examples 1-8 can optionally include wherein the TDCS comprises a signature structure that captures a cryptographic measurement of the TD, the cryptographic measurement being signed by a hardware root of trust of the processing device, and wherein the signature structure is provided for attestation to verify the cryptographic measurement.

In Example 10, the subject matter of any one of Examples 1-9 can optionally include wherein the processing core is further to maintain a measurement state of the TD in the TDCS, the TDCS being access-controlled against software access from at least one of the TDRM, the VMM, or the other TDs executed by the processing device. In Example 11, the subject matter of any one of Examples 1-10 can optionally include wherein the TDRM manages the TD and the other TDs. All optional features of the apparatus described above may also be implemented with respect to the methods or processes described herein.

Example 12 is a method for providing isolation in a virtualized system using trusted domains, comprising: identifying, by a processing device executing a trusted domain resource manager (TDRM) to manage a trusted domain (TD) executed on the processing device, a TD exit event; responsive to identifying the TD exit event, saving, using a first key identifier (ID) corresponding to a first encryption key assigned to the TD, a user execution state and a TD supervisor execution state of the TD to a trusted domain thread control structure (TD-TCS) corresponding to a logical processor assigned to the TD, the execution state being encrypted with the first encryption key, wherein the TD-TCS is access-controlled against software access from at least one of the TDRM, a virtual machine manager (VMM), or other TDs executed by the processing device; modifying a key ID state of the processing device from the first key ID to a second key ID corresponding to at least one of the TDRM or the VMM; and loading a TDRM execution and control state and exit information of the TDRM to cause the processing device to operate in the context of the TDRM.
operate in the context of the TD.

In Example 14, the subject matter of any one of Examples 12-13 can optionally include wherein the TDCS and the TD-TCS are confidentiality protected and access controlled via a memory ownership table (MOT) of the processing device, the MOT including a first entry for the TDCS that associates the first key ID with the TD, wherein the MOT enforces, using the first key ID, memory confidentiality for memory accesses to memory pages corresponding to the TD.

In Example 15, the subject matter of any one of Examples 12-14 can optionally include wherein the MOT is access controlled via a range register.

In Example 16, the subject matter of any one of Examples 12-15 can optionally include wherein the TDRM execution and control state is loaded from the TD-RCS structure, which is access controlled via the EPT and the MOT, wherein the MOT includes a second entry for the TD-RCS structure that associates the second key ID with a physical memory page containing the TD-RCS, and wherein the MOT, utilizing the second key ID, enforces memory confidentiality for memory accesses to memory pages corresponding to the TDRM.

In Example 17, the subject matter of any one of Examples 12-16 can optionally include wherein the VMM is a root VMM that includes the TDRM to manage one or more TDs, wherein the TD includes a non-root VMM to manage one or more virtual machines (VMs), and wherein the TD exit transfers an operational context of the processing core from the non-root VMM or the one or more VMs of the TD to the root VMM and the TDRM.

In Example 18, the subject matter of any one of Examples 12-17 can optionally include wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device, and wherein the MK-TME engine generates a plurality of encryption keys, assigned to the TD via key IDs, for encrypting ephemeral memory pages or persistent memory pages of the TD, and wherein the MOT tracks the plurality of key IDs, one key ID for each host physical page referenced in the MOT.

Example 19 is a system for providing isolation in a virtualized system using a trusted domain. In Example 19, the system includes: a memory device to store instructions; and a processing device operably coupled to the memory device.
With further reference to Example 19, the processing device executes the instructions to: execute a trusted domain resource manager (TDRM) to manage a trusted domain (TD), wherein the TDRM is not included in a trusted computing base (TCB) of the TD; maintain a hypervisor execution state and a user execution state of the TD in a trusted domain thread control structure (TD-TCS) that is access controlled against software access from at least one of the TDRM, a virtual machine manager (VMM), or other TDs executed by the processing device; reference the MOT to obtain at least one encryption key identifier (ID) corresponding to an encryption key assigned to the TD, the key ID allowing the processing device to decrypt a memory page assigned to the TD in response to the processing device executing in the context of the TD, the memory page assigned to the TD being encrypted by the encryption key identified via the encryption key ID; and reference the MOT to obtain a guest physical address corresponding to a host physical memory page assigned to the TD, wherein a match between the guest physical address obtained from the MOT and the guest physical address of the access allows the processing device, while executing in the context of the TD, to access the memory page assigned to the TD.

In Example 20, the subject matter of Example 19 can optionally include wherein the VMM comprises a TDRM component to provide memory management, via the extended page table (EPT), for one or more of: the TD, the other TDs, or one or more virtual machines (VMs).

In Example 21, the subject matter of any one of Examples 19-20 can optionally include wherein the TD-TCS corresponds to a logical processor of the TD, the TD-TCS storing the hypervisor execution state and the user execution state of the TD on a TD exit operation and loading the user and hypervisor execution state of the TD on a TD entry operation, wherein the TD-TCS is access controlled against software access from at least one of the TDRM, the VMM, or the other TDs of the processing device.

In Example 22, the subject matter of any one of Examples 19-21 can optionally include wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device, and wherein the MK-TME engine generates a plurality of encryption keys, assigned to the TD via key IDs, for encrypting ephemeral memory pages or persistent memory pages of the TD, and wherein the MOT tracks the plurality of encryption key IDs via the key ID associated with each entry in the MOT.

In Example 23, the subject matter of any one of Examples 19-22 can optionally include wherein the VMM includes the TDRM to manage the TD, wherein the TD comprises an operating system (OS) or a non-root VMM to manage one or more virtual machines (VMs), and wherein a TD entry operation transfers an operational context of the processing core from the TDRM to the non-root VMM of the TD. All of the optional features of the system described above can also be implemented with respect to the methods and processes described herein.

Example 24 is a non-transitory machine readable storage medium for providing isolation in a virtualized system using a trusted domain.
In Example 24, the non-transitory machine readable storage medium includes data that, when accessed by a processing device, cause the processing device to perform operations comprising: identifying, by the processing device executing a trusted domain resource manager (TDRM) to manage a trusted domain (TD), a TD entry event while the processing device is executing in the context of the TDRM; in response to identifying the TD entry event, loading, utilizing a first key identifier (ID) corresponding to a first encryption key assigned to the TDRM, a TDRM control state of the TDRM from a trusted domain resource manager control structure (TD-RCS) corresponding to the TDRM, the TDRM control state being encrypted by the first encryption key, wherein the TD-RCS is access controlled against software access from at least one of the TD or other TDs executed by the processing device; modifying a key ID state of the processing device from the first key ID to a second key ID corresponding to a second encryption key assigned to the TD; and loading a hypervisor execution state and a user execution state of the TD from a trusted domain thread control structure (TD-TCS) to cause the processing device to operate in the context of the TD, wherein the TD-TCS is access controlled against software access from at least one of the TDRM or the other TDs executed by the processing device.

In Example 25, the subject matter of Example 24 can optionally include: performing a TD entry event in the context of the TDRM; loading, utilizing a second key identifier (ID) corresponding to a second encryption key assigned to the TDRM, TDRM execution controls specified by the TDRM from a trusted domain resource manager control structure (TD-RCS) corresponding to the logical processor assigned to the TD, the execution state being encrypted by the second encryption key, wherein the TD-RCS is access controlled, using an extended page table (EPT), against at least one of the TD or other VMs executed by the processing device; modifying the key ID state of the processing device from the second key ID to the first key ID corresponding to the TD; and loading the user execution state and the hypervisor execution state from the TD-TCS such that the processing device operates in the context of the TD.

In Example 26, the subject matter of any one of Examples 24-25 can optionally include wherein the TDCS and the TD-TCS are confidentiality protected and access controlled via a memory ownership table (MOT) of the processing device, the MOT including a first entry for the TDCS that associates the first key ID with the TD, wherein the MOT enforces, using the first key ID, memory confidentiality for memory accesses to memory pages corresponding to the TD.
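As a rough model of the key ID switching recited in Examples 12-13 and 24-26, the C sketch below treats the TD exit and TD entry transitions as explicit steps. The structures and helpers (save_state, load_state, set_active_key_id) are hypothetical stand-ins for processor behavior, not real instructions or an actual API.

```c
/*
 * Hypothetical model of the TD exit / TD entry transitions: state is
 * saved and restored under the TD's key ID, and the processor's active
 * key ID is switched across the transition. All sizes are illustrative.
 */
#include <stdint.h>
#include <string.h>

typedef struct {            /* per-logical-processor TD-TCS (illustrative) */
    uint8_t user_state[512];       /* user execution state */
    uint8_t supervisor_state[512]; /* hypervisor execution state */
} td_tcs_t;

typedef struct {            /* TD-RCS for the TDRM (illustrative) */
    uint8_t  tdrm_state[512];      /* TDRM execution and control state */
    uint64_t exit_info;            /* reason/qualification of the TD exit */
} td_rcs_t;

static uint16_t active_key_id;     /* key ID state of the processing device */

/* Stubs standing in for hardware that encrypts/decrypts with key_id. */
static void set_active_key_id(uint16_t key_id) { active_key_id = key_id; }
static void save_state(void *dst, size_t n, uint16_t key_id)
{ memset(dst, 0, n); (void)key_id; /* hw would encrypt with key_id */ }
static void load_state(const void *src, size_t n, uint16_t key_id)
{ (void)src; (void)n; (void)key_id; /* hw would decrypt with key_id */ }

/* TD exit: save TD state under the TD's key, then run under the TDRM's key. */
void td_exit(td_tcs_t *tcs, td_rcs_t *rcs, uint16_t td_key, uint16_t tdrm_key)
{
    save_state(tcs->user_state, sizeof tcs->user_state, td_key);
    save_state(tcs->supervisor_state, sizeof tcs->supervisor_state, td_key);
    set_active_key_id(tdrm_key);                     /* key ID state switch */
    load_state(rcs->tdrm_state, sizeof rcs->tdrm_state, tdrm_key);
}

/* TD entry: the inverse transition, returning to the TD's context. */
void td_enter(td_tcs_t *tcs, uint16_t td_key)
{
    set_active_key_id(td_key);
    load_state(tcs->supervisor_state, sizeof tcs->supervisor_state, td_key);
    load_state(tcs->user_state, sizeof tcs->user_state, td_key);
}
```

The ordering is the point of the sketch: TD state is only ever saved or loaded while the matching key ID is in effect, so neither the TDRM nor other TDs can observe that state in the clear.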
In Example 27, the subject matter of any one of Examples 24-26 can optionally include wherein the MOT is access controlled via a range register.

In Example 28, the subject matter of any one of Examples 24-27 can optionally include wherein the TDRM execution and control state is loaded from the TD-RCS structure, which is access controlled via the EPT and the MOT, wherein the MOT includes a second entry for the TD-RCS structure that associates the second key ID with a physical memory page containing the TD-RCS, and wherein the MOT, utilizing the second key ID, enforces memory confidentiality for memory accesses to memory pages corresponding to the TDRM.

In Example 29, the subject matter of any one of Examples 24-28 can optionally include wherein the VMM is a root VMM that includes the TDRM to manage one or more TDs, wherein the TD includes a non-root VMM to manage one or more virtual machines (VMs), and wherein the TD exit transfers an operational context of the processing core from the non-root VMM or the one or more VMs of the TD to the root VMM and the TDRM.

In Example 30, the subject matter of any one of Examples 24-29 can optionally include wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device, and wherein the MK-TME engine generates a plurality of encryption keys, assigned to the TD via key IDs, for encrypting ephemeral memory pages or persistent memory pages of the TD, and wherein the MOT tracks the plurality of encryption key IDs, one key ID for each host physical page referenced in the MOT.

Example 31 is an apparatus for providing isolation in a virtualized system using a trusted domain, comprising: means for executing, by a processing device, a trusted domain resource manager (TDRM) to manage a trusted domain (TD) executed by the processing device; means for maintaining a trusted domain control structure (TDCS) for managing global metadata of one or more of the TD or other TDs executed by the processing device; means for maintaining an execution state of the TD in one or more trusted domain thread control structures (TD-TCS) that are access controlled against software access from at least one of the TDRM, a virtual machine manager (VMM), or the other TDs; means for referencing the MOT to obtain at least one key identifier (ID) corresponding to an encryption key assigned to the TD, the key ID allowing the processing device to decrypt memory pages assigned to the TD in response to the processing device executing in the context of the TD, the memory pages assigned to the TD being encrypted by the encryption key; and means for referencing the MOT to obtain a guest physical address corresponding to a host physical memory page assigned to the TD, wherein a match between the guest physical address obtained from the MOT and the guest physical address of the access allows the processing device, while executing in the context of the TD, to access the memory page assigned to the TD.

In Example 32, the subject matter of Example 31 can optionally include the apparatus further configured to include the subject matter of any one of Examples 2-11.

Example 33 is a system for providing isolation in a virtualized system using a trusted domain, the system comprising a memory device storing instructions and a processing core operatively coupled to the memory device.
Further referring to Example 33, the processing core is to: execute a trusted domain resource manager (TDRM) to manage a trusted domain (TD) executing on the processing device; identify a TD exit event; in response to identifying the TD exit event, save a user execution state and a hypervisor execution state of the TD, utilizing a first key identifier (ID) corresponding to a first encryption key assigned to the TD, to a trusted domain thread control structure (TD-TCS) corresponding to a logical processor assigned to the TD, the execution state being encrypted by the first encryption key, wherein the TD-TCS is access controlled against software access from at least one of the TDRM, a virtual machine manager (VMM), or other TDs of the processing device; modify a key ID state of the processing device from the first key ID to a second key ID corresponding to at least one of the TDRM or the VMM; and load a TDRM execution and control state and exit information of the TDRM to cause the processing device to operate in the context of the TDRM.

In Example 34, the subject matter of Example 33 can optionally include the subject matter of any one of Examples 13-18.

Example 35 is an apparatus for providing isolation in a virtualized system using a trusted domain, comprising a memory and a processing device coupled to the memory, wherein the processing device is to perform the method of any one of Examples 12-18.

Example 36 is an apparatus for providing isolation in a virtualized system using a trusted domain, comprising means for performing the method of any one of Examples 12-18.

Example 37 is at least one machine readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method of any one of Examples 12-18. The details in the examples can be used anywhere in one or more implementations.

While the disclosure has been described with respect to a limited number of implementations, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations.

In the description herein, numerous specific details are set forth, such as examples of specific types of processing devices and system configurations, specific hardware structures, specific architectural and micro-architectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processing device pipeline stages and operations, etc., in order to provide a thorough understanding of the present disclosure.
However, it will be apparent to those skilled in the art that these specific details need not be employed to practice the present disclosure. In other instances, well-known components or methods, such as specific and alternative processing device architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic, and other specific operational details of computer systems, have not been described in detail in order to avoid unnecessarily obscuring the present disclosure.

The implementations are described with reference to providing isolation in a virtualized system using trusted domains in specific integrated circuits, such as in computing platforms or microprocessors. The implementations are also applicable to other types of integrated circuits and programmable logic devices. For example, the disclosed implementations are not limited to desktop computer systems or portable computers, such as Intel® Ultrabook™ computers. They may also be used in other devices, such as handheld devices, tablets, other thin notebooks, system-on-a-chip (SoC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processing device (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. The system can be any kind of computer or embedded system. The disclosed implementations may especially be used for low-end devices, like wearable devices (e.g., watches), electronic implants, sensory and control infrastructure devices, controllers, supervisory control and data acquisition (SCADA) systems, or the like. Moreover, the devices, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency. As will become readily apparent in the description below, the implementations of the methods, devices, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a "green technology" future balanced with performance considerations.

Although the implementations herein are described with reference to processing devices, other implementations are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of implementations of the present disclosure can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of the implementations of the present disclosure are applicable to any processing device or machine that performs data manipulations. However, the present disclosure is not limited to processing devices or machines that perform 512-bit, 256-bit, 128-bit, 64-bit, 32-bit, or 16-bit data operations and can be applied to any processing device and machine in which manipulation or management of data is performed. In addition, the description herein provides examples, and the accompanying drawings show various examples for the purposes of illustration.
However, these examples should not be construed in a limiting sense, as they are merely intended to provide examples of implementations of the present disclosure rather than an exhaustive list of all possible implementations of the present disclosure.

Although the examples below describe instruction handling and distribution in the context of execution units and logic circuits, other implementations of the present disclosure can be accomplished by way of data or instructions stored on a machine-readable, tangible medium, which, when performed by a machine, cause the machine to perform functions consistent with at least one implementation of the present disclosure. In one implementation, functions associated with implementations of the present disclosure are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processing device that is programmed with the instructions to perform the steps of the present disclosure. Implementations of the present disclosure may be provided as a computer program product or software, which may include a machine or computer-readable medium having stored thereon instructions, which may be used to program a computer (or other electronic devices) to perform one or more operations according to implementations of the present disclosure. Alternatively, operations of implementations of the present disclosure might be performed by specific hardware components that contain fixed-function logic for performing the operations, or by any combination of programmed computer components and fixed-function hardware components.

Instructions used to program logic to perform implementations of the present disclosure can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memory (CD-ROMs), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model.
Where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage, such as a disc, may be the machine-readable medium to store information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of implementations of the present disclosure.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, reference to a module, in one implementation, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another implementation, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another implementation, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often, module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one implementation, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

Use of the phrase "configured to," in one implementation, refers to arranging, putting together, manufacturing, offering for sale, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation its 1 or 0 output is to enable the clock.
Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "to," "capable of," and/or "operable to," in one implementation, refers to some apparatus, logic, hardware, and/or element designed in such a way as to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that such use, in one implementation, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner as to enable its use in the specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one implementation, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example, the decimal number ten may also be represented as the binary value 1010 and the hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one implementation, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.

The implementations of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium that are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory media that may receive information therefrom.
Reference throughout this specification to "one implementation" or "an implementation" means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the present disclosure. Thus, the appearances of the phrases "in one implementation" or "in an implementation" in various places throughout this specification are not necessarily all referring to the same implementation. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations.

In the foregoing specification, a detailed description has been given with reference to specific exemplary implementations. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of implementation, embodiment, and/or other exemplary language does not necessarily refer to the same implementation or the same example, but may refer to different and distinct implementations, as well as potentially the same implementation.

Some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. The blocks described herein can be hardware, software, firmware, or a combination thereof.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "defining," "receiving," "determining," "issuing," "linking," "associating," "obtaining," "authenticating," "prohibiting," "executing," "requesting," "communicating," or the like, refer to the actions and processes of a computing system, or similar electronic computing device, that manipulates and transforms data represented as physical (e.g., electronic) quantities within the computing system's registers and memories into other data similarly represented as physical quantities within the computing system memories or registers or other such information storage, transmission, or display devices.

The word "example" or "exemplary" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise or clear from context, "X includes A or B" is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then "X includes A or B" is satisfied under any of the foregoing instances. In addition, the articles "a," "an," and "the" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term "an embodiment" or "one embodiment" or "an implementation" or "one implementation" throughout is not intended to mean the same embodiment or implementation unless described as such. Also, the terms "first," "second," "third," "fourth," etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.

The present disclosure also provides the following technical solutions:

Technical Solution 1.
A processing device, comprising:

a memory ownership table (MOT) that is access controlled against software access; and

a processing core, the processing core to:

execute a trusted domain resource manager (TDRM) to manage a trusted domain (TD);

maintain a trusted domain control structure (TDCS) for managing global metadata of one or more of the TD or other TDs executed by the processing device;

maintain an execution state of the TD in one or more trusted domain thread control structures (TD-TCS) that are referenced by the TDCS and access controlled against software access from at least one of the TDRM, a virtual machine manager (VMM), or the other TDs;

reference the MOT to obtain at least one key identifier (ID) corresponding to an encryption key assigned to the TD, the key ID allowing the processing device to decrypt memory pages assigned to the TD in response to the processing device executing in the context of the TD, the memory pages assigned to the TD being encrypted by the encryption key; and

reference the MOT to obtain a guest physical address corresponding to a host physical memory page assigned to the TD, wherein a match between the guest physical address obtained from the MOT and the guest physical address of the access allows the processing device, while executing in the context of the TD, to access the memory page assigned to the TD.

Technical Solution 2. The processing device of claim 1, wherein the VMM comprises a TDRM component to provide memory management, via an extended page table (EPT), for at least one of: the TD, the other TDs, or one or more virtual machines (VMs).

Technical Solution 3. The processing device of claim 1, wherein the TD-TCS references the TDCS, wherein the TDCS maintains a count of the one or more TD-TCSs corresponding to logical processors of the TD, and wherein the TD-TCS stores a user execution state and a hypervisor execution state of the TD.

Technical Solution 4. The processing device of claim 1, wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device.

Technical Solution 5. The processing device of claim 4, wherein the MK-TME engine generates a plurality of encryption keys, accessed via key IDs assigned to the TD, for encrypting and decrypting the memory pages of the TD and for encrypting and decrypting memory pages corresponding to a persistent memory assigned to the TD, and wherein the MOT tracks the plurality of key IDs via a key ID associated with each entry in the MOT.

Technical Solution 6.
The processing device of claim 2, wherein the processing core references the MOT for a host physical memory page accessed as part of a page walk operation to access a guest physical memory page mapped by the EPT.

Technical Solution 7. The processing device of claim 1, wherein the TD comprises at least one of: an operating system (OS) for managing one or more applications, or a VMM for managing one or more virtual machines (VMs), and wherein a TD entry operation transfers an operational context of the processing core from the TDRM to at least one of the OS of the TD or the VMM of the TD.

Technical Solution 8. The processing device of claim 1, wherein the TDRM is not included in a trusted computing base (TCB) of the TD.

Technical Solution 9. The processing device of claim 1, wherein the TDCS comprises a signature structure that captures a cryptographic measurement of the TD, the cryptographic measurement being signed by a hardware root of trust of the processing device, and wherein the signature structure is provided to an attestation party for verifying the cryptographic measurement.

Technical Solution 10. The processing device of claim 1, wherein the processing core further maintains a measurement state of the TD in the TDCS, the TDCS being access controlled against software access from at least one of the TDRM, the VMM, or the other TDs executed by the processing device.

Technical Solution 11. The processing device of claim 1, wherein the TDRM manages the TD and the other TDs.

Technical Solution 12. A method, comprising:

identifying, by a processing device executing a trusted domain resource manager (TDRM), a TD exit event for a trusted domain (TD) executing on the processing device;

in response to identifying the TD exit event, saving a user execution state and a hypervisor execution state of the TD, utilizing a first key identifier (ID) corresponding to a first encryption key assigned to the TD, to a trusted domain thread control structure (TD-TCS) corresponding to a logical processor assigned to the TD, the execution state being encrypted by the first encryption key, wherein the TD-TCS is access controlled against software access from at least one of the TDRM, a virtual machine manager (VMM), or other TDs executed by the processing device;

modifying a key ID state of the processing device from the first key ID to a second key ID corresponding to at least one of the TDRM or the VMM; and

loading a TDRM execution and control state and exit information of the TDRM to cause the processing device to operate in the context of the TDRM.

Technical Solution 13.
The method of claim 12, further comprising:

performing a TD entry event in the context of the TDRM;

loading, utilizing a second key identifier (ID) corresponding to a second encryption key assigned to the TDRM, TDRM execution controls specified by the TDRM from a trusted domain resource manager control structure (TD-RCS) corresponding to the logical processor assigned to the TD, the execution state being encrypted by the second encryption key, wherein the TD-RCS is access controlled, using an extended page table (EPT), against at least one of the TD or other VMs executed by the processing device;

modifying the key ID state of the processing device from the second key ID to the first key ID corresponding to the TD; and

loading the user execution state and the hypervisor execution state from the TD-TCS to cause the processing device to operate in the context of the TD.

Technical Solution 14. The method of claim 13, wherein the TDCS and the TD-TCS are confidentiality protected and access controlled via a memory ownership table (MOT) of the processing device, the MOT including a first entry for the TDCS that associates the first key ID with the TD, wherein the MOT enforces, using the first key ID, memory confidentiality for memory accesses to memory pages corresponding to the TD.

Technical Solution 15. The method of claim 12, wherein the MOT is access controlled via a range register.

Technical Solution 16. The method of claim 14, wherein the TDRM execution and control state is loaded from the TD-RCS structure, which is access controlled via the EPT and the MOT, wherein the MOT includes a second entry for the TD-RCS structure that associates the second key ID with a physical memory page containing the TD-RCS, and wherein the MOT, utilizing the second key ID, enforces memory confidentiality for memory accesses to memory pages corresponding to the TDRM.

Technical Solution 17. The method of claim 12, wherein the VMM is a root VMM that includes the TDRM to manage one or more TDs, wherein the TD includes a non-root VMM to manage one or more virtual machines (VMs), and wherein the TD exit transfers an operational context of the processing core from the non-root VMM or the one or more VMs of the TD to the root VMM and the TDRM.

Technical Solution 18. The method of claim 12, wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device, and wherein the MK-TME engine generates a plurality of encryption keys, assigned to the TD via key IDs, for encrypting ephemeral memory pages or persistent memory pages of the TD, and wherein the MOT tracks the plurality of encryption key IDs, one key ID for each host physical page referenced in the MOT.

Technical Solution 19.
A system, comprising:

a memory device to store instructions; and

a processing device operably coupled to the memory device, the processing device to execute the instructions to:

execute a trusted domain resource manager (TDRM) to manage a trusted domain (TD), wherein the TDRM is not included in a trusted computing base (TCB) of the TD;

maintain a hypervisor execution state and a user execution state of the TD in a trusted domain thread control structure (TD-TCS) that is access controlled against software access from at least one of the TDRM, a virtual machine manager (VMM), or other TDs executed by the processing device;

reference the MOT to obtain at least one encryption key identifier (ID) corresponding to an encryption key assigned to the TD, the key ID allowing the processing device to decrypt memory pages assigned to the TD in response to the processing device executing in the context of the TD, the memory pages assigned to the TD being encrypted by the encryption key identified via the encryption key ID; and

reference the MOT to obtain a guest physical address corresponding to a host physical memory page assigned to the TD, wherein a match between the guest physical address obtained from the MOT and the guest physical address of the access allows the processing device, while executing in the context of the TD, to access the memory page assigned to the TD.

Technical Solution 20. The system of claim 19, wherein the VMM comprises a TDRM component to provide memory management, via an extended page table (EPT), for one or more of: the TD, the other TDs, or one or more virtual machines (VMs).

Technical Solution 21. The system of claim 19, wherein the TD-TCS corresponds to a logical processor of the TD, the TD-TCS storing the hypervisor execution state and the user execution state of the TD on a TD exit operation and loading the user and hypervisor execution state of the TD on a TD entry operation, wherein the TD-TCS is access controlled against software access from at least one of the TDRM, the VMM, or the other TDs of the processing device.

Technical Solution 22. The system of claim 19, wherein the encryption key is generated by a multi-key total memory encryption (MK-TME) engine of the processing device, and wherein the MK-TME engine generates a plurality of encryption keys, assigned to the TD via key IDs, for encrypting ephemeral memory pages or persistent memory pages of the TD, and wherein the MOT tracks the plurality of encryption key IDs via the key ID associated with each entry in the MOT.

Technical Solution 23. The system of claim 19, wherein the VMM includes the TDRM to manage the TD, wherein the TD comprises an operating system (OS) or a non-root VMM to manage one or more virtual machines (VMs), and wherein a TD entry operation transfers an operational context of the processing core from the TDRM to the non-root VMM of the TD.

Technical Solution 24.
A non-transitory machine readable storage medium comprising data that, when accessed by a processing device, cause the processing device to perform operations comprising:

identifying, by the processing device executing a trusted domain resource manager (TDRM) to manage a trusted domain (TD), a TD entry event while the processing device is executing in the context of the TDRM;

in response to identifying the TD entry event, loading, utilizing a first key identifier (ID) corresponding to a first encryption key assigned to the TDRM, a TDRM control state of the TDRM from a trusted domain resource manager control structure (TD-RCS) corresponding to the TDRM, the TDRM control state being encrypted by the first encryption key, wherein the TD-RCS is access controlled against software access from at least one of the TD or other TDs executed by the processing device;

modifying a key ID state of the processing device from the first key ID to a second key ID corresponding to a second encryption key assigned to the TD; and

loading a hypervisor execution state and a user execution state of the TD from a trusted domain thread control structure (TD-TCS) to cause the processing device to operate in the context of the TD, wherein the TD-TCS is access controlled against software access from at least one of the TDRM or the other TDs executed by the processing device.

Technical Solution 25. The non-transitory machine readable storage medium of claim 24, wherein the TDCS and the TD-TCS are access controlled via a memory ownership table (MOT) of the processing device, the MOT including a first entry for the TD-TCS that associates the first key ID with the TD, wherein the MOT, utilizing the first key ID, enforces memory access control for memory accesses to the memory pages of the TD.
In described examples, an electronic device includes a semiconductor structure (102) having a back end capacitor (122) and a back end thin film resistor (126). The semiconductor structure (102) includes a first dielectric layer (106), a bottom plate (134) of the capacitor (122) and a thin film resistor body (132). The bottom plate (134) and the resistor body (132) are laterally spaced apart portions of the same thin film layer (108). The bottom plate (134) further includes a conductive layer (110) overlying the thin film layer (108). A second dielectric layer (124) is disposed on the conductive layer (110) of the bottom plate (134) of the capacitor (122). A top plate (120) of the capacitor (122) is disposed on the second dielectric layer (124).
1. An electronic device comprising:

a semiconductor structure comprising: a first dielectric layer; a resistor body of a resistor, the resistor body including a first portion of a thin film layer on the first dielectric layer; a bottom plate of a capacitor, the bottom plate including a second portion of the thin film layer and a conductive layer covering the second portion of the thin film layer; a second dielectric layer disposed on the bottom plate of the capacitor; and a top plate of the capacitor on the second dielectric layer above the bottom plate;

wherein the bottom plate and the resistor body are laterally spaced apart, are disposed on the first dielectric layer, and are formed of the same thin film material.

2. The electronic device according to claim 1, wherein the thin film material of the bottom plate and the resistor body is a metallic material, and the material of the top plate of the capacitor is a metallic material.

3. The electronic device according to claim 1, wherein the first dielectric layer is deposited on a first metallization level of the semiconductor structure.

4. The electronic device according to claim 1, further comprising:

a third dielectric layer disposed over the top plate of the capacitor and over the resistor body; and

a second metallization level of the semiconductor structure disposed on the third dielectric layer.

5. The electronic device according to claim 4, further comprising a first conductive via extending through the third dielectric layer to the top plate of the capacitor.

6. The electronic device according to claim 5, further comprising a second conductive via extending through the third dielectric layer to the bottom plate of the capacitor.

7. The electronic device according to claim 6, further comprising a third conductive via extending through the third dielectric layer to the thin film resistor body.

8. The electronic device according to claim 7, wherein the first conductive via, the second conductive via, and the third conductive via are connected to the second metallization level.

9. The electronic device according to claim 8, further comprising a hard mask layer on the top plate of the capacitor and on the resistor body.

10. The electronic device according to claim 1, wherein the thin film layer for the bottom plate of the capacitor and for the thin film resistor body is SiCr, the conductive layer is TiN, the second dielectric layer is silicon nitride, and the top plate of the capacitor is TiN.

11. A method of manufacturing an electronic device, the method comprising:

depositing a first dielectric layer over a semiconductor substrate;

depositing a thin film layer for a bottom plate of a capacitor and a body of a resistor;

depositing a first conductive layer over the thin film layer;

depositing a second dielectric layer on the first conductive layer;

depositing a second conductive layer on the second dielectric layer;

patterning and etching to remove the second conductive layer and the second dielectric layer in a resistor region and to form a top plate and a capacitor dielectric of the capacitor;

removing the first conductive layer in the resistor region, leaving a portion of the first conductive layer in the capacitor; and

etching a portion of the thin film layer to form the resistor body and the bottom plate of the capacitor, wherein the bottom plate further comprises the portion of the first conductive layer;

wherein the bottom plate and
the resistor body are deposited on the first dielectric layer in a common process step, and wherein the bottom plate and the resistor body are laterally spaced apart portions of the same thin film layer.

12. The method according to claim 11, further comprising:

forming a first metallization level above the semiconductor substrate prior to depositing the first dielectric layer;

depositing a third dielectric layer on the top plate of the capacitor; and

forming a second metallization level over the third dielectric layer.

13. The method according to claim 11, wherein patterning and etching to remove the second conductive layer and the second dielectric layer comprises:

forming a first hard mask layer over the second conductive layer;

forming a first mask pattern over the first hard mask layer; and

etching the second conductive layer and the second dielectric layer using the first mask pattern.

14. The method according to claim 13, wherein removing the first conductive layer in the resistor region comprises wet etching the first conductive layer using the first hard mask layer to protect the capacitor.

15. The method according to claim 13, wherein etching the second conductive layer and the second dielectric layer comprises:

dry etching the first hard mask layer;

dry etching the second conductive layer, stopping on the second dielectric layer; and

dry etching the second dielectric layer, stopping on the first conductive layer.

16. The method according to claim 15, wherein etching the portion of the thin film layer to form the resistor body and the bottom plate of the capacitor comprises:

depositing a second hard mask layer over the thin film layer and the top plate of the capacitor;

forming a second mask pattern over the second hard mask layer;

etching the second hard mask layer, down to the thin film layer, using the second mask pattern;

removing the second mask pattern; and

etching the thin film layer using the etched second hard mask layer as a mask.
Integrated Thin Film Resistors and MIM Capacitors

Technical Field

The present invention relates to an electronic device comprising a semiconductor structure that includes a back-end thin film resistor and a back-end capacitor having a low series resistance, and to a method of manufacturing the electronic device.

Background

Back-end thin film capacitor structures according to the prior art "compete" with interconnect metallization wiring in the metallization layers of a semiconductor device. Patent Application Publication No. US 2007/0170546 A1 discloses a back-end thin film capacitor structure in which the thin film capacitor includes a top plate located in a metallization layer of a semiconductor device. However, the top plate of the capacitor consumes valuable floor space in the metallization wiring layer.

Conventional thin film capacitors, such as metal-insulator-metal (MIM) capacitors, consume area in the interconnect layer in which they are built. For example, the area occupied by the top plate or the bottom plate of the thin film capacitor cannot be used for conventional metallization wiring in the metallization layer. In general, the addition of thin film capacitors in a semiconductor structure increases the chip size or the number of interconnect levels.

Summary

In described examples, an electronic device includes a semiconductor structure comprising a back-end thin film resistor and a back-end capacitor having a low series resistance. The capacitor and the resistor are easily integrated in existing semiconductor processes, and the chip area of the capacitor and the resistor does not compete with the metallization wiring in the semiconductor device.

In at least one described example, an electronic device includes a semiconductor structure having a back-end capacitor and a back-end thin film resistor. The semiconductor structure includes a first dielectric layer, a bottom plate of the capacitor, and a thin film resistor body. The bottom plate and the resistor body are laterally spaced apart portions of the same layer, which are disposed on the first dielectric layer and are made of the same thin film material. The bottom plate further comprises a conductive layer covering the thin film material. In addition, a second dielectric layer is disposed on the conductive layer of the bottom plate of the capacitor. The top plate of the capacitor is disposed on the second dielectric layer in a region of the second dielectric layer that is defined by the lateral dimension of the bottom plate of the capacitor.

A method of manufacturing an electronic device includes sequentially depositing a thin film layer, a first conductive layer, a capacitor dielectric layer, and a second conductive layer. The second conductive layer and the capacitor dielectric layer in the area of the resistor are removed. The first conductive layer in the resistor region is also removed. The thin film layer is etched to laterally separate the resistor body portion of the thin film layer from the capacitor bottom plate portion of the thin film layer. The capacitor bottom plate comprises a portion of the thin film layer and the first conductive layer.

Description of the Drawings

FIGS. 1 to 8 schematically illustrate successive process steps involved in the manufacture of a thin film back-end capacitor and a thin film back-end resistor in an electronic device according to an example embodiment.

Detailed Description

Patent No.
US 8,803,287 B2 describes related subject matter and is incorporated herein by reference.

FIGS. 1 to 6 show an electronic device including a semiconductor structure having a thin film capacitor and a thin film resistor at various stages of manufacture. The back-end thin film capacitor and the back-end thin film resistor can be interconnected via a single-level interconnect metallization. Depending on the layer and the application, the thickness of the thin film layer is usually in the range of about … to about … . Depending on the desired sheet resistance, the sheet resistance of the thin film resistor layer typically varies between about 1000 Ω/□ (e.g., SiCr) and about 50 Ω/□ (e.g., NiCr). Depending on the material and the desired specific capacitance, the dielectric film of the capacitor is typically thinner than … . Depending on the specific resistance of the material, the desired series resistance, and/or process limitations, the thickness of the top and bottom metal electrodes can vary between hundreds and thousands of … .

In this context, the term "back end" describes the integration of components, including thin film capacitors and thin film resistors, on a partially fabricated integrated circuit structure in which transistors and polysilicon structures have previously been formed. Whereas the so-called "front end" processes typically include process steps performed at process temperatures in the range of 600 °C to 700 °C, the "back end" processes typically include process steps performed at lower temperatures (roughly 450 °C and below).

Deposition in a region defined by another structure (in this case, in the region defined by the lateral dimension of the bottom plate of the capacitor) means that the lateral dimension of the deposited structure is equal to or less than the lateral dimension of the underlying structure. Thus, in plan view, the area of the top plate of the capacitor is equal to or smaller than the area of the bottom plate of the capacitor. In addition, the deposition of a first layer on top of a second layer may be understood as deposition directly on top of the respective layer.

In FIG. 1, the starting semiconductor substrate 102 (e.g., a silicon substrate) may include various active and passive devices (not shown) that have already been formed in various regions of the semiconductor substrate 102, such as bipolar transistors and/or MOS transistors. A standard metallization and wiring level 104 is provided on the semiconductor substrate 102. The wiring traces are covered by a first intermetal dielectric layer 106. After deposition, the first intermetal dielectric layer 106 may be planarized according to conventional process steps in semiconductor fabrication.

Also in FIG. 1, a thin film layer 108 (e.g., silicon chromium (SiCr), SiCr:C, NiCr, or NiCrAl), a first conductive layer 110, and a second dielectric layer 112, such as a silicon nitride (Si3N4) layer or a silicon dioxide (SiO2) layer, are sequentially deposited on top of the first intermetal dielectric layer 106. For example, these layers may be directly adjacent to each other. When silicon chromium is used for the thin film layer 108, it may have a typical sheet resistance ranging from 30 Ω/□ to 2000 Ω/□. The thin film layer 108 is deposited on the upper surface of the first intermetal dielectric layer 106. The first conductive layer 110 is deposited on the thin film layer 108.
The first conductive layer 110 has a lower sheet resistance than the thin film layer 108 and serves to reduce the series resistance of the subsequently formed capacitor bottom plate. For example, the first conductive layer 110 may include TiN of 10-20 Ω/sq. Other conductive materials, such as aluminum, may alternatively be used. The second dielectric layer 112 (Si3N4) is deposited on top of the first conductive layer 110. In one example, the thin film layer 108 is formed directly on the first dielectric layer 106, the first conductive layer 110 is formed directly on the thin film layer 108, and the second dielectric layer 112 is formed directly on the first conductive layer 110. In Figure 2, a second conductive layer 114 for forming the top plate of the capacitor is deposited over the second dielectric layer 112, and a hard mask layer 116 is deposited over the second conductive layer 114. The second conductive layer 114 includes a conductive material such as TiN or TiW. The hard mask layer 116 may include an oxide, a nitride, or an oxynitride. The mask pattern 118 is deposited over the second conductive layer 114. The mask pattern 118 covers the area designated for the back-end capacitor and exposes the area designated for the back-end thin film resistor. Although the first and second conductive layers 110 and 114 may include metals, they are not part of a conventional metallization layer. Instead, they are formed between the metallization levels. In Figure 3, a first patterning and etch step has been performed to provide: (a) the top plate 120 of the thin film capacitor 122 in the second conductive layer 114; and (b) the patterned second dielectric layer 112. The first patterning and etching includes a hard mask etch (e.g., a dry etch using an etch chemistry such as CxFy/O2), a top plate etch (e.g., an etch of the TiN layer 114 using, for example, BCl3/Cl2/N2, stopping on the second dielectric layer 112), and an etch of the second dielectric layer 112 (e.g., a dry etch using an etch chemistry such as CxFy/O2, stopping on the TiN layer 110). Thus, the hard mask layer 116, the second conductive layer 114, and the second dielectric layer 112 are removed from the region designated for the thin film resistor. Advantageously, the second dielectric layer 112 provides an etch stop during this standard patterning and etch step, which may be performed using conventional photoresist deposition, etch, and cleaning steps according to conventional semiconductor techniques. Referring to Figure 4, the mask pattern 118 is removed by, for example, an ashing process. With the remainder of the hard mask layer 116 protecting the capacitor 122, a wet etch is then performed to remove the first conductive layer 110 from the region designated for the thin film resistor 126, leaving only the thin film layer 108 in the thin film resistor region. The wet etch may result in some undercutting of the top plate 120 and the first conductive layer 110 in the capacitor 122. Referring to Figure 5, the hard mask layer 128 is deposited. The hard mask 128 may also include an oxide, a nitride, or an oxynitride. Referring to Figure 6, the mask pattern 130 is formed.
The mask pattern 130 covers the capacitor 122 and the thin film resistor 126 and exposes the area between the capacitor 122 and the thin film resistor 126. The exposed portion of the hard mask 128 is removed by etching, where the etch stops on the thin film layer 108. The mask pattern 130 is then removed (e.g., by ashing). A standard ash cleaning process can also be performed. The thin film layer 108 and the conductive layer 110 are used to fabricate the bottom plate of the thin film capacitor; the thin film layer 108 alone is used to form the body of the thin film resistor. To provide a bottom plate of the thin film capacitor that is laterally spaced apart from the body of the thin film resistor, the hard mask 128 is used as the pattern for etching the thin film layer 108. In Figure 7, the etching of the thin film layer 108 has been performed to provide the laterally spaced-apart thin film resistor body 132 of the resistor 126 (a first portion of the thin film layer 108) and the bottom plate 134 of the thin film back-end capacitor 122 (a second portion of the same thin film layer). The bottom plate 134 includes the thin film layer 108 and the first conductive layer 110. The inclusion of the first conductive layer 110 in the bottom plate 134 provides the advantage of a lower series resistance. In Figure 8, the second intermetal dielectric layer 140, which is the third dielectric layer, is deposited on top of the structure of Figure 7. The second intermetal dielectric layer 140 may be subjected to further process steps such as planarization. The second intermetal dielectric layer 140 provides a basis for further metallization levels that can be used for the wiring of traces in the semiconductor structure. Also in Figure 8, a further metallization layer 142 is deposited on top of the second intermetal dielectric layer 140. Vertical conductive vias 144-150 are formed to electrically couple the thin film resistor body 132 (via 144), the metallization layer 104 (via 146), the bottom plate 134 of the capacitor 122 (via 148), and the top plate 120 (via 150) to the second metallization level 142. Although the thin film resistor 126 and the capacitor 122 may include metal layers, they are not part of a conventional metallization layer. Instead, as shown in Figure 8, they are formed between the metallization levels 104 and 142. The metallization level 104 is level MN (e.g., M2), and the metallization level 142 is level MN+1 (e.g., M3). The electronic device 160 of Figure 8 may include further active and passive components, which are not shown (for simplicity of illustration). In the described embodiments, modifications are possible, and other embodiments are possible within the scope of the claims.
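The sheet-resistance figures above map onto component values through two standard relations: R = Rs·(L/W) for a film resistor and C = ε0·εr·A/d for a parallel-plate MIM capacitor, and the benefit of including the TiN layer in the bottom plate follows from sheet conductances adding in parallel. The following is a minimal Python sketch of that arithmetic; every dimension and permittivity in it is a hypothetical illustration, not a value taken from the description above.

# Minimal sketch (Python): standard relations behind the sheet-resistance and
# capacitance figures quoted above. All dimensions below are hypothetical
# illustrations, not values from the patent text.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def film_resistance(sheet_res_ohm_sq: float, length_um: float, width_um: float) -> float:
    """Resistance of a thin film resistor: R = Rs * (L / W)."""
    return sheet_res_ohm_sq * (length_um / width_um)

def mim_capacitance(eps_r: float, area_um2: float, thickness_nm: float) -> float:
    """Parallel-plate MIM capacitance: C = eps0 * eps_r * A / d, in farads."""
    return EPS0 * eps_r * (area_um2 * 1e-12) / (thickness_nm * 1e-9)

# A 2000 Ohm/sq SiCr film patterned 50 squares long gives a 100 kOhm resistor.
print(film_resistance(2000, length_um=100, width_um=2))  # 100000.0

# A hypothetical 100 um x 100 um Si3N4 MIM capacitor (eps_r ~ 7, 50 nm
# dielectric) comes out near 12.4 pF.
print(mim_capacitance(7.0, area_um2=100 * 100, thickness_nm=50))

# Two sheets conducting in parallel: a low-resistance TiN layer (10-20 Ohm/sq)
# dominates the bottom-plate sheet resistance, which is why including it in
# the bottom plate 134 lowers the capacitor's series resistance.
def parallel_sheets(rs_a: float, rs_b: float) -> float:
    return (rs_a * rs_b) / (rs_a + rs_b)

print(parallel_sheets(1000, 15))  # ~14.8 Ohm/sq: close to the TiN value alone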
Embodiments of the invention provide a beam generator to produce an atomic beam that travels across a patterned surface of a reticle. The beam may interact with particles to prevent the particles from contaminating the reticle.
CLAIMS I claim: 1. A device, comprising: a beam generator to produce an atomic beam directed across a patterned surface of a reticle; and an ion trap to trap at least some of the beam after the beam travels across the reticle. 2. The device of claim 1, wherein the atomic beam is produced to interact with particles to prevent the particles from contaminating the patterned surface of the reticle. 3. The device of claim 1, wherein the beam generator produces at least one of an Argon ion beam or a Xenon ion beam. 4. The device of claim 1, wherein the beam generator is to produce an atomic beam with an energy in a range of about 1 keV to about 100 keV. 5. The device of claim 1, wherein the beam generator is to produce an atomic beam that is substantially parallel with the patterned surface of the reticle. 6. The device of claim 5, wherein the atomic beam travels across the patterned surface of the reticle along a path, at least a portion of the path being within a distance of about 10 centimeters or less from the patterned surface of the reticle. 7. The device of claim 1, wherein the atomic beam comprises charged ions. 8. The device of claim 1, wherein the atomic beam comprises neutral atoms. 9. The device of claim 8, further comprising a neutralizer to neutralize ions generated by the beam generator. 10. The device of claim 1, further comprising: a photolithography chamber including: a first volume to contain a piece of target material to be patterned by electromagnetic radiation reflected from the reticle, the first volume to be at a first pressure during patterning of the target material; and a second volume to contain the reticle, the second volume to be at a second pressure higher than the first pressure during patterning of the target material; and a vacuum line connected to the ion trap to remove particles from the photolithography chamber. 11. The device of claim 10, wherein the beam generator and the ion trap are within the photolithography chamber. 12. The device of claim 10, further comprising: a source of extreme ultraviolet radiation; source optics to receive the extreme ultraviolet radiation from the source and to direct the received extreme ultraviolet radiation to the patterned surface of the reticle; and imaging optics to receive extreme ultraviolet radiation reflected from the patterned surface of the reticle to a piece of target material, wherein the extreme ultraviolet radiation from the imaging optics interacts with at least a portion of the target material to pattern the target material. 13. The device of claim 10, wherein the second pressure is in a range from about 10 milliTorr to about 100 milliTorr. 14. The device of claim 1, further comprising an electrical system to provide a voltage differential between the beam generator and the ion trap. 15. A device, comprising: a photolithography chamber; a reticle holder in the photolithography chamber to hold a reticle with a patterned surface; a source to generate an atomic beam and direct the beam across at least a portion of the photolithography chamber; and a trap to remove atoms of the atomic beam from the photolithography chamber. 16. The device of claim 15, wherein the atoms of the atomic beam interact with particles in the photolithography chamber and cause the particles to travel in a direction toward the trap to prevent the particles from contaminating the patterned surface of the reticle. 17.
The device of claim 15, wherein the atoms of the atomic beam get within a distance of about 10 centimeters or less of the patterned surface of the reticle. 18. The device of claim 15, wherein the photolithography chamber is adapted to have a pressure adjacent the reticle in a range from about 10 milliTorr to about 100 milliTorr during use. 19. The device of claim 15, wherein the atoms of the atomic beam have an energy in a range of about 1 keV to about 100 keV. 20. The device of claim 15, wherein the atoms of the atomic beam comprise charged ions. 21. The device of claim 15, wherein the atoms of the atomic beam comprise neutral atoms. 22. A method, comprising: generating an atomic beam; directing the atomic beam across a patterned surface of a reticle disposed in a photolithography chamber; trapping the atomic beam and a plurality of particles; and removing the particles from the photolithography chamber. 23. The method of claim 22, wherein the atomic beam has an energy in a range of about 1 keV to about 100 keV. 24. The method of claim 22, further comprising reducing a pressure of a portion of the photolithography chamber adjacent the reticle to a range from about 10 milliTorr to about 100 milliTorr. 25. The method of claim 22, further comprising collimating the atomic beam. 26. The method of claim 22, wherein the atomic beam comprises at least one of charged ions or neutral atoms.
ATOMIC BEAM TO PROTECT A RETICLE BACKGROUND [0001] Lithography is used in the fabrication of semiconductor devices. In lithography, a light sensitive material called a "photoresist" coats a wafer substrate, such as a silicon substrate. The photoresist may be exposed to light reflected from or transmitted through a mask, called a "reticle", to reproduce a pattern from the reticle on the substrate. If the reticle is contaminated, such as by unwanted particles on the surface of the reticle, the pattern of light reflected from the reticle, and thus the pattern formed on the substrate, may not be the desired pattern. This may lead to failures of microelectronic or other devices formed on the substrate. Brief Description of the Drawings [0002] Figure 1 is a cross sectional schematic of a lithography apparatus according to one embodiment of the present invention. Figure 2a is a cross sectional schematic of a lithography apparatus that illustrates particles that may contaminate the patterned surface of the reticle. Figure 2b is a cross sectional schematic of a lithography apparatus that illustrates how the atomic beam may prevent particles from contaminating the reticle. Figure 3 is a cross sectional schematic that illustrates an alternative embodiment of the lithography apparatus. Figure 4 is a cross sectional schematic that illustrates another alternative embodiment of the lithography apparatus. DETAILED DESCRIPTION [0007] Figure 1 is a schematic diagram of a lithography apparatus 100 for patterning a piece of target material 120, such as a silicon substrate, through use of light reflected off a patterned surface of a reticle 114 according to one embodiment of the present invention. The lithography apparatus 100 may include a lithography chamber 102 in which the lithography may take place. In some embodiments, the lithography chamber 102 may be divided into three volumes, a first volume 103, a second volume 104, and a third volume 106. [0008] The first volume 103 may enclose a radiation source 108 and source optics 112, and thus may be referred to as a "source volume" or "source optics volume." The radiation source 108 may be capable of producing electromagnetic radiation 110 used with the reticle 114 to pattern the target material 120. In some embodiments, the radiation source 108 may produce extreme ultraviolet light (EUV), such as light with a wavelength less than about 15 nanometers and greater than that of x-rays (about 1.3 nanometers). The light may have a wavelength of about 13.5 nanometers in some embodiments. In other embodiments, the radiation source 108 may produce different types of radiation or light, with different wavelengths. The source optics 112 may include mirrors or other optical devices for directing the radiation 110 from the radiation source 108 to a patterned surface 113 of the reticle 114. [0009] The second volume 104 may enclose imaging optics 118, and thus may be referred to as an "imaging volume" or "imaging optics volume." The imaging optics 118 may receive radiation 110 reflected from the patterned surface 113 of the reticle 114 and direct the reflected radiation 110 to the target material 120. [0010] The second volume 104 may also enclose the target material 120. The target material 120 may be, for example, a silicon wafer with a coating of a photoresist material.
The photoresist material may react in response to the radiation 110 reflected from the reticle 114 to allow patterning of the material of the silicon wafer. Other materials besides a silicon wafer may also be used as the target material 120. [0011] A first separator 105a may separate the first volume 103 from the third volume 106, and a second separator 105b may separate the second volume 104 from the third volume 106 in some embodiments. In an embodiment, one or both of the separators 105a, 105b may include an opening (not shown) so that the first volume 103 and/or second volume 104 is not completely sealed off from the third volume 106. This opening may be useful, for example, when the radiation source 108 produces EUV light. EUV light is blocked by nearly all materials, but openings in the separators 105a, 105b may allow the EUV radiation to travel from the radiation source 108 in the first volume 103 to the patterned surface 113 of the reticle 114 in the third volume 106 and then to the target material 120 in the second volume 104. The opening(s) may be small enough that a pressure differential may be maintained between the first and third volumes 103, 106 and/or between the second and third volumes 104, 106. For example, the first and second volumes 103, 104 may be held at a near vacuum during operation of the lithography apparatus 100 while the third volume 106 may be held at a higher pressure, such as between about 10 and 100 milliTorr. In another embodiment, the separators 105a, 105b may completely seal the first and second volumes 103, 104 from the third volume 106. The radiation 110 produced by the radiation source 108 may have a different wavelength than EUV light and be able to pass through windows in the separators 105a, 105b to travel between the first, second, and third volumes 103, 104, 106. In still another embodiment, the entirety of both separators 105a, 105b may be made of a material transparent to the radiation 110 produced by the radiation source 108. The third volume 106 may enclose a reticle holder 116. The reticle holder 116 may hold the reticle 114 in a fixed or moveable position during use of the lithography apparatus 100, so that the pattern on the patterned surface 113 of the reticle 114 may be correctly transferred to the target material 120. The reticle holder 116 may move during use, causing the reticle 114 to also move, and allowing the radiation 110 to reflect off of the entire patterned surface 113 of the reticle 114. Various embodiments of reticle holders 116 may be used, such as a holder 116 that affixes the reticle 114 in place by electrostatic energy, a holder 116 that affixes the reticle 114 by mechanical devices, a holder 116 beneath the reticle 114 so that gravity keeps the reticle 114 in place, or other reticle holders 116. The third volume 106 may also enclose the reticle 114. The reticle 114 may be a reflective reticle 114 with a patterned surface 113 off of which the radiation 110 is reflected to pattern the target material 120 in some embodiments. In other embodiments, the reticle 114 may be a transmissive reticle 114, where radiation 110 passes through the reticle 114 to transfer the pattern from the patterned surface 113 to the target material 120. Any reticle 114 suitable for a lithography apparatus 100 may be used. The third volume 106 may also enclose a beam generator 122 and a beam trap 126.
During operation of the lithography apparatus 100, the beam generator 122 may generate an atomic beam 124 and direct the beam 124 across the patterned surface 113 of the reticle 114. In some embodiments, the atomic beam 124 may be a beam of charged ions or a beam of neutral atoms. The beam 124 may interact with particles within the third volume 106 to prevent the particles from contacting and contaminating the patterned surface 113 of the reticle 114. The beam 124 may cause the particles to travel along the direction of the beam 124. The beam 124 and particles with which the beam 124 has interacted may enter the beam trap 126. The beam trap 126 may trap the charged ions or neutral atoms of the beam 124 and the particles with which the beam 124 has interacted and prevent them from re-entering the third volume 106. A vacuum line 128 or another device may remove the charged ions or neutral atoms of the beam 124 and the particles from the trap 126 and the lithography chamber 102 to prevent them from contaminating the patterned surface 113 of the reticle 114. The lithography apparatus 100 may be different in different embodiments of the invention. For example, the lithography chamber 102 may not be divided into multiple volumes, or may be divided into more or fewer than three volumes. The various components of the lithography apparatus 100 may be arranged differently. For example, the radiation source 108 may be located in the third volume 106. Various components may be located outside of the lithography chamber 102, rather than enclosed by the chamber 102. For example, the beam generator 122 may be located outside the chamber 102 and direct the beam 124 into the chamber 102. Various other components may be added to the lithography apparatus 100, or the lithography apparatus 100 may lack some of the illustrated and described components in some embodiments. Figure 2a is a schematic diagram of a lithography apparatus 100 that illustrates particles 202 that may contaminate the patterned surface 113 of the reticle 114, according to one embodiment of the present invention. There may be many particles 202, such as dust, within the lithography chamber 102. These particles 202 may have a velocity that would result in the particle 202 landing on the patterned surface 113 of the reticle 114. For example, particle 202' of Figure 2a has a velocity 204 that may result in the particle 202' landing on the patterned surface 113. Should one or more particles 202 land on the patterned surface 113 of the reticle 114 and stay there, contaminating the reticle 114, the pattern from the reticle 114 may be incorrectly transferred to the target material 120. The radiation 110 actually reflected from a contaminated reticle 114 would be different from that reflected from an uncontaminated reticle 114. The particle 202 may prevent the target material 120 from being correctly patterned. Figure 2b is a schematic diagram of a lithography apparatus 100 that illustrates how the atomic beam 124 may prevent particles 202 from contaminating the reticle 114. The beam generator 122 may generate an atomic beam 124. The atomic beam 124 may be a charged ion beam or a neutral atomic beam in some embodiments. In some embodiments, the beam 124 may be an Argon beam, a Xenon beam, another non-reactive beam, or another type of beam. The beam 124 may be substantially collimated. In some embodiments, the beam 124 may travel along a path across the patterned surface 113 of the reticle 114.
This path may be substantially parallel to the patterned surface 113 in some embodiments. In other embodiments, the path may be at an angle to the patterned surface 113, with one portion of the beam 124 path closer to the surface 113 than another portion of the beam 124 path. In yet other embodiments, the beam 124 may follow different paths across all or part of the patterned surface 113 of the reticle 114. In some embodiments, all or some of the beam 124 path may be at a distance 208 of about ten centimeters or less from the patterned surface 113. In some embodiments, the pressure in the vicinity of the beam 124 and reticle 114, for example the pressure in the third volume 106 of the lithography apparatus 100 illustrated in Figure 1, may be higher than a near vacuum. In some embodiments, the pressure may be in a range from about 10 milliTorr to about 100 milliTorr, although other pressures may also be used. The atomic beam 124 may interact with the particles 202 to prevent the particles 202 from contaminating the reticle 114. The charged ions or neutral atoms of the beam 124 may interact with the particles 202 to cause the particles 202 to travel in the direction of the beam 124. The particles 202 may then enter the beam trap 126 and be removed from the chamber, so the particles 202 may be prevented from contaminating the reticle 114. For example, the illustrated particle 202' of Figure 2b may initially have a velocity 204 that would cause the particle 202' to land on and contaminate the reticle 114. The atoms or ions of the atomic beam 124 may interact with the particle 202' and impart momentum to the particle 202' to change the velocity of the particle 202' so that the particle 202' will travel at least partially in a direction 206 of the beam 124 and into the beam trap 126 rather than to the surface 113 of the reticle 114. For example, the beam 124 has imparted momentum in the direction 206 of the beam 124 to particle 202" of Figure 2b to change the velocity 204" of the particle 202" from a velocity that would result in the particle contaminating the reticle 114 to a velocity 204" that will result in the particle 202" entering the beam trap 126 and being removed from the lithography chamber 102. In some embodiments, the beam 124 may have an energy in a range from about 1 keV to about 100 keV, although in other embodiments the beam 124 may have different energies. In some embodiments, the energy of the beam 124 may be high enough to cause a desired amount of particles 202 to go into the beam trap 126 rather than contaminate the reticle 114 surface. [0019] Figure 3 is a schematic diagram that illustrates an alternative embodiment of the lithography apparatus 100. For simplicity and clarity, numerous components that may be included in the lithography apparatus 100 have been omitted from Figure 3. In the embodiment of the lithography apparatus 100 illustrated in Figure 3, an electrical system 302 creates a voltage differential between the beam generator 122 and the beam trap 126. This voltage differential may provide further force causing the atoms or ions of the atomic beam 124 to travel from the beam generator 122 into the beam trap 126. Figure 4 is a schematic diagram that illustrates another alternative embodiment of the lithography apparatus 100. For simplicity and clarity, numerous components that may be included in the lithography apparatus 100 have been omitted from Figure 4.
In the embodiment of the lithography apparatus 100 illustrated in Figure 4, a collimator 402 collimates the beam 124 generated by the beam generator 122. In some embodiments, the beam generator 122 generates a charged atomic beam 124. A neutralizer 404 removes the charge from the beam 124 so that the beam that passes across the patterned surface 113 of the reticle 114 is a neutral beam 124. Either or both of the collimator 402 and the neutralizer 404 may be included in some embodiments of the lithography apparatus 100. Either or both of the collimator 402 and the neutralizer 404 may be part of the beam generator 122 or another component. Alternatively, either or both of the collimator 402 and the neutralizer 404 may be a separate component of the lithography apparatus 100 in some embodiments. The foregoing description of the embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. This description and the claims that follow include terms, such as left, right, top, bottom, over, under, upper, lower, first, second, etc., that are used for descriptive purposes only and are not to be construed as limiting. The embodiments of a device or article described herein can be manufactured, used, or shipped in a number of positions and orientations. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above teaching. Persons skilled in the art will recognize various equivalent combinations and substitutions for various components shown in the Figures. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
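The description above specifies the beam only by species (Argon or Xenon) and kinetic energy (about 1 keV to about 100 keV) and relies on momentum transfer from beam atoms to redirect particles. As a rough, non-authoritative illustration of the momentum such atoms carry, the following Python sketch evaluates the non-relativistic relation p = sqrt(2mE) for those species and energies; the dust-grain mass and velocity at the end are hypothetical, not values from the disclosure.

# Minimal sketch (Python): back-of-the-envelope momentum for beam atoms at the
# 1-100 keV energies described above. The dust-grain figures are hypothetical
# illustrations, not values from the description.
import math

EV_TO_J = 1.602e-19
AMU_TO_KG = 1.661e-27

def atom_momentum(energy_kev: float, mass_amu: float) -> float:
    """Non-relativistic momentum p = sqrt(2 m E), in kg*m/s."""
    energy_j = energy_kev * 1e3 * EV_TO_J
    return math.sqrt(2 * mass_amu * AMU_TO_KG * energy_j)

# Argon (~40 amu) and Xenon (~131 amu) at the ends of the stated energy range.
for name, mass in (("Ar", 40.0), ("Xe", 131.0)):
    for e_kev in (1.0, 100.0):
        print(f"{name} at {e_kev:5.1f} keV: p = {atom_momentum(e_kev, mass):.2e} kg*m/s")

# A hypothetical 100 nm silica-like dust grain (~1e-18 kg) drifting at 1 m/s
# carries p ~ 1e-18 kg*m/s, so repeated collisions with keV-range beam atoms
# (each ~1e-21 to 1e-19 kg*m/s) can plausibly redirect it toward the trap,
# consistent with the Figure 2b discussion.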
A method of forming a conductive line construction comprises forming a structure comprising polysilicon-comprising material. Elemental titanium is directly against the polysilicon of the polysilicon-comprising material. Silicon nitride is directly against the elemental titanium. Elemental tungsten is directly against the silicon nitride. The structure is annealed to form a conductive line construction comprising the polysilicon-comprising material, titanium silicide directly against the polysilicon-comprising material, elemental tungsten, TiSixNy between the elemental tungsten and the titanium silicide, and one of (a) or (b), with (a) being that the TiSixNy is directly against the titanium silicide, and (b) being that titanium nitride is between the TiSixNy and the titanium silicide, with the TiSixNy being directly against the titanium nitride and the titanium nitride being directly against the titanium silicide. Structure independent of method is disclosed.
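The abstract enumerates one as-deposited stack and several post-anneal layer orderings. The following minimal Python sketch models those orderings as bottom-to-top lists so the (a)/(b) alternatives, with and without the residual silicon nitride variant from the claims and detailed description, can be compared at a glance; the list representation and the figure mapping (to Figs. 2 through 5 of the detailed description below) are illustrative assumptions, not part of the claimed method.

# Minimal sketch (Python): the before/after layer orderings described in the
# abstract, modeled as bottom-to-top lists. The representation is an
# illustrative assumption, not part of the claimed method.

AS_DEPOSITED = ["polysilicon", "Ti", "Si3N4", "W"]

def anneal(outcome: str, residual_sin: bool = False) -> list[str]:
    """Return the post-anneal stack for outcome 'a' or 'b'.

    outcome 'a': TiSixNy directly against the titanium silicide.
    outcome 'b': TiN interposed between the titanium silicide and TiSixNy.
    residual_sin: models the variant in which no more than 10 Angstroms of
    Si3N4 remains between the TiSixNy and the tungsten.
    """
    stack = ["polysilicon", "TiSix"]
    if outcome == "b":
        stack.append("TiN")
    stack.append("TiSixNy")
    if residual_sin:
        stack.append("Si3N4 (<= 10 A)")
    stack.append("W")
    return stack

print(anneal("a"))                      # Fig. 2-style result
print(anneal("b"))                      # Fig. 3-style result
print(anneal("a", residual_sin=True))   # Fig. 4-style result
print(anneal("b", residual_sin=True))   # Fig. 5-style result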
CLAIMS: 1. A method of forming a conductive line construction, comprising: forming a structure comprising polysilicon-comprising material, elemental titanium directly against the polysilicon of the polysilicon-comprising material, silicon nitride directly against the elemental titanium, and elemental tungsten directly against the silicon nitride; and annealing the structure to form a conductive line construction comprising: the polysilicon-comprising material; titanium silicide directly against the polysilicon-comprising material; elemental tungsten; TiSixNy between the elemental tungsten and the titanium silicide; and one of (a) or (b), where, (a): the TiSixNy is directly against the titanium silicide; (b): titanium nitride is between the TiSixNy and the titanium silicide, the TiSixNy being directly against the titanium nitride, the titanium nitride being directly against the titanium silicide. 2. The method of claim 1 comprising forming the silicon nitride to be amorphous. 3. The method of claim 1 comprising forming all of the silicon nitride at a temperature of no greater than 350°C. 4. The method of claim 1 comprising: forming the polysilicon-comprising material, the elemental titanium, the silicon nitride, and the elemental tungsten over a substrate; and forming each of the elemental titanium, the silicon nitride, and the elemental tungsten over the substrate in sub-atmospheric conditions; the substrate being kept at sub-atmospheric conditions at all times between forming all of the elemental titanium and forming all of the elemental tungsten. 5. The method of claim 4 comprising forming the silicon nitride by physical vapor deposition at a temperature of no greater than 350°C. 6. The method of claim 1 wherein the annealing comprises a temperature of at least 800°C. 7. The method of claim 1 comprising forming the elemental titanium to a thickness of 15 Angstroms to 30 Angstroms. 8. The method of claim 1 comprising forming the silicon nitride to a thickness of 25 Angstroms to 40 Angstroms and forming the TiSixNy to a thickness of 20 Angstroms to 70 Angstroms. 9. The method of claim 1 wherein the annealing forms the TiSixNy directly against the elemental tungsten. 10. The method of claim 1 wherein the annealing leaves the silicon nitride at a thickness of no more than 10 Angstroms between the TiSixNy and the elemental tungsten, the TiSixNy being directly against the silicon nitride of thickness of no more than 10 Angstroms, the elemental tungsten being directly against the silicon nitride of thickness of no more than 10 Angstroms. 11. The method of claim 1 being (a). 12. The method of claim 1 being (b). 13.
A method of forming a conductive line construction, comprising: forming a structure comprising polysilicon-comprising material, elemental metal directly against the polysilicon of the polysilicon-comprising material, elemental titanium directly against the elemental metal, silicon nitride directly against the elemental titanium, and elemental tungsten directly against the silicon nitride; and annealing the structure to form a conductive line construction comprising: the polysilicon-comprising material; metal silicide directly against the polysilicon-comprising material, the metal silicide comprising the elemental metal that reacts with the polysilicon of the polysilicon-comprising material to form said metal silicide; elemental tungsten; TiSixNy between the elemental tungsten and the metal silicide; and one of (a) or (b), where, (a): the TiSixNy is directly against the metal silicide; (b): titanium nitride is between the TiSixNy and the metal silicide, the TiSixNy being directly against the titanium nitride, the titanium nitride being directly against the metal silicide. 14. The method of claim 13 wherein the metal silicide comprises at least one of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. 15. The method of claim 14 wherein the metal silicide comprises at least two of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. 16. The method of claim 13 being (a). 17. The method of claim 13 being (b). 18. The method of claim 13 wherein the annealing forms the TiSixNy directly against the elemental tungsten. 19. The method of claim 13 wherein the annealing leaves the silicon nitride at a thickness of no more than 10 Angstroms between the TiSixNy and the elemental tungsten, the TiSixNy being directly against the silicon nitride of thickness of no more than 10 Angstroms, the elemental tungsten being directly against the silicon nitride of thickness of no more than 10 Angstroms. 20. A method comprising: forming a structure comprising polysilicon-comprising material, titanium-comprising material over the polysilicon-comprising material, silicon nitride-comprising material over the titanium-comprising material, and tungsten-comprising material over the silicon nitride-comprising material; and annealing the structure to cause at least a part of the silicon nitride to be converted into conductive material comprising titanium, silicon and nitrogen. 21. The method of claim 20 wherein the annealing the structure further causes titanium silicide material to be formed between the polysilicon-comprising material and the conductive material. 22. The method of claim 20 wherein the annealing the structure further causes titanium nitride-comprising material to be formed between the polysilicon-comprising material and the conductive material. 23. The method of claim 20 wherein the annealing the structure further causes a portion of the silicon nitride to remain between the conductive material and the tungsten-comprising material. 24. The method of claim 20 wherein the silicon nitride-comprising material is included in the structure as amorphous silicon nitride-comprising material. 25.
The method of claim 20 wherein the polysilicon-comprising material consists essentially of polysilicon, the titanium-comprising material consists essentially of elemental titanium, the silicon nitride-comprising material consists essentially of silicon nitride, and the tungsten-comprising material consists essentially of elemental tungsten. 26. A conductive line construction comprising: polysilicon-comprising material; a metal silicide directly against the polysilicon of the polysilicon-comprising material; elemental tungsten; TiSixNy between the elemental tungsten and the metal silicide; and one of (a) or (b), where, (a): the TiSixNy is directly against the metal silicide; (b): titanium nitride is between the TiSixNy and the metal silicide, the TiSixNy being directly against the titanium nitride, the titanium nitride being directly against the metal silicide. 27. The conductive line construction of claim 26 being (a). 28. The conductive line construction of claim 26 being (b). 29. The conductive line construction of claim 26 wherein the TiSixNy is directly against the elemental tungsten. 30. The conductive line construction of claim 26 comprising silicon nitride between the TiSixNy and the elemental tungsten, the silicon nitride being no thicker than 10 Angstroms, the elemental tungsten being directly against the silicon nitride, and the silicon nitride being directly against the TiSixNy. 31. The conductive line construction of claim 26 wherein the metal silicide comprises titanium silicide. 32. The conductive line construction of claim 26 wherein the metal silicide comprises at least one of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. 33. The conductive line construction of claim 32 wherein the metal silicide comprises at least two of titanium silicide, nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. 34. The conductive line construction of claim 32 wherein the metal silicide comprises at least two of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. 35. The conductive line construction of claim 26 wherein the TiSixNy has a thickness of 20 Angstroms to 70 Angstroms. 36. The conductive line construction of claim 26 wherein the metal silicide has a thickness of 30 Angstroms to 70 Angstroms. 37. A semiconductor device comprising: a memory array comprising at least one digit-line, at least one word-line, and at least one memory cell electrically coupled to the at least one digit-line and the at least one word-line; at least one peripheral transistor comprising a gate electrode and a pair of source/drain regions; the at least one digit-line comprising the gate electrode of the at least one peripheral transistor; and the gate electrode of the at least one peripheral transistor comprising polysilicon-comprising material, metal silicide-comprising material over the polysilicon-comprising material, composite material including titanium, silicon and nitrogen-comprising material, and tungsten-comprising material over the composite material. 38.
The device of claim 37 wherein the polysilicon-comprising material consists essentially of polysilicon; the metal silicide-comprising material consists essentially of metal silicide; the titanium, silicon and nitrogen-comprising material consists essentially of titanium, silicon and nitrogen; and the tungsten-comprising material consists essentially of elemental tungsten. 39. The device of claim 37 wherein the metal silicide-comprising material comprises titanium silicide. 40. The device of claim 39 wherein the metal silicide-comprising material consists essentially of titanium silicide. 41. The device of claim 37 wherein the gate electrode of the at least one peripheral transistor further comprises titanium nitride-comprising material between the metal silicide-comprising material and the tungsten-comprising material. 42. The device of claim 37 wherein the gate electrode of the at least one peripheral transistor further comprises silicon nitride-comprising material between the composite material and the tungsten-comprising material.
DESCRIPTION CONDUCTIVE LINE CONSTRUCTION, MEMORY CIRCUITRY, AND METHOD OF FORMING A CONDUCTIVE LINE CONSTRUCTION TECHNICAL FIELD Embodiments disclosed herein pertain to conductive line constructions, to memory circuitry, and to methods of forming a conductive line construction. BACKGROUND Memory is one type of integrated circuitry and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digitlines (which may also be referred to as bitlines, data lines, or sense lines) and access lines (which may also be referred to as wordlines). The digitlines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a digitline and an access line. Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time in the absence of power. Non-volatile memory is conventionally specified to be memory having a retention time of at least about 10 years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information. A capacitor is one type of electronic component that may be used in a memory cell. A capacitor has two electrical conductors separated by electrically insulating material. Energy as an electric field may be electrostatically stored within such material. Depending on composition of the insulator material, that stored field will be volatile or non-volatile. For example, a capacitor insulator material including only SiO2 will be volatile. One type of non-volatile capacitor is a ferroelectric capacitor, which has ferroelectric material as at least part of the insulating material. Ferroelectric materials are characterized by having two stable polarized states and thereby can comprise programmable material of a capacitor and/or memory cell. The polarization state of the ferroelectric material can be changed by application of suitable programming voltages and remains after removal of the programming voltage (at least for a time). Each polarization state has a different charge-stored capacitance from the other, which ideally can be used to write (i.e., store) and read a memory state without reversing the polarization state until such is desired to be reversed. Less desirably, in some memory having ferroelectric capacitors the act of reading the memory state can reverse the polarization. Accordingly, upon determining the polarization state, a re-write of the memory cell is conducted to put the memory cell into the pre-read state immediately after its determination. Regardless, a memory cell incorporating a ferroelectric capacitor ideally is non-volatile due to the bi-stable characteristics of the ferroelectric material that forms a part of the capacitor. Other programmable materials may be used as a capacitor insulator to render capacitors non-volatile. A field effect transistor is another type of electronic component that may be used in a memory cell.
These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region there-between. A conductive gate is adjacent the channel region and separated there-from by a thin gate insulator. Application of a suitable voltage to the gate allows current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example a reversibly programmable charge-storage region as part of the gate construction between the gate insulator and the conductive gate. Regardless, the gate insulator may be programmable, for example being ferroelectric. Digitlines and wordlines are conductive line constructions that may comprise multiple different conductive materials. One or more of the conductive materials may in part function as a diffusion barrier to preclude or at least restrict immediately adjacent materials from diffusing relative to one another. In some constructions, an outermost material comprises elemental tungsten or other metal. Ideally, such are deposited in a desired crystalline phase. Conductive line constructions are of course used in other integrated circuitry. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a diagrammatic cross-sectional view of a portion of a conductive line construction in process in accordance with an embodiment of the invention. Fig. 2 is a view of the Fig. 1 substrate at a processing step subsequent to that shown by Fig. 1. Fig. 3 is a view of the Fig. 1 substrate at a processing step subsequent to that shown by Fig. 1 and is an alternate to that shown by Fig. 2. Fig. 4 is a view of the Fig. 1 substrate at a processing step subsequent to that shown by Fig. 1 and is an alternate to those shown by Figs. 2 and 3. Fig. 5 is a view of the Fig. 1 substrate at a processing step subsequent to that shown by Fig. 1 and is an alternate to those shown by Figs. 2-4. Fig. 6 is a diagrammatic cross-sectional view of a portion of a conductive line construction in process in accordance with an embodiment of the invention. Fig. 7 is a view of the Fig. 6 substrate at a processing step subsequent to that shown by Fig. 6. Fig. 8 is a view of the Fig. 6 substrate at a processing step subsequent to that shown by Fig. 6 and is an alternate to that shown by Fig. 7. Fig. 9 is a view of the Fig. 6 substrate at a processing step subsequent to that shown by Fig. 6 and is an alternate to those shown by Figs. 7 and 8. Fig. 10 is a view of the Fig. 6 substrate at a processing step subsequent to that shown by Fig. 6 and is an alternate to those shown by Figs. 7-9. Fig. 11 is a view of memory circuitry in accordance with an embodiment of the invention. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Embodiments of the invention encompass methods of forming a conductive line construction as well as a conductive line construction independent of method of manufacture. Embodiments of the invention also encompass memory circuitry. First example embodiments are described with reference to Figs. 1-5. Referring to Fig. 1, a conductive line construction 10 in process in accordance with a method embodiment comprises a structure 12 having been fabricated relative to a base substrate 11. Substrate 11 may comprise any of conductive/conductor/conducting, semiconductive/semiconductor/semiconducting, and insulative/insulator/insulating (i.e., electrically herein) materials.
Various materials are above base substrate 11. Materials may be aside, elevationally inward, or elevationally outward of the Fig. 1-depicted materials. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within base substrate 11. Control and/or other peripheral circuitry for operating components within a memory or other array of electronic components may also be fabricated and may or may not be wholly or partially within an array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative one another. As used in this document, a "sub-array" may also be considered as an array. Example structure 12 is shown as a vertical stack of several materials that have been collectively patterned relative to substrate 11, for example to form a longitudinally-elongated horizontal line running into and out of the plane of the page upon which Fig. 1 lies. Alternately, and by way of example only, one or more of the materials of structure 12 may be patterned separately relative to one another. Further, the various materials of a conductive line construction may be laterally and/or diagonally adjacent one another, and the line construction may be other than horizontally oriented, such as vertically, diagonally, etc. oriented, including combinations thereof. Example structure 12 has been formed to comprise polysilicon-comprising material 14, elemental titanium 16 directly against polysilicon of polysilicon-comprising material 14, silicon nitride 18 directly against elemental titanium 16, and elemental tungsten 20 directly against silicon nitride 18. Materials 14, 16, 18, and 20 may be of any suitable thicknesses, with an example thickness for polysilicon-comprising material 14 being from 25 to 200 Angstroms, for elemental titanium 16 being from 15 to 30 Angstroms, for silicon nitride 18 being from 25 to 40 Angstroms, and for elemental tungsten 20 being from 50 to 500 Angstroms. In one embodiment, each of elemental titanium 16, silicon nitride 18, and elemental tungsten 20 is formed over substrate 11/14 in sub-atmospheric conditions (e.g., below 100 mTorr), with the substrate being kept at sub-atmospheric conditions at all times between forming all of elemental titanium 16 and forming all of elemental tungsten 20. Such may occur, for example, by physical vapor deposition of one or more of materials 14, 16, 18, and 20 in one or more chambers where the substrate is kept under vacuum and not exposed to atmospheric conditions in movement from one chamber to another. By such physical vapor deposition, silicon nitride 18 may be formed as amorphous silicon nitride. In some examples, materials 14, 16, 18, and 20 may be used as a gate electrode of a transistor, and that may be formed on gate dielectric material (not shown) that is above substrate 11. Referring to Fig. 2, structure 12 has been annealed to form conductive line construction 10 to comprise polysilicon-comprising material 14, titanium silicide 22 directly against polysilicon-comprising material 14, elemental tungsten 20, and TiSixNy 24 between elemental tungsten 20 and titanium silicide 22, with TiSixNy 24 being directly against titanium silicide 22, and in one embodiment with TiSixNy 24 being directly against elemental tungsten 20. In one embodiment, the annealing comprises a temperature of at least 800°C.
In one embodiment, silicon nitride 18 is formed to be amorphous, and in one embodiment the forming of all of silicon nitride 18 occurs at a temperature of no greater than 350°C. Such may facilitate precluding all of the silicon nitride from reacting with the elemental titanium to form titanium nitride and titanium silicide therefrom without forming any TiSixNy. Such may also facilitate growth of larger grains in elemental tungsten 20 and thereby reduced resistance/increased conductivity thereof. In one embodiment, the gate electrode of a transistor on gate dielectric material may be thus made by including polysilicon material 14, titanium silicide (TiSix) 22, TiSixNy 24, and elemental tungsten 20. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. Fig. 3 shows an example alternate embodiment wherein the annealing has formed a conductive line construction 10a. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "a" or with different numerals. The annealing has formed structure 12a to comprise titanium nitride 26 between TiSixNy 24 and titanium silicide 22, with TiSixNy 24 being directly against titanium nitride 26 and titanium nitride 26 being directly against titanium silicide 22. The overall collective thickness of materials 22, 26, and 24 in Fig. 3 may be of the same collective thickness as materials 22 and 24 in Fig. 2 (not shown). Further and regardless, any patterning that may occur to form conductive line construction 10/10a may occur before or after the annealing. In one embodiment, the gate electrode of a transistor on gate dielectric material may be thus made by including polysilicon material 14, titanium silicide (TiSix) 22, titanium nitride 26, TiSixNy 24, and elemental tungsten 20. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. An alternate example embodiment conductive line construction 10b resulting from the annealing is shown and described with reference to Fig. 4. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "b". Conductive line construction 10b of Fig. 4 is analogous to conductive line construction 10 of Fig. 2, however wherein the annealing leaves silicon nitride 18b at a thickness of no more than 10 Angstroms between TiSixNy 24 and elemental tungsten 20. TiSixNy 24 is directly against silicon nitride 18b, and elemental tungsten 20 is directly against silicon nitride 18b. Silicon nitride 18b is sufficiently thin (i.e., no greater than 10 Angstroms thick) that conductive line construction 10b is conductive from its top to bottom in spite of silicon nitride material 18b being intrinsically insulative. In one embodiment, the gate electrode of a transistor on gate dielectric material may be thus made by including polysilicon material 14, titanium silicide (TiSix) 22, TiSixNy 24, silicon nitride 18b, and elemental tungsten 20. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. Fig. 5 shows an alternate embodiment conductive line construction analogous to that of construction 10a of Fig. 3, however wherein the annealing leaves silicon nitride 18b at a thickness of no more than 10 Angstroms between TiSixNy 24 and elemental tungsten 20.
Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "c". In one embodiment, the gate electrode of a transistor on gate dielectric material may be thus made by including polysilicon material 14, titanium silicide (TiSix) 22, titanium nitride 26, TiSixNy 24, silicon nitride 18b, and elemental tungsten 20. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. Alternate example methods of forming a conductive line construction are next described with reference to Figs. 6-10. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "d" or with different numerals. Fig. 6 shows a conductive line construction 10d in fabrication, namely the forming of a structure 12d comprising polysilicon-comprising material 14, metal silicide 32 directly against polysilicon of polysilicon-comprising material 14, elemental titanium 16 directly against metal silicide MSix of material 32, silicon nitride 18 directly against elemental titanium 16, and elemental tungsten 20 directly against silicon nitride 18. An example metal in metal silicide MSix of material 32 is one or more of nickel, magnesium, platinum, tantalum, cobalt, tungsten, and molybdenum. Material 32 may be of any suitable thickness, with 15 Angstroms to 30 Angstroms being an example. Ideally, metal silicide MSix of material 32 is formed on the polysilicon-comprising material 14 before depositing elemental titanium 16, silicon nitride 18, and elemental tungsten 20. Referring to Fig. 7, structure 12d has been annealed to form conductive line construction 10d. A reaction between titanium 16 and the metal silicide 32 does not occur, whereas a reaction between titanium 16 and silicon nitride 18 does occur. Annealed structure 12d thus comprises elemental tungsten 20 and TiSixNy 24 between elemental tungsten 20 and metal silicide 32, with TiSixNy 24 being directly against metal silicide 32. In one embodiment, metal silicide 32 comprises at least one of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. In one embodiment, metal silicide 32 comprises at least two of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. In one embodiment and as shown, the annealing forms TiSixNy 24 directly against elemental tungsten 20. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. Fig. 8 shows an alternate example conductive line construction 10e wherein the annealing has formed titanium nitride 26 between TiSixNy 24 and metal silicide 32, with TiSixNy 24 being directly against titanium nitride 26 and titanium nitride 26 being directly against metal silicide 32. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "e". Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. An alternate example conductive line construction 10f is shown in Fig. 9. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "f". Fig.
9 shows the annealing as leaving silicon nitride 18b at a thickness of no more than 10 Angstroms between TiSixNy 24 and elemental tungsten 20. TiSixNy 24 is directly against silicon nitride 18b, and elemental tungsten 20 is directly against silicon nitride 18b. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. An alternate example conductive line construction 10g is shown in Fig. 10. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "g". Fig. 10 shows an alternate embodiment construction analogous to that of construction 10e of Fig. 8 and wherein the annealing leaves silicon nitride 18b at a thickness of no more than 10 Angstroms between TiSixNy 24 and elemental tungsten 20. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. Embodiments of the invention encompass structures and/or devices independent of method of manufacture. Nevertheless, such structures and/or devices may have any of the attributes as described herein in method embodiments. Likewise, the above-described method embodiments may incorporate and form any of the attributes described with respect to structure and/or device embodiments. In one embodiment, a conductive line construction (e.g., 10, 10a, 10b, 10c, 10d, 10e, 10f, 10g) comprises polysilicon-comprising material (e.g., 14), a metal silicide (e.g., 22 and/or 32) directly against the polysilicon of the polysilicon-comprising material, elemental tungsten (e.g., 20), and TiSixNy (e.g., 24) between the elemental tungsten and the metal silicide. The conductive line construction comprises one of (a) or (b), where (a): the TiSixNy is directly against the metal silicide (e.g., Figs. 2, 4, 7, 9) and (b): titanium nitride is between the TiSixNy and the metal silicide, with the TiSixNy being directly against the titanium nitride and the titanium nitride being directly against the metal silicide (e.g., Figs. 3, 5, 8, 10). In one embodiment, the metal silicide comprises titanium silicide. In one embodiment, the metal silicide comprises at least one of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. In one embodiment, the metal silicide comprises at least two of nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. In one embodiment, the metal silicide comprises at least two of titanium silicide, nickel silicide, magnesium silicide, platinum silicide, tantalum silicide, cobalt silicide, tungsten silicide, and molybdenum silicide. In one embodiment, the metal silicide has a thickness of 20 Angstroms to 70 Angstroms. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used. An embodiment of the invention comprises memory circuitry, for example circuitry 50 shown in Fig. 11. Such by way of example only shows base substrate 11 as comprising suitably and variously doped semiconductive material 54, such as monocrystalline silicon having example dielectric isolation regions 52 formed therein. Such memory circuitry comprises an array of memory cells (not shown) and a periphery circuit 60 configured to access the array of memory cells.
Periphery circuit 60 comprises a plurality of transistors including a transistor 62 that has a pair of source/drain regions 64 and a gate 66 comprising a digitline (or bitline) DL (e.g., running horizontally into and out of the plane upon which Fig. 11 lies). Transistor 62 comprises a channel region 70 between source/drain regions 64 and a gate insulator (e.g., gate dielectric material) 72 between gate 66 and channel region 70. Digitline DL (and thus gate 66) comprises structure as described above, namely a polysilicon-comprising material, a metal silicide directly against the polysilicon of the polysilicon-comprising material, elemental tungsten, TiSixNy between the elemental tungsten and the metal silicide, and at least one of (a) or (b), where (a): the TiSixNy is directly against the metal silicide and (b): titanium nitride is between the TiSixNy and the metal silicide, with the TiSixNy being directly against the titanium nitride and the titanium nitride being directly against the metal silicide. In one embodiment, only one of the wordline and the digitline comprises, collectively, the polysilicon-comprising material, metal silicide, elemental tungsten, TiSixNy, and one of (a) or (b), and in one embodiment each of the wordline and the digitline comprises, collectively, such polysilicon-comprising material, metal silicide, elemental tungsten, TiSixNy, and one of (a) or (b). Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.

The above processing(s) or construction(s) may be considered as being relative to an array of components formed as or within a single stack or single deck of such components above or as part of an underlying base substrate (albeit, the single stack/deck may have multiple tiers). Control and/or other peripheral circuitry for operating or accessing such components within an array may also be formed anywhere as part of the finished construction, and in some embodiments may be under the array (e.g., CMOS under-array). Regardless, one or more additional such stack(s)/deck(s) may be provided or fabricated above and/or below that shown in the figures or described above. Further, the array(s) of components may be the same or different relative one another in different stacks/decks. Intervening structure may be provided between immediately-vertically-adjacent stacks/decks (e.g., additional circuitry and/or dielectric layers). Also, different stacks/decks may be electrically coupled relative one another. The multiple stacks/decks may be fabricated separately and sequentially (e.g., one atop another), or two or more stacks/decks may be fabricated at essentially the same time.

The assemblies and structures discussed above may be used in integrated circuits/circuitry and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

In this document unless otherwise indicated, "elevational", "higher", "upper", "lower", "top", "atop", "bottom", "above", "below", "under", "beneath", "up", and "down" are generally with reference to the vertical direction.
"Horizontal" refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication, and vertical is a direction generally orthogonal thereto. Reference to "exactly horizontal" is the direction along the primary substrate surface (i.e., no degrees there-from) and may be relative to which the substrate is processed during fabrication. Further, "vertical" and "horizontal" as used herein are generally perpendicular directions relative one another and independent of orientation of the substrate in three-dimensional space. Additionally, "elevationally-extending" and "extend(ing) elevationally" refer to a direction that is angled away by at least 45° from exactly horizontal. Further, "extend(ing) elevationally", "elevationally-extending", "extend(ing) horizontally", "horizontally-extending" and the like with respect to a field effect transistor are with reference to orientation of the transistor's channel length along which current flows in operation between the source/drain regions. For bipolar junction transistors, "extend(ing) elevationally", "elevationally-extending", "extend(ing) horizontally", "horizontally-extending" and the like are with reference to orientation of the base length along which current flows in operation between the emitter and collector. In some embodiments, any component, feature, and/or region that extends elevationally extends vertically or within 10° of vertical.

Further, "directly above", "directly below", and "directly under" require at least some lateral overlap (i.e., horizontally) of two stated regions/materials/components relative one another. Also, use of "above" not preceded by "directly" only requires that some portion of the stated region/material/component that is above the other be elevationally outward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Analogously, use of "below" and "under" not preceded by "directly" only requires that some portion of the stated region/material/component that is below/under the other be elevationally inward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components).

Any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie. Where one or more example composition(s) is/are provided for any material, that material may comprise, consist essentially of, or consist of such one or more composition(s). Further, unless otherwise stated, each material may be formed using any suitable existing or future-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.

Additionally, "thickness" by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately-adjacent material of different composition or of an immediately-adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses.
If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, "different composition" only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different if such materials or regions are not homogenous. In this document, a material, region, or structure is "directly against" another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another. In contrast, "over", "on", "adjacent", "along", and "against" not preceded by "directly" encompass "directly against" as well as construction where intervening material(s), region(s), or structure(s) result(s) in no physical touching contact of the stated materials, regions, or structures relative one another.

Herein, regions-materials-components are "electrically coupled" relative one another if in normal operation electric current is capable of continuously flowing from one to the other and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions-materials-components. In contrast, when regions-materials-components are referred to as being "directly electrically coupled", no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions-materials-components.

The composition of any of the conductive/conductor/conducting materials herein may be metal material and/or conductively-doped semiconductive/semiconductor/semiconducting material. "Metal material" is any one or combination of an elemental metal, any mixture or alloy of two or more elemental metals, and any one or more conductive metal compound(s).

Herein, "selective" as to etch, etching, removing, removal, depositing, forming, and/or formation is such an act of one stated material relative to another stated material(s) so acted upon at a rate of at least 2:1 by volume. Further, selectively depositing, selectively growing, or selectively forming is depositing, growing, or forming one material relative to another stated material or materials at a rate of at least 2:1 by volume for at least the first 75 Angstroms of depositing, growing, or forming. Unless otherwise indicated, use of "or" herein encompasses either and both.

CONCLUSION

In some embodiments, a method of forming a conductive line construction comprises forming a structure comprising polysilicon-comprising material. Elemental titanium is directly against the polysilicon of the polysilicon-comprising material. Silicon nitride is directly against the elemental titanium. Elemental tungsten is directly against the silicon nitride.
The structure is annealed to form a conductive line construction comprising the polysilicon-comprising material, titanium silicide directly against the polysilicon-comprising material, elemental tungsten, TiSixNy between the elemental tungsten and the titanium silicide, and one of (a) or (b), with (a) being the TiSixNy is directly against the titanium silicide, and (b) being titanium nitride is between the TiSixNy and the titanium silicide, with the TiSixNy being directly against the titanium nitride and the titanium nitride being directly against the titanium silicide.

In some embodiments, a method of forming a conductive line construction comprises forming a structure comprising polysilicon-comprising material. Elemental metal is directly against the polysilicon of the polysilicon-comprising material. Elemental titanium is directly against the elemental metal. Silicon nitride is directly against the elemental titanium. Elemental tungsten is directly against the silicon nitride. The structure is annealed to form a conductive line construction comprising the polysilicon-comprising material, metal silicide directly against the polysilicon-comprising material, with the metal silicide comprising elemental metal that reacts with the polysilicon of the polysilicon-comprising material to form said metal silicide, elemental tungsten, TiSixNy between the elemental tungsten and the metal silicide, and one of (a) or (b), with (a) being the TiSixNy is directly against the metal silicide, and (b) being titanium nitride is between the TiSixNy and the metal silicide, with the TiSixNy being directly against the titanium nitride and the titanium nitride being directly against the metal silicide.

In some embodiments, a method comprises forming a structure comprising polysilicon-comprising material, titanium-comprising material over the polysilicon-comprising material, silicon nitride-comprising material over the titanium-comprising material, and tungsten-comprising material over the silicon nitride-comprising material. The structure is annealed to cause at least a part of the silicon nitride to be converted into conductive material comprising titanium, silicon and nitrogen. In one embodiment, the polysilicon-comprising material consists essentially of (or consists of) polysilicon, the titanium-comprising material consists essentially of (or consists of) elemental titanium, the silicon nitride-comprising material consists essentially of (or consists of) silicon nitride, and the tungsten-comprising material consists essentially of (or consists of) elemental tungsten.

In some embodiments, a conductive line construction comprises polysilicon-comprising material, a metal silicide directly against the polysilicon of the polysilicon-comprising material, elemental tungsten, TiSixNy between the elemental tungsten and the metal silicide, and one of (a) or (b), with (a) being the TiSixNy is directly against the metal silicide, and (b) being titanium nitride is between the TiSixNy and the metal silicide, with the TiSixNy being directly against the titanium nitride and the titanium nitride being directly against the metal silicide.

In some embodiments, memory circuitry comprises an array of memory cells individually comprising a transistor having a pair of source/drain regions and a gate comprising a wordline. A storage element is electrically coupled to one of the source/drain regions and a digitline is electrically coupled to the other of the source/drain regions.
At least one of the wordline and the digitline comprises polysilicon-comprising material, a metal silicide directly against the polysilicon of the polysilicon-comprising material, elemental tungsten, TiSixNy between the elemental tungsten and the metal silicide, and one of (a) or (b), with (a) being the TiSixNy is directly against the metal silicide, and (b) being titanium nitride is between the TiSixNy and the metal silicide, with the TiSixNy being directly against the titanium nitride and the titanium nitride being directly against the metal silicide.

In some embodiments, a semiconductor device comprises a memory array comprising at least one digit-line, at least one word-line, and at least one memory cell electrically coupled to the at least one digit-line and the at least one word-line. At least one peripheral transistor comprises a gate electrode and a pair of source/drain regions. The at least one digit-line comprises the gate electrode of the at least one peripheral transistor. The gate of the at least one peripheral transistor comprises polysilicon-comprising material, metal silicide-comprising material over the polysilicon-comprising material, composite material including titanium, silicon and nitrogen-comprising material, and tungsten-comprising material over the composite material. In one embodiment, the polysilicon-comprising material consists essentially of (or consists of) polysilicon; the metal silicide-comprising material consists essentially of (or consists of) metal silicide; the titanium, silicon and nitrogen-comprising material consists essentially of (or consists of) titanium, silicon and nitrogen; and the tungsten-comprising material consists essentially of (or consists of) elemental tungsten.
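For readers keeping track of the two claimed layer orders, the following minimal Python sketch encodes variants (a) and (b) with a trivial adjacency check. It is an illustration only: the layer names are descriptive labels rather than claim language, and constructions such as 10f and 10g would additionally retain silicon nitride of no more than 10 Angstroms between the TiSixNy and the tungsten.

# Illustrative sketch only; layer names are hypothetical labels, bottom to top.
# Variant (a): TiSixNy directly against the metal silicide.
STACK_A = ["polysilicon", "metal silicide", "TiSixNy", "elemental tungsten"]
# Variant (b): titanium nitride interposed between the silicide and TiSixNy.
STACK_B = ["polysilicon", "metal silicide", "titanium nitride", "TiSixNy",
           "elemental tungsten"]

def directly_against(stack, lower, upper):
    """True if `upper` sits immediately on `lower` in the listed stack order."""
    i = stack.index(lower)
    return i + 1 < len(stack) and stack[i + 1] == upper

assert directly_against(STACK_A, "metal silicide", "TiSixNy")
assert directly_against(STACK_B, "metal silicide", "titanium nitride")
assert directly_against(STACK_B, "titanium nitride", "TiSixNy")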
Some embodiments include methods of processing a unit containing crystalline material. A damage region may be formed within the crystalline material, and a portion of the unit may be above the damage region. A chuck may be used to bend the unit and thereby induce cleavage along the damage region to form a structure from the portion of the unit above the damage region. Some embodiments include methods of forming semiconductor-on-insulator constructions. A unit may be formed to have dielectric material over monocrystalline semiconductor material. A damage region may be formed within the monocrystalline semiconductor material, and a portion of the monocrystalline semiconductor material may be between the damage region and the dielectric material. The unit may be incorporated into an assembly with a handle component, and a chuck may be used to contort the assembly and thereby induce cleavage along the damage region.
CLAIMS

The invention claimed is:

1. A method of processing a unit comprising crystalline material, comprising: forming a damage region within the crystalline material, a portion of the unit being above the damage region; and utilizing a chuck to bend the unit and thereby induce cleavage along the damage region to form a structure from the portion of the unit above the damage region.

2. The method of claim 1 wherein the chuck is an electrostatic chuck.

3. The method of claim 2 wherein the electrostatic chuck is a Johnsen-Rahbek type chuck.

4. The method of claim 2 wherein the electrostatic chuck has a convex outer topography along a surface that engages the unit.

5. The method of claim 2 wherein the electrostatic chuck has a concave outer topography along a surface that engages the unit.

6. The method of claim 1 wherein the crystalline material comprises semiconductor material.

7. The method of claim 1 wherein the crystalline material comprises monocrystalline silicon.

8. The method of claim 7 wherein the damage region is induced with hydrogen and/or helium.

9. The method of claim 1 wherein the crystalline material comprises semiconductor material, and wherein the unit comprises a dielectric material over the semiconductor material; and further comprising: bonding the dielectric material of the unit to a handle component after forming the damage region; and wherein the structure is supported by the handle component after the cleavage.

10. The method of claim 9 wherein the crystalline semiconductor material is monocrystalline silicon and wherein the dielectric material comprises silicon dioxide.

11. The method of claim 10 wherein the handle component comprises a monocrystalline silicon wafer.

12. The method of claim 9 further comprising expanding the damage region with a thermal anneal after bonding the dielectric material of the unit to the handle component.

13. A method of forming a semiconductor-on-insulator construction, comprising: forming a unit comprising dielectric material over monocrystalline semiconductor material; forming a damage region within the monocrystalline semiconductor material, a portion of the monocrystalline semiconductor material being between the damage region and the dielectric material; attaching the unit to a handle component through the dielectric material to form an assembly comprising the handle component and the unit; and utilizing an electrostatic chuck to contort the assembly and thereby induce cleavage along the damage region and form the semiconductor-on-insulator construction; the semiconductor-on-insulator construction comprising the dielectric material as the insulator, and comprising at least some of said portion of the semiconductor material as the semiconductor.

14. The method of claim 13 wherein the electrostatic chuck has a curved outer topography along a surface that engages the assembly.

15. The method of claim 13 wherein the forming the damage region comprises implanting hydrogen into the monocrystalline semiconductor material.

16. The method of claim 15 wherein the hydrogen is implanted at a dose of from about 2x10^16 particles/cm^2 to about 5x10^16 particles/cm^2.

17. The method of claim 16 further comprising expanding the damage region with a thermal anneal after attaching the unit to the handle component and prior to inducing the cleavage.

18. The method of claim 15 wherein the hydrogen is implanted at a dose of at least about 1x10^17 particles/cm^2.

19.
The method of claim 18 wherein the unit is not exposed to a temperature in excess of 300°C during an interval between the hydrogen implant and the cleavage.

20. A method of forming a semiconductor-on-insulator construction, comprising: forming a unit comprising dielectric material over monocrystalline semiconductor material; forming a damage region within the monocrystalline semiconductor material, a portion of the monocrystalline semiconductor material being between the damage region and the dielectric material; attaching the unit to a handle component through the dielectric material to form an assembly comprising the handle component and the unit; and contorting the assembly along a curved outer surface of a chuck to thereby induce cleavage along the damage region and form the semiconductor-on-insulator construction; the semiconductor-on-insulator construction comprising the dielectric material as the insulator, and comprising at least some of said portion of the semiconductor material as the semiconductor.

21. The method of claim 20 wherein the contorting comprises engaging the assembly along the curved outer surface while the assembly is in a first orientation relative to the curved outer surface, and then flipping the assembly and engaging the assembly along the curved outer surface while the assembly is in a second orientation relative to the curved outer surface, with the second orientation being opposite to the first orientation.

22. The method of claim 20 wherein the chuck is an electrostatic chuck.

23. The method of claim 20 wherein the outer surface is comprised by a convex outer topography of the chuck.

24. The method of claim 20 wherein the outer surface is comprised by a concave outer topography of the chuck.
DESCRIPTION

METHODS OF PROCESSING UNITS COMPRISING CRYSTALLINE MATERIALS, AND METHODS OF FORMING SEMICONDUCTOR-ON-INSULATOR CONSTRUCTIONS

TECHNICAL FIELD

Methods of processing units comprising crystalline materials, and methods of forming semiconductor-on-insulator constructions.

BACKGROUND

Smart-cut technology is a process for forming semiconductor-on-insulator (SOI) constructions. An example process sequence that may be utilized in smart-cut technology is described by Bruel (M. Bruel, Electronics Letters, July 6, 1995; Vol. 31, No. 14, pp 1201-1202). The process sequence comprises formation of silicon dioxide over a first monocrystalline silicon wafer, followed by implantation of hydrogen ions into the wafer to form a damage region. The damage region is spaced from the silicon dioxide by an intervening portion of the monocrystalline silicon material of the wafer. Subsequently, the wafer is bonded to a handle component (which can be a second semiconductor wafer) by hydrophilic bonding through the silicon oxide. The damage region is then thermally treated with a two-phase process. The two-phase process comprises first heating the damage region to a temperature of from about 400°C to about 600°C to split the wafer along the damage region (forming an SOI structure having a thin layer of monocrystalline silicon bonded to the handle portion, and also forming a second structure corresponding to monocrystalline silicon which can be recycled into the process as a starting monocrystalline silicon wafer). The two-phase process then comprises heating the SOI structure to a temperature of greater than or equal to 1000°C to strengthen chemical bonds. Although Bruel states that the first phase of the thermal treatment utilizes a temperature of from about 400°C to about 600°C, it has been determined subsequent to Bruel that the first phase may be conducted utilizing a temperature of from about 200°C to about 600°C; and specifically that co-implants can be utilized to reduce the temperature utilized for such first phase. Subsequent processing of the SOI structure may comprise chemical-mechanical polishing (CMP) to reduce surface roughness along an outer surface of the thin layer of monocrystalline silicon (i.e., along the surface that had formed during the break along the damage region).

Existing smart-cut processes can be expensive due to the large amount of hydrogen utilized in forming the damage regions. Another problem with existing smart-cut processes can be that the surface formed by breaking the damage region may be very rough, so that extensive CMP is required, which can reduce throughput and increase costs. For the above-discussed reasons, it would be desirable to develop new smart-cut-type processes which can utilize less hydrogen than existing processes and/or which may have improved surfaces formed along the damage regions to reduce, or possibly even eliminate, subsequent CMP of such surfaces.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1-4 are diagrammatic, cross-sectional views of a portion of a construction at various process stages of an example embodiment process. FIG. 5 is a diagrammatic, cross-sectional view of the construction of FIG. 4, shown at a different scale than used in FIG. 4; with FIG. 5 showing an entirety of the construction. FIG. 6 is a diagrammatic, cross-sectional view of the construction of FIG. 5 adjacent a chuck at a process stage of an example embodiment process. FIGS. 7-9 are diagrammatic, cross-sectional views of a construction analogous to that of FIG.
5, shown at a different scale than in FIGS. 5 and 6, and shown at various process stages of an example embodiment process. FIGS. 10 and 11 show additional example process stages that may be utilized with the construction of FIG. 5. FIG. 12 shows a cross-sectional side view of a semiconductor wafer with various dimensions that may be utilized as input into an equation for ascertaining surface lateral stress (σ) in some embodiments. FIGS. 13-15 are diagrammatic views of example electrostatic chuck configurations that may be utilized in some example embodiments.

DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

Some embodiments include new smart-cut-type processing in which a curved surface of a chuck is utilized to contort a wafer after formation of a damage region, and to thereby enhance breakage along the damage region. Such may enable less hydrogen to be utilized in forming the damage region relative to conventional smart-cut processes; and/or may enable a surface of an SOI to be formed with substantially reduced roughness, which may eliminate a CMP step utilized in conventional smart-cut processes, or which may at least reduce the amount of CMP relative to conventional smart-cut processes. Any suitable chuck may be utilized in the embodiments described herein. In some embodiments, the chuck may be an electrostatic chuck; such as, for example, a Johnsen-Rahbek (J-R)-type chuck. Example embodiments are described with reference to FIGS. 1-15. FIGS. 1-9 illustrate an example embodiment smart-cut-type process.

Referring to FIG. 1, a portion of a construction 10 is illustrated. The construction comprises a crystalline material 12 having a dielectric material 14 thereover. In some embodiments, the construction 10 may be considered to correspond to a unit 16 comprising crystalline material. A "unit" comprising crystalline material is any construction comprising crystalline material. The "unit" may comprise the crystalline material alone, or in combination with one or more other materials; and in the shown embodiment of FIG. 1 the "unit" comprises the crystalline material in combination with the dielectric material 14. In some embodiments, the crystalline material 12 may comprise, consist essentially of, or consist of a semiconductor material; and may, for example, comprise, consist essentially of, or consist of monocrystalline silicon. In some embodiments, the monocrystalline silicon may be configured as a wafer of appropriate composition and dimension to be suitable for integrated circuit fabrication. The dielectric material 14 may comprise, consist essentially of, or consist of silicon dioxide in some embodiments. For instance, in some embodiments, the crystalline material 12 may comprise monocrystalline silicon, and the dielectric material 14 may comprise a region of silicon dioxide thermally grown across a surface of the monocrystalline silicon.

Referring to FIG. 2, a damage region 18 (diagrammatically illustrated with a dashed line) is formed within the crystalline material 12. The damage region may be formed with any suitable processing. In the shown embodiment, hydrogen 20 is implanted through the dielectric material 14 to form the damage region (with the implant being represented by the arrows 21). The hydrogen may be in any suitable form, and in some embodiments may comprise hydrogen ions. The implanted hydrogen may be provided at any suitable dosage.
In some embodiments, the implanted hydrogen may be provided at a dosage less than the conventional dosage utilized for forming a damage region with hydrogen in smart-cut processing (with a conventional dosage typically being about 1x10^17 particles/cm^2; with the term "particles" referring to the species of hydrogen present in the implant, such as hydrogen ions). In some embodiments, the implanted hydrogen may be provided at a dosage less than one-half of the conventional dosage, such as, for example, a dosage of from about one-quarter to about one-half of the conventional dosage. For instance, the hydrogen may be provided at a dosage of from about 2x10^16 particles/cm^2 to about 5x10^16 particles/cm^2. Although hydrogen is described in the specific example embodiment described above and in other example embodiments in this disclosure; in some embodiments, helium and/or other ions may be substituted for hydrogen, or utilized in addition to hydrogen, to form a damage region.

The utilization of a low dose of hydrogen may enable example embodiment processes of the present invention to be performed at reduced costs relative to conventional smart-cut-type processes. Further, the utilization of a lower dosage of hydrogen may increase throughput. For instance, it may take about 30 minutes to implant a conventional dose of hydrogen; and embodiments that utilize from about one-quarter to about one-half of the conventional dosage may be accomplished in from about one-quarter to about one-half of the conventional time. Although it may be advantageous to utilize a low dose of hydrogen in some embodiments, in other embodiments the dosage of hydrogen may be about the same as that utilized in conventional processes, and may be, for example, at least about 1x10^17 particles/cm^2. If the dose of hydrogen utilized to form the damage region is about the same as that utilized in conventional processes, the embodiments may not save hydrogen-usage costs as compared to conventional processes. However, the embodiments may still have advantages relative to conventional smart-cut processes (such as, for example, reducing subsequent CMP), as discussed below.

The damage region 18 is spaced from the dielectric material 14, and accordingly a portion 19 of crystalline material 12 is between the dielectric material and the damage region.

Referring to FIG. 3, the unit 16 is shown to be bonded to a handle component 24 to form an assembly 26. The illustrated handle component comprises a semiconductor wafer 25 and a dielectric material 27 adjacent semiconductor material of the wafer. In some embodiments, the semiconductor wafer 25 may comprise, consist essentially of, or consist of monocrystalline silicon, and the dielectric material 27 may comprise, consist essentially of, or consist of silicon dioxide. The handle component 24 may be bonded to the unit 16 by hydrophilic bonding of the dielectric material 27 of the handle component to the dielectric material 14 of the unit 16. Although the dielectric materials 14 and 27 are shown to be separate from one another in the assembly 26, in some embodiments the dielectric materials 14 and 27 may be the same composition as one another and may merge to form a single dielectric material between the crystalline material 12 and the semiconductor material 25.
Also, although both the handle component 24 and the unit 16 are shown to initially comprise dielectric materials, in other embodiments only one of the handle component and the unit 16 may initially have the dielectric material and may be bonded to the other of the handle component and the unit through such dielectric material.

Referring to FIG. 4, the damage region 18 is thermally treated to expand such damage region. Such thermal treatment may comprise similar conditions as conventional thermal treatments of a damage region during smart-cut processing and may, for example, comprise maintaining the damage region at a temperature of from about 200°C to about 600°C for a duration of about 30 minutes. The thermal processing of FIG. 4 may be optional in some embodiments. For instance, it may be advantageous to utilize the thermal processing of FIG. 4 when relatively low doses of hydrogen are initially implanted, and it may be unnecessary to utilize such thermal processing when conventional doses of hydrogen are implanted.

Referring to FIG. 5, the construction 10 of FIG. 4 is shown inverted relative to FIG. 4, and is shown at a different scale than FIG. 4. Specifically, the scale utilized in FIG. 5 enables an entire width of the assembly 26 to be illustrated. The dielectric material which joins the handle component 24 to the unit 16 is shown as a single dielectric material "14, 27"; rather than being shown as two separate dielectric materials, in order to simplify the drawing.

Referring to FIG. 6, the assembly is provided adjacent a chuck 30. The chuck comprises a curved outer surface 31 (specifically, a curved outer surface with a concave topography in the embodiment of FIG. 6), and the assembly 26 is directed toward such curved surface (as indicated by arrows 32) to contort the assembly. In the shown embodiment, the contortion comprises bending the assembly along the curved surface 31, but in other embodiments other contortions of the assembly may be accomplished with other embodiments of chucks. The chuck 30 may comprise any suitable chuck, and in some embodiments may comprise an electrostatic chuck. If the chuck 30 is an electrostatic chuck, it may be advantageous for the chuck to be a Johnsen-Rahbek-type electrostatic chuck for reasons analogous to the advantages discussed in an article by Qin and McTeer (S. Qin and A. McTeer, "Wafer dependence of Johnsen-Rahbek type electrostatic chuck for semiconductor processes," Journal of Applied Physics 102, 064901-1 (2007)). Example Johnsen-Rahbek-type electrostatic chucks are described below with reference to FIGS. 13-15. The assembly 26 is engaged by the surface 31 to contort the assembly. The illustrated degree of curvature of the surface of chuck 30 is exaggerated for purposes of illustration. In practice, the degree of curvature is chosen to be large enough to encourage separation within unit 16 along damage region 18, but small enough to avoid undesired cracking or breakage at other locations within assembly 26.

FIG. 7 shows assembly 26 at a process stage in which the curvature along the surface of the chuck has begun to induce separation along the damage region 18. The view of FIG. 7 is at a different scale than the views of FIGS. 5 and 6 to enable the separation along the damage region to be clearly illustrated. Also, the assembly 26 is shown in isolation from the chuck in FIG. 7, but the chuck would be engaged with the assembly 26 at the process stage of FIG.
7, and would be inducing the shown contortion of the assembly which causes the separation along the damage region. The illustrated contortion (shown as bending of the assembly) is exaggerated to emphasize such contortion. In practice, the amount of contortion would be chosen to be large enough to be sufficient to induce separation along the damage region, and yet small enough to avoid undesired detrimental effects to the assembly. FIG. 7 shows gaps 40 forming along edges of the damage region, and shows that the portion 19 of crystalline material 12 between the damage region and the dielectric material 14 is cleaved from a remaining portion 42 of the crystalline material. Although the gaps are shown initiating from edges of the damage region, in other embodiments the gaps may initiate at other locations along the damage region.

Referring to FIG. 8, the construction 10 is shown at a processing stage subsequent to that of FIG. 7, and specifically after the cleavage along damage region 18 (FIG. 7) has been completed. The construction has been split into two pieces 46 and 48. The piece 48 comprises the portion 42 of crystalline material 12 that had been on an opposing side of the damage region from the portion 19 of the crystalline material. The piece 46 comprises the portion 19 of the crystalline material 12 bonded to a handle comprising the wafer of the semiconductor material 25. The portion 19 may be considered to be a crystalline material structure 50. The pieces 48 and 46 may be separated from one another and subjected to additional processing, as implied by the arrows 45 and 47. The piece 48 may be re-utilized to form another unit 16 which can then be subjected to the processing of FIGS. 1-8. The piece 46 may be subjected to CMP, if desired, to smooth an upper surface of structure 50, and may be utilized as an SOI construction (with structure 50 being the semiconductor of the SOI, and with dielectric "14, 27" being the insulator of the SOI).

FIG. 9 shows an SOI construction 52 comprising the piece 46. The semiconductor material 25 may be cut in subsequent processing (not shown) to thin the amount of material 25 beneath the insulator portion of the SOI, if so desired. The structure 50 at the process stage of FIG. 9 may comprise all of the initial portion 19 that had been present before cleavage along damage region 18 (e.g., all of the portion 19 present at the processing stage of FIG. 6), or may comprise only some of such initial portion 19. For instance, part of the portion 19 may be lost in the processing stages described with reference to FIGS. 6-8; and/or may be lost in subsequent CMP.

The processing of FIGS. 6-8 utilizes a curved surface of a chuck to enhance cleavage along the damage region. The chuck-induced cleavage of FIGS. 6-8 may be conducted at any suitable temperature; and in some embodiments may be conducted at a temperature less than the temperatures commonly utilized to achieve cleavage with conventional smart-cut processes. For instance, in some embodiments the chuck-induced cleavage may be conducted at room temperature (i.e., about 22°C). Even though the cleavage can be conducted at room temperature, there may be embodiments in which a thermal anneal is still desired (such as, for example, for dopant activation, for strengthening chemical bonds, etc.). In such embodiments, the thermal anneal may be conducted simultaneously with the cleavage, or before the cleavage, or after the cleavage. An advantage of the processing of FIGS.
6-8 is that such may enable cleavage along a damage region while utilizing a lower dose of hydrogen to initially form the damage region than conventional processes. However, when a low dose of hydrogen is utilized to initially form the damage region, there may be significant roughness present along a surface of structure 50 (FIG. 8) after the cleavage along the damage region. Such roughness may be comparable to that resulting from conventional smart-cut processes, and may be removed with CMP analogous to that utilized in the conventional smart-cut processes.

Another advantage of the processing of FIGS. 6-8 may be manifested if the dose of hydrogen utilized to form the damage region is comparable to that utilized in conventional smart-cut processes. Specifically, the processing of FIGS. 6-8 may enable the damage region to be cleaved without thermally expanding the damage region (i.e., without the processing stage of FIG. 4). Thus, the unit 16 of FIG. 1 may have the damage region 18 formed by the hydrogen implant (i.e., the processing of FIG. 2), and may then not be subjected to thermal processing which expands the damage region (i.e., may not be exposed to a temperature in excess of 300°C) during the interval between the hydrogen implant and the cleavage along the damage region. The omission of the thermal expansion of the damage region may enable the cleavage to be attained while creating less roughness along the surface of structure 50 than would be created by conventional smart-cut processes. This may enable structure 50 to be suitable for utilization in SOI with significantly less CMP-smoothing of the surface of structure 50 than is utilized in conventional smart-cut processes, and in some embodiments may enable structure 50 to be utilized in SOI with no CMP-smoothing of the surface of structure 50.

Although the embodiment of FIG. 6 shows the assembly 26 contorted with unit 16 being oriented above handle 24, in other embodiments the assembly may be flipped as shown in FIG. 10. In some embodiments, the assembly 26 may be subjected to multiple contortions against one or more chucks to induce desired cleavage along a damage region. For instance, the assembly 26 may be contorted in the orientation of FIG. 6 and then flipped to be contorted in the orientation of FIG. 10, or vice versa.

The chuck 30 of FIGS. 6 and 10 is one of many configurations of a chuck having a curved outer surface that may be utilized in some embodiments. The chuck 30 had a concave outer surface. FIG. 11 shows a processing stage analogous to that of FIG. 6, but utilizing a chuck 60 having a curved outer surface 61 with a convex topography. The illustrated degree of curvature of the surface of chuck 60 is exaggerated for purposes of illustration. In practice, the degree of curvature is chosen to be large enough to encourage separation within unit 16 along damage region 18, but small enough to avoid undesired cracking or breakage at other locations within assembly 26.

The cleavage induced along a damage region with a curved surface of a chuck may be related to the surface lateral stress (σ) of a unit (e.g., the unit 16 of FIG. 1). FIG. 12 shows a cross-sectional side view of a semiconductor wafer 64 with various dimensions that may be utilized as input into an equation for ascertaining surface lateral stress (σ) in some embodiments. Specifically, the surface lateral stress may be characterized by Equation I:
σ = 12Eyt / (3L^2 - 4a^2)     (Equation I)

In Equation I, "E" is the Young's Modulus (168 GPa for Si), "y" is the total wafer vertical displacement, "t" is the total thickness of the wafer, "L" is the length of the wafer, and "a" is one-fourth of the length of the wafer. The wafer vertical displacement "y" relates to the amount of contortion induced by a chuck, and may be considered to correspond to, for example, the vertical displacement induced by the curved surface 31 in the embodiment of FIG. 6.

As discussed previously, the chucks utilized in various embodiments described herein may be electrostatic chucks. FIGS. 13-15 illustrate some example embodiments of electrostatic chucks that may be utilized. Each chuck may have an advantage for some embodiments, and a disadvantage for others. The voltages shown in FIGS. 13-15 are example voltages provided to assist the reader in understanding the operation of the chucks. Other voltages may be utilized in various embodiments. The chuck of FIG. 13 is a D-shaped bi-polar configuration, the chuck of FIG. 14 is a pie-shaped multi-polar configuration, and the chuck of FIG. 15 is a ring-shaped multi-polar configuration. The bi-polar and multi-polar configurations advantageously do not need any "real" ground because they have a "virtual" ground, and thus they may be readily utilized for processes occurring under either vacuum or atmosphere. The bi-polar and multi-polar configurations may also have benefits including low cost, uniform generation of forces, and reduced particle and metal contamination. The ring-shaped multi-polar configuration may be particularly attractive for electrostatic-force-enhanced cleavage along a damage region due to its uniform and uniaxial force, and flexible and programmable scheme.

Although the embodiments described above pertain to fabrication of SOI constructions, the invention includes embodiments directed toward other constructions comprising crystalline materials. Such other constructions may include, for example, Semiconductor-Metal-On-Insulator (SMOI) (which may be utilized, for example, for ultrahigh density vertical devices of three-dimensional DRAM and NAND), and Silicon-On-Polycrystalline Aluminum Nitride (SOPAN) (which may be utilized, for example, for LED fabrications).

The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The description provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation. The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections in order to simplify the drawings.

When a structure is referred to above as being "on" or "against" another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being "directly on" or "directly against" another structure, there are no intervening structures present. When a structure is referred to as being "connected" or "coupled" to another structure, it can be directly connected or coupled to the other structure, or intervening structures may be present.
In contrast, when a structure is referred to as being "directly connected" or "directly coupled" to another structure, there are no intervening structures present.

Some embodiments include a method of processing a unit comprising crystalline material. A damage region is formed within the crystalline material. A portion of the unit is above the damage region. A chuck is used to bend the unit and thereby induce cleavage along the damage region to form a structure from the portion of the unit above the damage region.

Some embodiments include a method of forming a semiconductor-on-insulator construction. A unit is formed to comprise dielectric material over monocrystalline semiconductor material. A damage region is formed within the monocrystalline semiconductor material. A portion of the monocrystalline semiconductor material is between the damage region and the dielectric material. The unit is attached to a handle component through the dielectric material to form an assembly comprising the handle component and the unit. An electrostatic chuck is utilized to contort the assembly and thereby induce cleavage along the damage region and form the semiconductor-on-insulator construction. The semiconductor-on-insulator construction comprises the dielectric material as the insulator, and comprises at least some of said portion of the semiconductor material as the semiconductor.

Some embodiments include a method of forming a semiconductor-on-insulator construction. A unit is formed to comprise dielectric material over monocrystalline semiconductor material. A damage region is formed within the monocrystalline semiconductor material. A portion of the monocrystalline semiconductor material is between the damage region and the dielectric material. The unit is attached to a handle component through the dielectric material to form an assembly comprising the handle component and the unit. The assembly is contorted along a curved outer surface of a chuck to thereby induce cleavage along the damage region and form the semiconductor-on-insulator construction. The semiconductor-on-insulator construction comprises the dielectric material as the insulator, and comprises at least some of said portion of the semiconductor material as the semiconductor.
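Purely as a worked illustration of Equation I above, the short Python sketch below evaluates the surface lateral stress for hypothetical wafer dimensions. The numeric inputs are assumptions for illustration, not values from the disclosure, and the denominator is written as 3L^2 - 4a^2, the standard bending form consistent with "a" being one-fourth of the wafer length, since the extracted rendering of Equation I was ambiguous.

# Hedged sketch: evaluates Equation I for assumed, illustrative inputs.
def surface_lateral_stress(E, y, t, L):
    """sigma = 12*E*y*t / (3*L**2 - 4*a**2), with a = L/4 per the text."""
    a = L / 4.0
    return 12.0 * E * y * t / (3.0 * L**2 - 4.0 * a**2)

E = 168e9    # Young's modulus of Si, in Pa (168 GPa, per the text)
t = 775e-6   # assumed wafer thickness, m (typical for a 300 mm wafer)
L = 0.30     # assumed wafer length (diameter), m
y = 200e-6   # assumed vertical displacement imposed by the chuck, m

sigma = surface_lateral_stress(E, y, t, L)
print(f"surface lateral stress ~ {sigma / 1e6:.2f} MPa")  # ~1.26 MPa here

For the throughput point made earlier: since implant time scales roughly linearly with dose, a quarter-to-half dose implant would take roughly a quarter to a half of the approximately 30-minute conventional implant time.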
Material stacks for perpendicular spin transfer torque memory (pSTTM) devices, pSTTM devices and computing platforms employing such material stacks, and methods for forming them are discussed. The material stacks include a cladding layer of predominantly tungsten on a protective layer, which is in turn on an oxide capping layer over a magnetic junction stack. The cladding layer reduces oxygen dissociation from the oxide capping layer for improved thermal stability and retention.
1. A material stack for a spin transfer torque memory device, the material stack comprising:
a magnetic junction comprising a free magnetic material layer;
a first layer comprising oxygen, said first layer being on said free magnetic material layer;
a second layer comprising iron, said second layer being on said first layer; and
a third layer on the second layer, wherein the third layer is predominantly tungsten.
2. The material stack according to claim 1, wherein the magnetic junction further comprises a fixed magnetic material layer, the fixed magnetic material layer and the free magnetic material layer have perpendicular magnetic anisotropy, the second layer further includes cobalt, and the third layer has not less than 99% tungsten by weight.
3. The material stack according to claim 1 or 2, wherein the third layer has a thickness of not less than 0.5 nm and not more than 5 nm.
4. The material stack according to claim 1 or 2, wherein the third layer has a thickness of not less than 2 nm and not more than 2.5 nm.
5. The material stack according to any one of claims 1 to 4, further comprising:
an electrode on the third layer, wherein the electrode includes at least one of ruthenium or tantalum.
6. The material stack according to claim 5, wherein the electrode includes a first electrode layer including ruthenium on the third layer and a second electrode layer including tantalum on the first electrode layer.
7. The material stack according to any one of claims 1 to 6, wherein the magnetic junction further comprises a tunnel barrier layer and a fixed magnetic material layer, the free magnetic material layer and the fixed magnetic material layer each include one or more of Co, Fe, or B, the tunnel barrier layer includes one or more of Mg and O, and the material stack further includes a synthetic antiferromagnetic (SAF) structure between the fixed magnetic material layer and a metal electrode.
8. The material stack according to any one of claims 1 to 6, wherein the free magnetic material layer is a first free magnetic material layer, and the magnetic junction further comprises:
a fixed magnetic material layer;
a tunnel barrier layer;
a second free magnetic material layer on the tunnel barrier layer; and
a fourth layer including a metal, the fourth layer being between the first free magnetic material layer and the second free magnetic material layer.
9. The material stack according to claim 1, wherein the magnetic junction further comprises a fixed magnetic material layer and a tunnel barrier layer, the fixed magnetic material layer, the free magnetic material layer, and the second layer each include Co, Fe, and B, the tunnel barrier layer includes Mg and O, the material stack further includes a synthetic antiferromagnetic (SAF) structure between the fixed magnetic material layer and a first metal electrode, the third layer includes not less than 99% tungsten by weight and has a thickness of not less than 1.5 nm and not more than 2.5 nm, and the material stack further includes a second metal electrode including tantalum on the third layer.
10. A non-volatile memory cell, comprising:
a first electrode;
a second electrode electrically coupled to a bit line of a memory array;
a perpendicular spin transfer torque memory (pSTTM) device between the first electrode and the second electrode, the pSTTM device including:
a magnetic junction comprising a free magnetic material layer;
a first layer comprising oxygen, said first layer being on said free magnetic material layer;
a second layer comprising iron, said second layer
being on said first layer; and
a third layer on the second layer, wherein the third layer is predominantly tungsten; and
a transistor having a first terminal electrically coupled to the first electrode, a second terminal electrically coupled to a source line of the memory array, and a third terminal electrically coupled to a word line of the memory array.
11. The non-volatile memory cell according to claim 10, wherein the second layer includes cobalt, and the third layer includes not less than 99% tungsten by weight.
12. The non-volatile memory cell according to claim 10 or 11, wherein the third layer has a thickness of not less than 0.5 nm and not more than 5 nm.
13. The non-volatile memory cell according to claim 10 or 11, wherein the third layer has a thickness of not less than 2 nm and not more than 2.5 nm.
14. The non-volatile memory cell according to any one of claims 10 to 13, wherein the first electrode includes a first electrode layer including ruthenium on the third layer and a second electrode layer including tantalum on the first electrode layer.
15. The non-volatile memory cell according to claim 10, wherein the magnetic junction further comprises a tunnel barrier layer and a fixed magnetic material layer, the fixed magnetic material layer, the free magnetic material layer, and the protective layer each include Co, Fe, and B, the tunnel barrier layer includes Mg and O, the pSTTM device further includes a synthetic antiferromagnetic (SAF) structure between the fixed magnetic material layer and the second electrode, and the third layer includes not less than 99.9% tungsten by weight and has a thickness of not less than 1.5 nm and not more than 2.5 nm.
16. A method of forming a magnetic tunnel junction material stack, comprising:
depositing a first amorphous CoFeB layer on a substrate;
depositing a first dielectric material layer on the first amorphous CoFeB layer;
depositing a second amorphous CoFeB layer on the first dielectric material layer;
depositing an oxide layer on the second amorphous CoFeB layer;
depositing a protective layer including at least Co and Fe on the oxide layer;
depositing a cladding layer on the protective layer, wherein the cladding layer is predominantly tungsten; and
annealing the magnetic tunnel junction material stack to convert the first amorphous CoFeB layer and the second amorphous CoFeB layer into polycrystalline CoFeB.
17. The method according to claim 16, wherein depositing the cladding layer comprises depositing the cladding layer to a thickness of not less than 0.5 nm and not more than 5 nm.
18. The method according to claim 16 or 17, wherein depositing the cladding layer comprises depositing the cladding layer to have not less than 99% tungsten by weight.
19. The method of any one of claims 16 to 18, wherein depositing the second amorphous CoFeB layer comprises depositing the second amorphous CoFeB layer on the first dielectric material layer, and depositing the oxide layer comprises depositing the oxide layer on the second amorphous CoFeB layer.
20. The method according to any one of claims 16 to 19, further comprising, before the annealing:
depositing a metal coupling layer on the second amorphous CoFeB layer; and
depositing a third amorphous CoFeB layer on the metal coupling layer.
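Claim 16 is, in effect, an ordered deposition recipe followed by an anneal. The minimal Python sketch below simply serializes that order; the step labels are descriptive only and do not correspond to any real process-controller API.

# Hedged sketch of the claim 16 sequence; labels are descriptive only.
STEPS = [
    ("deposit", "first amorphous CoFeB layer (on substrate)"),
    ("deposit", "first dielectric material layer"),
    ("deposit", "second amorphous CoFeB layer"),
    ("deposit", "oxide layer"),
    ("deposit", "protective layer (at least Co and Fe)"),
    ("deposit", "cladding layer (predominantly W; ~0.5-5 nm per claim 17)"),
    ("anneal", "convert both amorphous CoFeB layers to polycrystalline CoFeB"),
]

for op, what in STEPS:
    print(f"{op:>7}: {what}")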
Perpendicular spin transfer torque device with improved retention and thermal stability

BACKGROUND

Magnetic memory devices, such as spin-transfer torque memory (STTM) devices, include a magnetic tunnel junction (MTJ) for switching and detecting the state of the memory. The MTJ includes a fixed magnet and a free magnet separated by a barrier layer, with the fixed magnet and the free magnet having perpendicular magnetic anisotropy (PMA) (out of the plane of the substrate and/or the MTJ layers). When detecting the state of the memory, the magnetic tunnel junction resistance of the memory is established by the relative magnetization of the fixed magnet and the free magnet. When the magnetization directions are parallel, the magnetic tunnel junction resistance is in a low state, and when the magnetization directions are anti-parallel, the magnetic tunnel junction resistance is in a high state. The relative magnetization direction is provided or written to the memory by changing the magnetization direction of the free magnet while maintaining the magnetization direction of the fixed magnet (which is, as the name implies, fixed). The direction of magnetization of the free magnet is changed by passing a driving current, polarized by the fixed magnet, through the free magnet.

MTJ devices with PMA offer the possibility of high-density memories. However, scaling such devices to higher densities can be difficult in terms of device thermal stability and retention. Greater thermal stability is advantageously associated with longer memory element non-volatile lifetimes. Greater retention is advantageously associated with fewer failures of the non-volatile memory to maintain its state (parallel or anti-parallel). As scaling continues to smaller sizes, maintaining adequate thermal stability and retention becomes more difficult.

BRIEF DESCRIPTION OF THE DRAWINGS

The materials described herein are illustrated in the drawings by way of example and not by way of limitation. For simplicity and clarity of illustration, elements shown in the figures are not necessarily drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements. In the drawings:

FIG. 1 shows a cross-sectional view of a material layer stack for a perpendicular STTM device;

FIG. 2 illustrates a cross-sectional view of a material layer stack including an exemplary multilayer free magnetic material layer for a perpendicular STTM device;

FIG. 3 illustrates a cross-sectional view of a material layer stack including an exemplary synthetic antiferromagnetic (SAF) layer for a perpendicular STTM device;

FIG. 4 illustrates a cross-sectional view of a material layer stack including a plurality of free magnetic material layers and a synthetic antiferromagnetic layer for a perpendicular STTM device;

FIG. 5 shows a flowchart showing an exemplary process for fabricating a magnetic tunnel junction device structure;

FIGS. 6A, 6B, 6C, 6D illustrate side views of an exemplary magnetic tunnel junction device structure when performing a specific fabrication operation;

FIG. 7 shows a schematic diagram of a non-volatile memory device including a magnetic tunnel junction device structure having a cladding layer of predominantly tungsten;

FIG. 8 illustrates an exemplary cross-section die layout including the exemplary magnetic tunnel junction device structure of FIG. 7;

FIG.
FIG. 9 illustrates a system in which a mobile computing platform and / or a data server machine employs a magnetic tunnel junction device having a cladding layer that is mainly tungsten; and
FIG. 10 illustrates a functional block diagram of a computing device, all arranged in accordance with at least some embodiments of the present disclosure.

DETAILED DESCRIPTION

One or more embodiments or implementations will now be described with reference to the drawings. Although specific configurations and arrangements are discussed, it should be understood that this is done for exemplary purposes only. Persons skilled in the relevant art will recognize that other configurations and arrangements may be employed without departing from the spirit and scope of the description. It will be apparent to persons skilled in the relevant art that the techniques and / or arrangements described herein may also be used in a variety of other systems and applications beyond those described herein.

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, wherein like reference numerals may be used throughout to indicate corresponding or similar elements. It should be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. In addition, it should be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the scope of the claimed subject matter. It should also be noted that directions and references such as up, down, top, bottom, over, under, and so on may be used to facilitate discussion of the drawings and embodiments, and are not intended to limit the application of the claimed subject matter. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the claimed subject matter is defined by the appended claims and their equivalents.

Many details are set forth in the description below; however, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In some instances, well-known methods and apparatus are shown in block diagram form rather than in detail to avoid obscuring the present invention. Reference throughout the specification to "an embodiment" or "in an embodiment" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearance of the phrase "in an embodiment" in various places throughout the specification does not necessarily refer to the same embodiment of the invention. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment as long as the two embodiments are not designated as mutually exclusive.

The terms "coupled" and "connected", along with their derivatives, may be used herein to describe the structural relationship between components. It should be understood that these terms are not intended to be synonymous with each other.
Rather, in certain embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in physical or electrical contact with each other, directly or indirectly (with intervening elements between them), and / or that two or more elements operate or interact with each other (for example, as in a cause-and-effect relationship).

The terms "above", "below", "between", and "on" as used herein refer to the relative position of one layer or component with respect to other layers or components. For example, a layer disposed above or below another layer may be in direct contact with that other layer, or may have one or more intervening layers. Further, a layer disposed between two layers may be in direct contact with the two layers, or may have one or more intervening layers. In contrast, a first layer "on" a second layer is in direct contact with that second layer. Similarly, unless explicitly stated otherwise, a feature disposed between two features may be in direct contact with the adjacent features or may have one or more intervening features. Furthermore, the terms "substantially", "close", "about", "near", and "approximately" generally refer to being within +/- 10% of a target value. The term "layer" as used herein may include a single material or multiple materials.

The term "free" or "non-fixed" as used herein with reference to a magnet refers to a magnet whose magnetization direction can change along its easy axis when an external field or force (e.g., Oersted field, spin torque, etc.) is applied. In contrast, the terms "fixed" or "pinned" as used herein with reference to a magnet refer to a magnet whose magnetization direction is pinned or fixed along an axis and does not change in response to an applied external field (e.g., electric field, Oersted field, spin torque). As used herein, a perpendicularly magnetized magnet (or a perpendicular magnet, or a magnet with perpendicular magnetic anisotropy (PMA)) refers to a magnet having a magnetization that is substantially perpendicular to the plane of the magnet or device. For example, such a magnet has a magnetization directed at an angle in the range of 90 (or 270) degrees +/- 20 degrees with respect to the x-y plane of the device. Furthermore, the term "device" may generally refer to an apparatus according to the context in which the term is used. For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures having active and / or passive elements, and the like. Generally, a device is a three-dimensional structure having a plane along the x-y directions of an x-y-z Cartesian coordinate system and a height along the z direction. The plane of the device may also be the plane of an apparatus that includes the device.

The following describes magnetic tunnel junction devices, apparatuses, systems, computing platforms, material stacks, and methods related to a magnetic tunnel junction device having an MTJ stack with a free magnetic layer and a fixed magnetic layer separated by a tunnel barrier layer, and a cladding layer that is mainly tungsten over an oxide layer on the free magnet.

As described above, it may be advantageous to provide a magnetic tunnel junction device with improved thermal stability and retention, or to maintain such characteristics at an appropriate level of operation as the device scales.
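As a point of reference, thermal stability and retention are commonly quantified in the STTM literature (this is standard background, not language recited in this disclosure) by the thermal stability factor and the Néel-Arrhenius retention time:

    \Delta = \frac{E_b}{k_B T}, \qquad t_{\text{retention}} \approx \tau_0 \, e^{\Delta}

where E_b is the energy barrier separating the parallel and anti-parallel states of the free magnet, k_B is the Boltzmann constant, T is the absolute temperature, and \tau_0 is an attempt time typically taken to be on the order of 1 ns. Because retention grows exponentially with \Delta, even a modest loss of the free-layer anisotropy energy E_b (for example, through oxygen loss at the free-magnet / oxide interface, as discussed below) sharply degrades retention; the tungsten cladding described herein is directed at preserving that energy barrier.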
In some embodiments, the magnetic junction device includes a magnetic tunnel junction having a fixed magnet layer and a free magnet layer separated by a barrier layer, such that the fixed magnet layer and the free magnet layer have perpendicular magnetic anisotropy (PMA). PMA is characterized by a magnetic easy axis that is substantially perpendicular to the plane of the fixed and free magnet layers (i.e., in line with their thickness). Such PMA may be contrasted with in-plane magnetic anisotropy, which is parallel to, or in the plane of, the layer. The perpendicular magnetic anisotropy of the fixed magnet layer and the free magnet layer may be provided or established based on the thickness of the layer and / or the interface of the layer with its corresponding adjacent material. In addition, the magnetic tunnel junction device includes an oxide layer on the free magnet layer for improving the performance of the free magnet layer, and a protective layer on the oxide layer. The protective layer includes iron, and it protects the oxide layer during the deposition of subsequent layers. The magnetic tunnel junction device also includes a cladding layer, mainly of tungsten, on the protective layer. As used herein, the term "mainly" indicates that the named component is the component with the largest proportion in the layer or material. In some embodiments, the cladding layer is pure tungsten or near-pure tungsten (e.g., 99% tungsten or more).

The tungsten cladding can replace the tantalum material used previously, to improve the thermal stability and retention of the device. Tungsten may have a relatively low oxygen affinity (lower than the oxygen affinity of tantalum) and may retain oxygen in the oxide layer discussed previously. Such oxygen retention maintains iron-oxygen bonds at the interface of the free magnet layer and the oxide layer (which in some embodiments is magnesium oxide) to retain PMA in the free magnet layer (e.g., the storage layer of the device), which would otherwise be depleted as oxygen is lost. For example, the PMA of the free magnet layer is due to the material composition and structure of the free magnet layer, the thickness of the free magnet layer, and the interfaces between the free magnet layer and the directly adjacent materials in the stack. By maintaining oxygen at the interface between the free magnet layer and the oxide layer, the PMA of the free magnet layer is improved, and important performance characteristics such as the thermal stability and retention of the memory device are also improved.

FIG. 1 illustrates a cross-sectional view of a material layer stack 100 for a vertical STTM device, arranged in accordance with at least some embodiments of the present disclosure. For example, the material layer stack 100 may be incorporated into an MTJ device. The material layer stack 100 may be characterized as a magnetic tunnel junction device structure. As shown, the material layer stack 100 includes a terminal electrode 102 (e.g., a bottom electrode BE) over a substrate 101. The fixed magnetic material layer 103 is on or over the terminal electrode 102. The fixed magnetic material layer 103 may include a single fixed magnetic material or a stack of fixed magnetic materials. As shown, the tunnel barrier layer 104 is on or over the fixed magnetic material layer 103 (and between the fixed magnetic material layer 103 and the free magnetic material layer 105). The free magnetic material layer 105 is on or over the tunnel barrier layer 104.
The free magnetic material layer 105 may include a single free magnetic material or a stack of free magnetic materials magnetically coupled through an intervening metal coupling material layer. An oxide layer 106, which may be characterized as an oxide cap layer, is on the free magnetic material layer 105. In some embodiments, the oxide layer 106 is one or a combination of MgO, VO, WO, TaO, HfO, or MoO. The protective layer 107 is on or over the oxide layer 106 and protects the oxide layer 106 during the deposition of subsequent layers. The cladding layer 108, which is mainly tungsten, is over the protective layer 107 and provides reduced or no oxygen removal from the oxide layer 106, to achieve improved device performance as discussed further herein. A terminal electrode 109 (e.g., a top electrode TE) is on or over the tungsten cladding layer 108. It is to be noted that, in some embodiments, the order of the material layers 102-109 may be reversed, and / or the material layers 102-109 may extend laterally from the sidewalls of topographic features.

As shown, the material layer stack 100 provides a perpendicular magnetic system, such that the magnetic easy axes of the magnetic material layers 103, 105 are out of the plane of the substrate 101, in the z direction. The fixed magnetic material layer 103 may be composed of any material or material stack suitable for maintaining a fixed magnetization direction, and the free magnetic material layer 105 may be magnetically softer than the fixed magnetic material layer 103 (for example, its magnetization may rotate more easily between the parallel and anti-parallel states). The fixed magnetic material layer 103 may be characterized as a fixed magnet, a fixed magnet layer, or a fixed magnetic stack.

In some embodiments, the material layer stack 100 is based on a CoFeB / MgO system, which includes a fixed magnetic material layer 103 of CoFeB, a tunnel barrier layer 104 of MgO, and a free magnetic material layer 105 of CoFeB. That is, in some embodiments, the fixed magnetic material layer 103 includes one or more of Co, Fe, and B, the tunnel barrier layer 104 includes one or more of Mg and O, and the free magnetic material layer 105 includes one or more of Co, Fe, and B. In some embodiments, all of the CoFeB has a body-centered cubic (BCC) (001) out-of-plane texture, where texture herein refers to the dominant distribution of crystallographic orientation within the layers of the MTJ structure. In some embodiments, the CoFeB magnetic material layers 103, 105 are iron-rich alloys for improved magnetic perpendicularity. For example, iron-rich alloys are alloys that have more iron than cobalt. Other magnetic material systems, for example, Co, Fe, Ni systems, may be used for the fixed magnetic material layer 103 and / or the free magnetic material layer 105.

In some embodiments, the free magnetic material layer 105 is CoFeB. In some embodiments, the free magnetic material layer 105 has a thickness in a range of 1 to 2.5 nm. For example, a free magnetic material layer 105 having a thickness of less than 2.5 nm exhibits PMA. In some embodiments, the free magnetic material layer 105 has a thickness in a range of 0.6 to 1.6 nm. In addition, interfacial PMA may be provided by iron-oxygen hybridization between the free magnetic material layer 105 and the tunnel barrier layer 104, and between the free magnetic material layer 105 and the oxide layer 106.
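Why such thin free layers exhibit PMA follows from a standard thin-film energy balance (textbook magnetics background, not language recited in this disclosure). Neglecting bulk anisotropy, a free layer of thickness t with interfacial anisotropy energy K_i and saturation magnetization M_s has an effective perpendicular anisotropy of approximately

    K_{\text{eff}} \approx \frac{K_i}{t} - \frac{\mu_0 M_s^2}{2}

and the layer is perpendicular (PMA) when K_{\text{eff}} > 0, i.e., when t < 2K_i / (\mu_0 M_s^2). The iron-oxygen hybridization at the oxide interfaces supplies K_i, which is why CoFeB free layers only a nanometer or two thick are perpendicular, and why oxygen loss at the interface (which lowers K_i) degrades the perpendicularity that the tungsten cladding is intended to preserve.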
In some embodiments, the fixed magnetic material layer 103 has a thickness in a range of 0.1 to 1 nm. In some embodiments, the free magnetic material layer 105 has a thickness that is less than the thickness of the fixed magnetic material layer 103. In an embodiment, the fixed magnetic material layer 103 is composed of a single layer of CoFeB. In an embodiment, the fixed magnetic material layer 103 has a thickness in a range of 2 to 3 nm. As discussed, the fixed magnetic material layer 103 and the free magnetic material layer 105 have PMA; when the magnetization direction in the free magnetic material layer 105 and the magnetization direction in the fixed magnetic material layer 103 are anti-parallel (opposite), the material layer stack 100 is in a high resistance state, and when the magnetization direction in the free magnetic material layer 105 is parallel to the magnetization direction in the fixed magnetic material layer 103 (as shown), the material layer stack 100 is in a low resistance state. A state change occurs when a spin-polarized electron current passing through the fixed magnetic material layer 103, the tunnel barrier layer 104, and the free magnetic material layer 105 causes a change in the direction of magnetization in the free magnetic material layer 105. The free magnetic material layer 105 may be characterized as a free magnet, a free magnet layer, or a free magnetic stack.

The tunnel barrier layer 104 is composed of a material or material stack suitable for allowing current of majority spin to pass through the layer while impeding current of minority spin (i.e., acting as a spin filter). In some embodiments, the tunnel barrier layer 104 is or includes magnesium oxide (MgO). In some embodiments, the tunnel barrier layer 104 is or includes magnesium aluminum oxide (MgAlO). In some embodiments, the tunnel barrier layer 104 is or includes aluminum oxide (Al2O3). The tunnel barrier layer 104 may provide a template for solid-phase epitaxial crystallization (e.g., BCC with (001) texture) of the free magnetic material layer 105 and / or the fixed magnetic material layer 103. The tunnel barrier layer 104 may be characterized as a barrier layer, a tunnel layer, or an oxide layer.

The material layer stack 100 further includes an oxide layer 106 disposed on or over the free magnetic material layer 105 of the magnetic tunnel junction 111. In an embodiment, the oxide layer 106 is or includes MgO, such that the oxide layer includes Mg and O. In some embodiments, the oxide layer 106 is or includes one or more of VO, WO, TaO, HfO, or MoO. In one embodiment, the oxide layer 106 has a thickness of not less than 2 nm. In an embodiment, the oxide layer 106 has a thickness in a range of 1.5 to 4 nm. In an embodiment, the oxide layer 106 has a thickness in a range of 0.3 to 1.5 nm. Notably, the oxide layer 106 provides a source of oxygen for the oxygen-iron hybridization at the interface 112 between the free magnetic material layer 105 and the oxide layer 106. This oxygen-iron hybridization at the interface 112 provides the interfacial perpendicular anisotropy in the free magnetic material layer 105. As discussed, maintaining the interfacial perpendicular anisotropy improves the performance of the material layer stack 100 with respect to thermal stability and retention at the free magnetic material layer 105. Specifically, providing a cladding layer 108 that is mainly tungsten, rather than other candidate materials for this layer, maintains oxygen at the interface 112.
For example, tantalum, with its high oxygen affinity, placed near the oxide layer 106 (even on the opposite side of the protective layer 107) may disadvantageously act as an oxygen getter, causing the oxide layer 106 to become oxygen deficient at the interface 112, which results in poor PMA at the free magnetic material layer 105 and poor performance of the material layer stack 100, at least in terms of thermal stability and retention.

With continued reference to FIG. 1, the protective layer 107 is on or over the oxide layer 106. The protective layer 107 provides a barrier that protects the oxide layer 106 against, for example, direct physical sputtering damage during the deposition of subsequent layers (e.g., the tungsten cladding layer 108). In some embodiments, the protective layer 107 is an alloy whose individual constituent atoms have a lower atomic mass than the atomic mass of the material of the tungsten cladding layer 108 (for example, each constituent atom of the protective layer 107 has a lower atomic mass than tungsten). In some embodiments, the protective layer 107 has a thickness in a range of 0.3 to 1.5 nm. In an embodiment, the protective layer 107 is a CoFeB material. In an embodiment, the protective layer 107 includes one or more of Co, Fe, and B. In an embodiment, the protective layer 107 has a stoichiometric ratio of cobalt and iron and a thickness such that the CoFeB is non-magnetic. In an embodiment, the protective layer 107 includes cobalt and iron and, notably, does not include boron. In addition, the thickness of the protective layer 107 is chosen to prevent damage during the deposition of the tungsten cladding layer 108, as discussed.

The tungsten cladding layer 108 is on or over the protective layer 107. As discussed, the tungsten cladding layer 108 is mainly tungsten (i.e., the component with the largest proportion in the cladding layer 108 is tungsten). In some embodiments, the tungsten cladding layer 108 is pure tungsten or near-pure tungsten. In an embodiment, the tungsten cladding layer 108 has not less than 95% tungsten by weight (i.e., the mass fraction of tungsten in the tungsten cladding layer 108 is not less than 95%). In an embodiment, the tungsten cladding layer 108 has not less than 99% tungsten by weight. In an embodiment, the tungsten cladding layer 108 has not less than 99.9% tungsten by weight. Notably, in some embodiments, the tungsten cladding layer 108 is free of tantalum. Specifically, the tungsten cladding layer 108 may provide a tantalum-free thickness on the protective layer 107. The tungsten cladding layer 108 may be characterized as a tungsten protective layer, a tungsten cladding, and the like.

The tungsten cladding layer 108 may have any suitable thickness. In an embodiment, the tungsten cladding layer 108 has a thickness in a range of 0.5 to 5 nm (i.e., not less than 0.5 nm and not more than 5 nm). In an embodiment, the tungsten cladding layer 108 has a thickness of about 4 nm (e.g., not less than 3.8 nm and not more than 4.2 nm). In an embodiment, the tungsten cladding layer 108 has a thickness of about 3 nm (e.g., not less than 2.8 nm and not more than 3.2 nm). In an embodiment, the tungsten cladding layer 108 has a thickness of about 2 nm (e.g., not less than 1.8 nm and not more than 2.2 nm). In an embodiment, the tungsten cladding layer 108 has a thickness of not less than 2 nm and not more than 2.5 nm.
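To keep the layer ordering of FIG. 1 and the representative windows above in one place, the following minimal Python sketch models the material layer stack 100 and checks a candidate cladding against the composition and thickness windows discussed here (and refined further immediately below). The nominal thicknesses chosen are illustrative assumptions, not values mandated by this disclosure:

# Minimal sketch of material layer stack 100 (bottom to top), with
# representative values drawn from the ranges discussed above.
# Nominal thicknesses are illustrative assumptions, not mandated values.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    material: str
    thickness_nm: float
    tungsten_wt_pct: float = 0.0  # mass fraction of W, for the cladding check

STACK_100 = [
    Layer("terminal electrode 102", "Ru/Ta", 30.0),
    Layer("fixed magnetic layer 103", "CoFeB", 2.5),
    Layer("tunnel barrier 104", "MgO", 1.0),
    Layer("free magnetic layer 105", "CoFeB", 1.2),
    Layer("oxide cap 106", "MgO", 1.0),
    Layer("protective layer 107", "CoFe(B)", 0.5),
    Layer("tungsten cladding 108", "W", 2.2, tungsten_wt_pct=99.9),
    Layer("terminal electrode 109", "Ru/Ta", 50.0),
]

def cladding_ok(layer: Layer) -> bool:
    """Check the cladding against the windows above: mainly (>=95%, and
    preferably >=99%) tungsten by weight, and 0.5 to 5 nm thick (with
    2 to 2.5 nm cited as particularly advantageous for retention)."""
    return layer.tungsten_wt_pct >= 95.0 and 0.5 <= layer.thickness_nm <= 5.0

cap = STACK_100[6]
assert cladding_ok(cap), "cladding outside the discussed windows"
print(f"{cap.name}: {cap.thickness_nm} nm, {cap.tungsten_wt_pct}% W -> OK")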
Specifically, the inventors have found that reducing the thickness of the tungsten cladding layer 108 from 4 nm to 3 nm to 2 nm provides improved retention (e.g., fewer faulty bits) for most sizes of the material layer stack 100 (e.g., its lateral dimensions in x and y, with the y dimension extending into the page). For example, for a circular device having a diameter in the range of about 5 to 100 nm (e.g., in the x or y dimension, where y comes out of the page in FIG. 1), a thinner tungsten cladding layer 108 is advantageous. The thicknesses of the tungsten cladding layer 108 described above may have little impact on the resistivity of the material layer stack 100 while providing the benefits discussed herein. In addition, when tantalum is used in the terminal electrode 109, it may be advantageous to provide the terminal electrode 109 with a double-layer structure, such that a ruthenium layer is on the tungsten cladding layer 108 and a tantalum layer is on the ruthenium layer. Notably, the ruthenium layer in question is free of tantalum. In an embodiment, the ruthenium layer has a thickness in a range of 2 to 8 nm, where a thickness of about 5 nm is particularly advantageous, and the tantalum layer has a thickness in a range of 3 to 10 nm, where a thickness of approximately 6 nm is particularly advantageous.

The terminal electrodes 102, 109 may include any suitable conductive material. In an embodiment, the terminal electrode 102 is a material or material stack suitable for making electrical contact with the fixed magnetic material layer 103 of the material layer stack 100. In some embodiments, the terminal electrode 102 is a topographically smooth electrode. In an embodiment, the terminal electrode 102 is composed of ruthenium layers interleaved with tantalum layers. In an embodiment, the terminal electrode 102 is titanium nitride. In an embodiment, the terminal electrode 102 has a thickness in a range of 20 to 50 nm. In an embodiment, the terminal electrode 109 includes one or more of ruthenium, tantalum, or titanium nitride. In an embodiment, the terminal electrode 109 includes a material suitable for providing a hard mask for etching the material layer stack 100 to form a pSTTM device. In an embodiment, the terminal electrode 109 has a thickness in a range of 30 to 70 nm. In an embodiment, the terminal electrodes 102, 109 are the same metal. The terminal electrodes 102, 109 may be characterized as electrodes, electrode layers, metal electrodes, and the like.

In operation, a pSTTM device employing the material layer stack 100 (or any other material layer stack discussed herein) acts as a variable resistor, where the resistance of the device switches between a high state (anti-parallel magnetization) and a low state (parallel magnetization), as discussed, and where switching is achieved by passing a critical amount of spin-polarized current so as to change the magnetization orientation of the free magnetic layer relative to that of the fixed magnetic layer. By reversing the direction of the current, the magnetization in the free magnetic layer can be reversed. Since the free magnetic layer does not require power to maintain its relative magnetization orientation, pSTTM is a non-volatile memory.

FIG. 2 illustrates a cross-sectional view of a material layer stack 200 including an exemplary multilayer free magnetic material layer for a vertical STTM device, arranged in accordance with at least some embodiments of the present disclosure.
For example, the material layer stack 200 may be incorporated into any MTJ device discussed herein. Herein, like reference numerals are used to designate like materials and components. Notably, relative to the material layer stack 100, the material layer stack 200 includes at least a second free magnetic material layer 205 coupled to the free magnetic material layer 105 through an intervening metal coupling layer 202. That is, the metal coupling layer 202 is on or over the free magnetic material layer 105, and the second free magnetic material layer 205 is on or over the metal coupling layer 202. Including the metal coupling layer 202 and the second free magnetic material layer 205 increases the number of material interfaces within the magnetic tunnel junction 211 (relative to the magnetic tunnel junction 111), which improves the overall interfacial perpendicular anisotropy in the magnetic tunnel junction 211.

The second free magnetic material layer 205 may include any of the materials and characteristics discussed with respect to the free magnetic material layer 105. In an embodiment, both the free magnetic material layer 105 and the second free magnetic material layer 205 are or include CoFeB. In an embodiment, the free magnetic material layer 105 and the second free magnetic material layer 205 are CoFeB, and the free magnetic material layer 105 has a thickness greater than that of the second free magnetic material layer 205. In an embodiment, the CoFeB free magnetic material layer 105 has a thickness in a range of 0.5 to 2 nm, and the CoFeB second free magnetic material layer 205 has a thickness in a range of 0.3 to 1.5 nm. In some embodiments, the metal coupling layer 202 is or includes a transition metal such as, but not limited to, tungsten, molybdenum, vanadium, niobium, or iridium. In some embodiments, the metal coupling layer 202 has a thickness in a range of 0.1 to 1 nm.

FIG. 3 illustrates a cross-sectional view of a material layer stack 300 including an exemplary synthetic antiferromagnetic (SAF) layer 301 for a vertical STTM device, arranged in accordance with at least some embodiments of the present disclosure. For example, the material layer stack 300 may be incorporated into any MTJ device discussed herein. Notably, relative to the material layer stack 100, the material layer stack 300 includes a synthetic antiferromagnetic layer 301 between the terminal electrode 102 and the fixed magnetic material layer 103. That is, the synthetic antiferromagnetic layer 301 is on or over the terminal electrode 102, and the fixed magnetic material layer 103 is on or over the synthetic antiferromagnetic layer 301. The inclusion of the synthetic antiferromagnetic layer 301 improves the resilience of the fixed magnetic material layer 103 against accidental inversion.

In an embodiment, the free magnetic material layer 105 and the fixed magnetic material layer 103 have similar thicknesses, and the injected electron spin current used to change the magnetization orientation of the free magnetic material layer 105 (for example, from parallel to anti-parallel, or vice versa) would also affect the magnetization of the fixed magnetic material layer 103.
The inclusion of the synthetic antiferromagnetic layer 301 makes the fixed magnetic material layer 103 resistant to such changes in orientation, so that the free magnetic material layer 105 and the fixed magnetic material layer 103 can have similar thicknesses while still providing the fixed magnetization characteristic of the fixed magnetic material layer 103. In some embodiments, the free magnetic material layer 105 and the fixed magnetic material layer 103 have different thicknesses (for example, the fixed magnetic material layer 103 is thicker), and the synthetic antiferromagnetic layer 301 improves the resilience of the fixed magnetic material layer 103 against undesired accidental changes in its magnetization direction. In an embodiment, the synthetic antiferromagnetic layer 301 includes a non-magnetic layer between a first magnetic layer and a second magnetic layer. In some embodiments, the first magnetic layer and the second magnetic layer each include a metal such as, but not limited to, cobalt, nickel, platinum, or palladium. In an embodiment, the non-magnetic layer between the first magnetic layer and the second magnetic layer is or includes ruthenium. In an embodiment, the non-magnetic layer between the first magnetic layer and the second magnetic layer is ruthenium having a thickness in a range of 0.4 to 1 nm.

FIG. 4 illustrates a cross-sectional view of a material layer stack 400 including a plurality of free magnetic material layers and a synthetic antiferromagnetic layer for a vertical STTM device, arranged in accordance with at least some embodiments of the present disclosure. For example, the material layer stack 400 may be incorporated into any MTJ device discussed herein. Notably, relative to the material layer stack 100, the material layer stack 400 includes at least a second free magnetic material layer 205 coupled to the free magnetic material layer 105 through an intervening metal coupling layer 202, and a synthetic antiferromagnetic layer 301 between the terminal electrode 102 and the fixed magnetic material layer 103. That is, the metal coupling layer 202 is on or over the free magnetic material layer 105, the second free magnetic material layer 205 is on or over the metal coupling layer 202, the synthetic antiferromagnetic layer 301 is on or over the terminal electrode 102, and the fixed magnetic material layer 103 is on or over the synthetic antiferromagnetic layer 301. As discussed with respect to FIGS. 2 and 3, including the metal coupling layer 202 and the second free magnetic material layer 205 improves the overall interfacial perpendicular anisotropy in the magnetic tunnel junction 211, and including the synthetic antiferromagnetic layer 301 improves the resilience of the fixed magnetic material layer 103 against accidental inversion.

FIG. 5 shows a flowchart illustrating an exemplary process 500 for fabricating a magnetic tunnel junction device structure, arranged in accordance with at least some embodiments of the present disclosure. For example, the process 500 may be implemented to fabricate any of the material layer stacks 100, 200, 300, and 400 as discussed herein and / or a memory device including such a material layer stack. In the illustrated embodiment, the process 500 may include one or more operations as shown by operations 501-512, which are summarized in the sketch below. However, embodiments herein may include additional operations, certain operations may be omitted, or operations may be performed in an order other than the order provided.
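As a compact reading aid (the step descriptions below merely paraphrase the operations walked through in the following paragraphs and impose no additional requirements), the flow of FIG. 5 can be summarized in a few lines of Python:

# Compact summary of process 500 (operations 501-512) as discussed below.
# "Optional" marks only the layers the disclosure itself treats as
# optional (SAF, metal coupling, second free magnetic layer).
PROCESS_500 = [
    (501, "receive substrate (may include transistors / connectors)"),
    (502, "deposit terminal electrode layer; optional SAF structure layer"),
    (503, "deposit fixed magnetic material layer (amorphous CoFeB)"),
    (504, "deposit tunnel barrier layer (e.g., MgO dielectric)"),
    (505, "deposit free magnetic layer (amorphous CoFeB); optional metal "
          "coupling layer and optional second free magnetic layer"),
    (506, "deposit oxide cap layer"),
    (507, "deposit protective layer (at least Co and Fe)"),
    (508, "deposit cladding layer that is mainly tungsten"),
    (509, "deposit terminal electrode layer (also usable as hard mask)"),
    (510, "pattern the deposited layers (lithography and etch)"),
    (511, "anneal (about 350-400 C) and apply magnetic field (1-5 T)"),
    (512, "high-temperature STTM / MOS IC processing and interconnect"),
]

for op, step in PROCESS_500:
    print(f"operation {op}: {step}")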
In an embodiment, the process 500 may fabricate a magnetic tunnel junction device structure 650 over the substrate 101, as discussed further herein in connection with FIGS. 6A-6D.

Process 500 may begin at operation 501, where a substrate may be received for processing. The substrate may include any suitable substrate, for example, a silicon wafer or the like. In some embodiments, the substrate includes underlying devices, such as transistors and / or electrical connectors, and the like. In an embodiment, the substrate 101 may be received and processed as discussed in connection with FIGS. 6A-6D.

Processing may continue at operations 502-509, which may be collectively characterized as a setup or deposition operation 513. In each of operations 502-509, the indicated layers (the terminal electrode layer and optional SAF layer at operation 502, the fixed magnetic material layer at operation 503, the tunnel barrier layer at operation 504, the free magnetic material layer and the optional metal coupling layer and optional second free magnetic material layer at operation 505, the oxide cap layer at operation 506, the protective layer at operation 507, the mainly tungsten cladding layer at operation 508, and the terminal electrode layer at operation 509) are disposed on or over the one or more layers provided in the previous operation (or, for the terminal electrode layer provided at operation 502, on the received substrate).

Each of the indicated layers may be provided using any suitable technique or techniques, such as a deposition technique. In an embodiment, one, some, or all of the layers are deposited using a physical vapor deposition (sputter deposition) technique. It should be recognized that such a layer may be deposited directly on the layer provided in the previous operation (or, for the terminal electrode layer provided at operation 502, on the received substrate), or one or more intervening layers may be between the layer provided by the current operation and the layer provided by the previous operation. Moreover, some of the layers may be optional. In an embodiment, the layers provided in operation 513 are deposited in situ (e.g., in place and without movement or alteration between operations), without exposing the layers to the atmospheric environment between such depositions. For example, the layers provided in operation 513 may be deposited using sequential in-situ physical vapor deposition.

For example, at operation 502, a terminal electrode layer (e.g., metal) is disposed on or over the substrate received at operation 501 using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The terminal electrode layer may have any of the characteristics discussed herein in connection with the terminal electrode 102. As shown, at operation 502, an optional SAF structure layer (e.g., a first magnetic layer, a non-magnetic layer, and a second magnetic layer) may be provided on or over the terminal electrode layer. The SAF structure layer may have any of the characteristics discussed herein in connection with the synthetic antiferromagnetic layer 301.

At operation 503, a fixed magnetic material layer is disposed on or over the terminal electrode layer or the SAF structure layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The fixed magnet layer may have any of the characteristics discussed herein in connection with the fixed magnetic material layer 103.
In an embodiment, depositing the fixed magnetic material layer includes depositing an amorphous CoFeB layer over the substrate. At operation 504, a tunnel barrier layer is disposed on or over the fixed magnet layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The tunnel barrier layer may have any of the characteristics discussed herein in connection with the tunnel barrier layer 104. In an embodiment, depositing the tunnel barrier layer includes depositing a layer of dielectric material on or over the fixed magnetic material layer (e.g., amorphous CoFeB). At operation 505, a free magnetic material layer is disposed on or over the tunnel barrier layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The free magnetic material layer may have any of the characteristics discussed herein in connection with the free magnetic material layer 105. In an embodiment, depositing the free magnetic material layer includes depositing an amorphous CoFeB layer over the tunnel barrier layer (e.g., the layer of dielectric material). For example, the amorphous CoFeB free magnetic material layer and the amorphous CoFeB fixed magnetic material layer in question may be annealed later to convert them into polycrystalline CoFeB.

As shown, at operation 505, an optional metal coupling layer and a second free magnetic material layer may be disposed on or over the free magnetic material layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The metal coupling layer and the second free magnetic material layer may have any of the characteristics discussed herein in connection with the metal coupling layer 202 and the second free magnetic material layer 205. In an embodiment, depositing the second free magnetic material layer includes depositing an amorphous CoFeB layer. At operation 506, an oxide cap layer is disposed on or over the free magnetic material layer or the second free magnetic material layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The oxide cap layer may have any of the characteristics discussed herein in connection with the oxide layer 106. At operation 507, a protective layer is disposed on the oxide layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The protective layer may have any of the characteristics discussed herein in connection with the protective layer 107. In an embodiment, the protective layer includes cobalt and iron. In an embodiment, the protective layer includes cobalt, iron, and boron. At operation 508, a cladding layer that is mainly tungsten is disposed on or over the protective layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The tungsten cladding layer may have any of the characteristics discussed herein in connection with the tungsten cladding layer 108. At operation 509, a terminal electrode layer is disposed on or over the tungsten cladding layer using any suitable technique or techniques, such as a deposition technique (e.g., physical vapor deposition). The terminal electrode layer may have any of the characteristics discussed herein in connection with the terminal electrode 109.

Processing continues from operation 513 to operation 510, where the layers deposited at operation 513 are patterned.
As discussed, in some embodiments, one or more of the layers illustrated in operation 513 may be omitted. The layers received at operation 510 are patterned using any suitable technique or techniques, such as lithographic operations. In an embodiment, a photoresist pattern is provided, and the terminal electrode layer provided at operation 509 is patterned and used as a hard mask to pattern the underlying layers. Operation 510 generates patterned layers including a patterned bottom or first terminal electrode layer, a patterned SAF layer (if implemented), a patterned fixed magnetic material layer, a patterned tunnel barrier layer, a patterned free magnetic material layer, a patterned metal coupling layer (if implemented), a patterned second free magnetic material layer (if implemented), a patterned oxide (cap) layer, a patterned protective layer, a patterned tungsten cladding layer, and a patterned top or second terminal electrode layer.

Processing continues at operation 511, where the patterned layers are annealed, and a magnetic field may be applied to the patterned layers as needed, to generate a magnetic tunnel junction device structure. Such annealing is performed at any suitable temperature and duration to set the crystalline structure of the barrier layer and / or to drive boron out of one or more of the patterned free magnet layer, the patterned fixed magnet layer, or the other patterned magnetic layers in the stack (if applicable). In an embodiment, annealing transforms the amorphous CoFeB magnetic material layers into polycrystalline CoFeB. In an embodiment, the annealing operation has a maximum temperature in a range of about 350 to 400 °C. In addition, the magnetic field is applied for any suitable duration and with any suitable field strength (e.g., 1 to 5 Tesla). Such application of a magnetic field may establish the magnetism of one or more of the free magnetic material layer or the fixed magnetic material layer. The annealing and the magnetic field application may be performed separately or at least partially simultaneously. Further, in some embodiments, such an annealing process is performed after (e.g., immediately after) the terminal electrode layer is provided (e.g., via deposition, as discussed in connection with operation 509) and before the patterning (as discussed in connection with operation 510). Such an annealing process, performed after the terminal electrode layer is provided, can achieve the crystallinity discussed above before patterning.

Processing continues and ends at operation 512, where high-temperature STTM and / or metal oxide semiconductor (MOS) transistor integrated circuit (IC) processing is performed at a temperature of, for example, at least 400 °C. Any standard microelectronic fabrication processes, such as photolithography, etching, thin film deposition, planarization (e.g., CMP), and so on, may be performed to complete the interconnection of STTM devices implementing any of the material layer stacks 100, 200, 300, 400, or a subset of the material layers therein.

FIGS. 6A, 6B, 6C, and 6D illustrate side views of exemplary magnetic tunnel junction device structures, arranged in accordance with at least some embodiments of the present disclosure, as particular fabrication operations are performed. As shown in FIG. 6A, the magnetic tunnel junction device structure 600 includes a substrate 101. For example, the substrate 101 may be any substrate, such as a substrate wafer received at operation 501.
In some examples, the substrate 101 is or includes a semiconductor material, such as a single-crystal silicon substrate, silicon on insulator, or the like. In various examples, the substrate 101 includes metallization interconnect layers for integrated circuits, or electronic devices such as transistors, memories, capacitors, resistors, optoelectronic devices, switches, or any other active or passive electronic devices, separated by an electrically insulating layer (e.g., an interlayer dielectric, trench isolation, etc.).

FIG. 6B illustrates a magnetic tunnel junction device structure 620, similar to the magnetic tunnel junction device structure 600, after the formation of a terminal electrode layer 601, one or more SAF structure layers 602, a fixed magnetic material layer 603 (e.g., amorphous CoFeB), a tunnel barrier layer 604, a free magnetic material layer 605 (e.g., amorphous CoFeB), a metal coupling layer 606, a second free magnetic material layer 607 (e.g., amorphous CoFeB), an oxide cap layer 608, a protective layer 609, a cladding layer 610 mainly composed of tungsten, and a terminal electrode layer 611. The illustrated layers are formed using any suitable technique or techniques, for example, deposition techniques including physical vapor deposition, or any other operation discussed in connection with operation 513 or elsewhere herein. As shown, the illustrated layers may be formed over the substrate 101 in a blanket manner and in a planar manner (e.g., along the x-y plane of the substrate 101). As discussed, one or more of the SAF structure layer 602, the metal coupling layer 606, and the second free magnetic material layer 607 are optional and may not be provided in some embodiments. In an embodiment, the fixed magnetic material layer 603 is disposed on the terminal electrode layer 601. In an embodiment, the oxide cap layer 608 is disposed on the free magnetic material layer 605.

FIG. 6C shows a magnetic tunnel junction device structure 640, similar to the magnetic tunnel junction device structure 620, after the terminal electrode layer 601, the one or more SAF structure layers 602, the fixed magnetic material layer 603, the tunnel barrier layer 604, the free magnetic material layer 605, the metal coupling layer 606, the second free magnetic material layer 607, the oxide cap layer 608, the protective layer 609, the cladding layer 610 mainly composed of tungsten, and the terminal electrode layer 611 are patterned to provide or form a patterned terminal electrode layer 621, one or more patterned SAF structure layers 622, a patterned fixed magnetic material layer 623, a patterned tunnel barrier layer 624, a patterned free magnetic material layer 625, a patterned metal coupling layer 626, a patterned second free magnetic material layer 627, a patterned oxide cap layer 628, a patterned protective layer 629, a patterned cladding layer 630 mainly composed of tungsten, and a patterned terminal electrode layer 631. In an embodiment, a patterned resist layer is provided over the terminal electrode layer 611 using a photolithography technique, and the illustrated layers may be patterned using an etch technique. The pattern of the resist layer is transferred to the terminal electrode layer 611, and the terminal electrode layer 611 may be used as a hard mask to pattern the other layers. For example, the terminal electrode layer 611 and / or the patterned terminal electrode layer 631 may be characterized as a hard mask layer.
FIG. 6D illustrates a magnetic tunnel junction device structure 650, similar to the magnetic tunnel junction device structure 640, after one or more annealing operations and the application of a magnetic field to the magnetic tunnel junction device structure 640, to provide the terminal electrode 102, the synthetic antiferromagnetic layer 301, the fixed magnetic material layer 103, the tunnel barrier layer 104, the free magnetic material layer 105, the metal coupling layer 202, the second free magnetic material layer 205, the oxide layer 106, the protective layer 107, the cladding layer 108 mainly composed of tungsten, and the terminal electrode 109, as discussed herein in connection with FIG. 4. As discussed, in some embodiments, the operations described in connection with FIGS. 6A, 6B, 6C, and 6D exclude the formation of the synthetic antiferromagnetic layer 301 and / or the formation of the metal coupling layer 202 and the second free magnetic material layer 205, to form a magnetic tunnel junction device structure as discussed in connection with FIGS. 1, 2, and 3.

The annealing operations in question are performed at any suitable temperature and duration. In an embodiment, the annealing operation has a maximum temperature in a range of 350 to 400 °C. Such an annealing operation can crystallize the MgO in the tunnel barrier layer 104, and / or match the crystal structure of the tunnel barrier layer 104 with the adjacent CoFeB magnetic material layers, and / or drive boron out of one or more of the fixed magnetic material layer 103, the free magnetic material layer 105, and the second free magnetic material layer 205. In addition, the magnetic field is applied for any suitable duration and with any suitable field strength (e.g., 1 to 5 Tesla). Such magnetic field application establishes the magnetism of one or more of the fixed magnetic material layer 103, the free magnetic material layer 105, and the second free magnetic material layer 205. In an embodiment, the annealing and the magnetic field application may be performed at least partially at the same time, so that the annealing is performed in the presence of a magnetic field of 1 to 5 Tesla. For example, the annealing duration and the magnetic field application duration may at least partially overlap. In other embodiments, the annealing and the magnetic field application may be performed separately.

FIGS. 6A, 6B, 6C, and 6D illustrate exemplary process flows for making the material layer stacks 100, 200, 300, 400, or other magnetic tunnel junction material stacks or device structures discussed herein. In various examples, additional operations may be included or certain operations may be omitted. Specifically, the illustrated process may include: depositing a first amorphous CoFeB layer over a substrate; depositing a first dielectric material layer over the first amorphous CoFeB layer; depositing a second amorphous CoFeB layer over the first dielectric material layer; depositing an oxide layer on the second amorphous CoFeB layer; depositing a protective layer including at least Co and Fe on the oxide layer; depositing a cladding layer on the protective layer, wherein the cladding layer is mainly tungsten; and annealing the magnetic tunnel junction material stack to convert the first amorphous CoFeB layer and the second amorphous CoFeB layer into polycrystalline CoFeB.
FIG. 7 shows a schematic diagram of a non-volatile memory device 701 including a magnetic tunnel junction device structure having a cladding layer that is mainly tungsten, arranged in accordance with at least some embodiments of the present disclosure. For example, the non-volatile memory device 701 may provide a spin torque transfer memory (STTM) bit cell of a spin torque transfer random access memory (STTRAM). The non-volatile memory device 701 may be implemented in any suitable component or device (e.g., any component discussed in connection with FIGS. 9 and 10). In an embodiment, the non-volatile memory device 701 is implemented in a non-volatile memory coupled to a processor. For example, the non-volatile memory and the processor may be implemented by a system having any suitable form factor. In an embodiment, the system further includes an antenna and a battery, such that each of the antenna and the battery is coupled to the processor.

As shown, the non-volatile memory device 701 includes a magnetic tunnel junction device structure 710. In the illustrated embodiment, the magnetic tunnel junction device structure 710 includes the terminal electrode 102, the synthetic antiferromagnetic layer 301, the fixed magnetic material layer 103, the tunnel barrier layer 104, the free magnetic material layer 105, the metal coupling layer 202, the second free magnetic material layer 205, the oxide layer 106, the protective layer 107, the cladding layer 108 mainly composed of tungsten, and the terminal electrode 109 (that is, the material layer stack 400). In other embodiments, the non-volatile memory device 701 includes a magnetic tunnel junction device structure that implements the material layer stack 100, the material layer stack 200, or the material layer stack 300. As discussed herein, the fixed magnetic material layer 103, the free magnetic material layer 105, and the second free magnetic material layer 205 have perpendicular magnetic anisotropy. In addition, the magnetic tunnel junction device structure 710 includes the cladding layer 108 mainly of tungsten on the protective layer 107. As also shown, the magnetic tunnel junction device structure 710 includes the terminal electrodes 102, 109, which are coupled to the circuitry of the non-volatile memory device 701 discussed below.

The non-volatile memory device 701 includes a first metal interconnect 792 (for example, a bit line), a second metal interconnect 791 (for example, a source line), a transistor 715 having a first terminal 716, a second terminal 717, and a third terminal 718, and a third metal interconnect 793 (for example, a word line). The terminal electrode 109 of the magnetic tunnel junction device structure 710 is coupled to the first metal interconnect 792, and the terminal electrode 102 of the magnetic tunnel junction device structure 710 is coupled to the second terminal 717 of the transistor 715. In an alternative embodiment, the terminal electrode 102 of the magnetic tunnel junction device structure 710 is coupled to the first metal interconnect 792, and the terminal electrode 109 of the magnetic tunnel junction device structure 710 is coupled to the second terminal 717 of the transistor 715 (i.e., the magnetic tunnel junction device structure 710 is flipped as a whole). The first terminal 716 (e.g., the gate terminal) of the transistor 715 is coupled to the third metal interconnect 793, and the third terminal 718 of the transistor 715 is coupled to the second metal interconnect 791. Such connections may be made in any manner commonly used in the art.
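A minimal behavioral model of the 1T-1MTJ wiring just described may help fix the roles of the bit line, source line, and word line. In the Python sketch below, the resistance values are illustrative assumptions (as is the voltage polarity convention, which follows the example given in the next paragraphs), and the read and write rules mirror the operation described there: the word line gates the access transistor, and the polarity across the bit line and source line selects parallel versus anti-parallel.

# Behavioral sketch of the 1T-1MTJ bit cell of FIG. 7. Resistance values
# are illustrative assumptions; the polarity convention follows the
# example below (positive -> anti-parallel, negative -> parallel).
class SttmBitCell:
    R_P_OHM = 2_000    # assumed parallel (low) resistance
    R_AP_OHM = 4_000   # assumed anti-parallel (high) resistance

    def __init__(self):
        self.antiparallel = False  # free layer starts parallel to fixed layer

    def write(self, word_line: bool, bitline_minus_sourceline_v: float):
        """Drive spin-polarized current through the MTJ via the access
        transistor; the polarity of V(BL) - V(SL) selects the final state."""
        if not word_line:          # transistor off: cell not selected
            return
        self.antiparallel = bitline_minus_sourceline_v > 0

    def read(self, word_line: bool) -> int:
        """Sense the junction resistance; high (anti-parallel) reads as one
        logic state, low (parallel) as the other."""
        if not word_line:
            raise RuntimeError("cell not selected")
        r = self.R_AP_OHM if self.antiparallel else self.R_P_OHM
        return 1 if r > (self.R_P_OHM + self.R_AP_OHM) / 2 else 0

cell = SttmBitCell()
cell.write(word_line=True, bitline_minus_sourceline_v=+0.5)
assert cell.read(word_line=True) == 1   # anti-parallel / high resistance
cell.write(word_line=True, bitline_minus_sourceline_v=-0.5)
assert cell.read(word_line=True) == 0   # parallel / low resistance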
The non-volatile memory device 701 may further include additional read-write circuitry (not shown), a sense amplifier (not shown), a bit line reference (not shown), and so on, as will be understood by persons skilled in the art of non-volatile memory devices. A plurality of non-volatile memory devices 701 may be operatively connected to one another to form a memory array (not shown), and the memory array may be incorporated into a non-volatile memory.

In operation, the non-volatile memory device 701 uses the previously discussed magnetic tunnel junction for switching and detecting the memory state of the non-volatile memory device 701. For example, the non-volatile memory device 701 can be read by accessing or sensing, through the magnetic tunnel junction, the memory state implemented by the parallel or anti-parallel magnetic direction of the free magnetic material layer 105 (and / or the second free magnetic material layer 205). More specifically, the magnetoresistance of the magnetic tunnel junction of the non-volatile memory device 701 is established by the magnetic direction stored in the free magnetic material layer relative to that of the fixed magnetic material layer. When the magnetic directions are substantially parallel, the magnetic tunnel junction has a low resistance state, and when the magnetic directions are substantially anti-parallel, the magnetic tunnel junction has a high resistance state. Such a low or high resistance state can be detected via the circuitry of the non-volatile memory device 701. For a write operation, a driving current polarized by the fixed magnet layer 103 is passed through the free magnet layer, again via the circuitry of the non-volatile memory device 701, so that, for example, a positive voltage switches the magnetization direction of the free magnet layer to anti-parallel and a negative voltage switches the magnetization direction of the free magnet layer to parallel (or vice versa); the magnetic direction of the free magnet layer is thereby selectively switched between the parallel direction and the anti-parallel direction.

As discussed herein, the cladding layer 108, which is mainly tungsten, reduces the amount of oxygen removed from the oxide layer 106 relative to other cladding materials, so that the non-volatile memory device 701 has improved thermal stability and retention.

The magnetic tunnel junction device structures and material layer stacks discussed herein can be provided in any suitable device (e.g., STTM, STTRAM, etc.) or platform (e.g., computing, mobile, automotive, networking, etc.). Further, the non-volatile memory device 701, or any magnetic tunnel junction device structure, may be located on a substrate, such as a bulk semiconductor material that is part of a wafer. In an embodiment, the substrate is a bulk semiconductor material that is part of a chip that has been singulated from a wafer. One or more layers of interconnects and / or devices may be between the magnetic tunnel junction device structure and the substrate, and / or one or more layers of interconnects and / or devices may be between the magnetic tunnel junction device structure and the interconnects above the magnetic tunnel junction device structure.

FIG. 8 illustrates an exemplary cross-section die layout 800 including the exemplary magnetic tunnel junction device structure 710, arranged in accordance with at least some embodiments of the present disclosure.
For example, the cross-section die layout 800 shows a magnetic tunnel junction device structure 710 formed in its metal 3 (M3) and metal 2 (M2) layer regions. Although exemplified in connection with the magnetic tunnel junction device structure 710, any of the magnetic tunnel junction device structures or material layer stacks discussed herein can be implemented in the die layout of FIG. 8. As shown in FIG. 8, the cross-section die layout 800 shows an active region having a transistor MN including a diffusion region 801, a gate terminal 802, a drain terminal 804, and a source terminal 803. For example, the transistor MN may implement the transistor 715 (where the gate terminal 802 is the first terminal 716, the drain terminal 804 is the second terminal 717, and the source terminal 803 is the third terminal 718), the source line (SL) may implement the second metal interconnect 791, and the bit line may implement the first metal interconnect 792.

As shown, the source terminal 803 is coupled to the SL (source line) via polysilicon (poly) or a via, where the SL is formed in metal 0 (M0). In some embodiments, the drain terminal 804 is coupled to M0a (also in metal 0) through a via 805. The drain terminal 804 is coupled to the magnetic tunnel junction device structure 710 through via 0-1 (for example, a via layer connecting the metal 0 layer to the metal 1 layer), metal 1 (M1), via 1-2 (for example, a via layer connecting the metal 1 layer to the metal 2 layer), and metal 2 (M2). The magnetic tunnel junction device structure 710 is coupled to a bit line in metal 4 (M4). In some embodiments, the magnetic tunnel junction device structure 710 is formed in the metal 3 (M3) region. In some embodiments, the transistor MN is formed in or on the frontend of the die, and the magnetic tunnel junction device structure 710 is located in or on the backend of the die. In some embodiments, the magnetic tunnel junction device structure 710 is located in a backend metal layer or via layer (e.g., via 3).

Although illustrated with the magnetic tunnel junction device structure 710 formed in metal 3 (M3), the magnetic tunnel junction device structure 710 may be formed in any suitable layer of the cross-section die layout 800. In some embodiments, the magnetic tunnel junction device structure 710 is formed in the metal 2 and / or metal 1 layer regions. In such an embodiment, the magnetic tunnel junction device structure 710 may be directly connected to M0a, and a bit line may be formed in metal 3 or metal 4.

FIG. 9 illustrates a system 900, arranged in accordance with at least some embodiments of the present disclosure, in which a mobile computing platform 905 and / or a data server machine 906 employs a magnetic tunnel junction device having a cladding layer that is mainly tungsten. The data server machine 906 may be any commercial server, including, for example, any number of high-performance computing platforms disposed within a rack and networked together for electronic data processing, which, in an exemplary embodiment, includes a packaged device 950. For example, the device 950 (e.g., a memory or a processor) may include a magnetic tunnel junction device having a cladding layer that is mainly tungsten. In an embodiment, the device 950 includes a non-volatile memory that includes a magnetic tunnel junction device having a cladding layer that is mainly tungsten, such as any of the magnetic tunnel junction device structures and / or material layer stacks discussed herein.
As discussed below, in some examples, the device 950 may include a system on chip (SOC), such as the SOC 960 shown in connection with the mobile computing platform 905. The mobile computing platform 905 may be any portable device configured for each of electronic data display, electronic data processing, or wireless electronic data transmission. For example, the mobile computing platform 905 may be any of a tablet, a smartphone, a laptop, etc., and may include a display screen (e.g., a capacitive, inductive, resistive, or optical touch screen), a chip-level or package-level integrated system 910, and a battery 915. Although illustrated in connection with the mobile computing platform 905, in other examples a chip-level or package-level integrated system 910 and a battery 915 may be implemented in a desktop computing platform, an automotive computing platform, an IoT platform, and the like.

Whether provided in the integrated system 910 shown in the enlarged view 920 or as a separately packaged device within the data server machine 906, the SOC 960 may include a memory circuit and/or a processor circuit 940 (e.g., RAM, a microprocessor, a multi-core microprocessor, a graphics processor, etc.), a PMIC 930, a controller 935, and a radio frequency integrated circuit (RFIC) 925 (e.g., including a wideband RF transmitter and/or receiver (TX/RX)). As shown, one or more magnetic tunnel junction devices having a cladding layer that is predominantly tungsten, for example any of the magnetic tunnel junction device structures and/or material layer stacks discussed herein, may be employed via the memory circuit and/or processor circuit 940. In some embodiments, the RFIC 925 includes a digital baseband and an analog front-end module, which further includes a power amplifier on the transmit path and a low noise amplifier on the receive path. Functionally, the PMIC 930 may perform battery power regulation, DC-to-DC conversion, and so on, and thus has an input coupled to the battery 915 and an output that supplies current to the other functional modules. As further shown in FIG. 9, in an exemplary embodiment, the RFIC 925 has an output coupled to an antenna (not shown) to implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols designated as 3G, 4G, 5G, and beyond. The memory circuit and/or processor circuit 940 may provide memory functionality for the SOC 960, high-level control of the SOC 960, data processing, and the like. In alternative embodiments, each of the SOC modules may be integrated onto separate ICs coupled to a package substrate, interposer, or board.

FIG. 10 is a functional block diagram of a computing device 1000 arranged in accordance with at least some embodiments of the present disclosure. The computing device 1000, or a portion thereof, may be implemented, for example, via one or both of the data server machine 906 or the mobile computing platform 905, and includes a motherboard 1002 hosting a number of components, such as, but not limited to, a processor 1001 (e.g., an application processor) and one or more communication chips 1004, 1005. The processor 1001 may be physically and/or electrically coupled to the motherboard 1002. In some examples, the processor 1001 includes an integrated circuit die packaged within the processor 1001.
In general, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that can be stored in registers and/or memory.

In various examples, the one or more communication chips 1004, 1005 may also be physically and/or electrically coupled to the motherboard 1002. In other embodiments, the communication chip 1004 may be part of the processor 1001. Depending on its application, the computing device 1000 may include other components that may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM) 1007, 1008, non-volatile memory (e.g., ROM) 1010, a graphics processor 1012, flash memory, a global positioning system (GPS) device 1013, a compass 1014, a chipset 1006, an antenna 1016, a power amplifier 1009, a touch screen controller 1011, a touch screen display 1017, a speaker 1015, a camera 1003, and a battery 1018, as well as other components such as a digital signal processor, a cryptographic processor, an audio codec, a video codec, an accelerometer, a gyroscope, and a mass storage device (e.g., hard disk drive, solid state drive (SSD), compact disk (CD), digital versatile disk (DVD), etc.).

The communication chips 1004 and 1005 may enable wireless communication for the transfer of data to and from the computing device 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc. that communicate data through the use of modulated electromagnetic radiation via a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chips 1004, 1005 may implement any of a number of wireless standards or protocols, including but not limited to those described elsewhere herein. As discussed, the computing device 1000 may include a plurality of communication chips 1004, 1005. For example, a first communication chip may be dedicated to shorter-range wireless communications, such as Wi-Fi and Bluetooth, and a second communication chip may be dedicated to longer-range wireless communications, such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. In an embodiment, any component of the computing device 1000 may include or utilize one or more magnetic tunnel junction devices having a cladding layer that is predominantly tungsten, such as any of the magnetic tunnel junction device structures and/or material layer stacks discussed herein.

Although certain features set forth herein have been described with reference to various embodiments, this description should not be interpreted in a limiting sense. Accordingly, various modifications of the embodiments described herein, as well as other embodiments apparent to those skilled in the art to which the present disclosure pertains, are deemed to lie within the spirit and scope of the present disclosure. It should be recognized that the invention is not limited to the embodiments so described, but can be practiced with modification and alteration without departing from the scope of the appended claims.
For example, the above embodiments may include specific combinations of features as provided below.

The following examples pertain to further embodiments. In one or more first embodiments, a material stack for a spin-transfer torque memory device includes: a magnetic junction including a free magnetic material layer; a first layer including oxygen, the first layer on the free magnetic material layer; a second layer including iron, the second layer on the first layer; and a third layer on the second layer, wherein the third layer is predominantly tungsten.

In one or more second embodiments, further to the first embodiment, the magnetic junction further includes a fixed magnetic material layer, the fixed magnetic material layer and the free magnetic material layer have perpendicular magnetic anisotropy, the second layer further includes cobalt, and the third layer is not less than 99% tungsten by weight.

In one or more third embodiments, further to the first or second embodiments, the third layer has a thickness of not less than 0.5 nm and not more than 5 nm.

In one or more fourth embodiments, further to any of the first through third embodiments, the third layer has a thickness of not less than 2 nm and not more than 2.5 nm.

In one or more fifth embodiments, further to any of the first through fourth embodiments, the material stack further includes an electrode on the third layer, wherein the electrode includes at least one of ruthenium or tantalum.

In one or more sixth embodiments, further to any of the first through fifth embodiments, the electrode includes a first electrode layer including ruthenium on the third layer, and a second electrode layer including tantalum on the first electrode layer.

In one or more seventh embodiments, further to any of the first through sixth embodiments, the magnetic junction further includes a tunnel barrier layer and a fixed magnetic material layer, the free magnetic material layer and the fixed magnetic material layer each include one or more of Co, Fe, or B, the tunnel barrier layer includes one or more of Mg and O, and the material stack further includes a synthetic antiferromagnetic (SAF) structure between the fixed magnetic material layer and a metal electrode.

In one or more eighth embodiments, further to any of the first through seventh embodiments, the free magnetic material layer is a first free magnetic material layer, and the magnetic junction further includes: a fixed magnetic material layer; a tunnel barrier layer; a second free magnetic material layer on the tunnel barrier layer; and a fourth layer including a metal, the fourth layer between the first free magnetic material layer and the second free magnetic material layer.

In one or more ninth embodiments, further to any of the first through eighth embodiments, the magnetic junction further includes a fixed magnetic material layer and a tunnel barrier layer, the fixed magnetic material layer, the free magnetic material layer, and the second layer each include Co, Fe, and B, the tunnel barrier layer includes Mg and O, the material stack further includes a synthetic antiferromagnetic (SAF) structure between the fixed magnetic material layer and a first metal electrode, the third layer includes not less than 99% tungsten by weight and has a thickness of not less than 1.5 nm and not more than 2.5 nm, and the material stack further includes a second metal electrode including tantalum on the third layer.
In one or more tenth embodiments, a non-volatile memory cell includes: a first electrode; a second electrode electrically coupled to a bit line of a memory array; and, between the first electrode and the second electrode, a perpendicular spin transfer torque memory (pSTTM) device including a material stack having any of the elements and characteristics described in connection with the first through ninth embodiments.

In one or more eleventh embodiments, a method of forming a magnetic tunnel junction material stack includes: depositing a first amorphous CoFeB layer over a substrate; depositing a first dielectric material layer over the first amorphous CoFeB layer; depositing a second amorphous CoFeB layer on the first dielectric material layer; depositing an oxide layer over the second amorphous CoFeB layer; depositing, on the oxide layer, a protective layer including at least Co and Fe; depositing a cladding layer on the protective layer, wherein the cladding layer is predominantly tungsten; and annealing the magnetic tunnel junction material stack to convert the first amorphous CoFeB layer and the second amorphous CoFeB layer into polycrystalline CoFeB.

In one or more twelfth embodiments, further to the eleventh embodiment, depositing the cladding layer includes depositing the cladding layer to a thickness of not less than 0.5 nm and not more than 5 nm.

In one or more thirteenth embodiments, further to the eleventh or twelfth embodiments, depositing the cladding layer includes depositing a cladding layer having not less than 99% tungsten by weight.

In one or more fourteenth embodiments, further to any of the eleventh through thirteenth embodiments, depositing the second amorphous CoFeB layer includes depositing the second amorphous CoFeB layer on the first dielectric material layer, and depositing the oxide layer includes depositing the oxide layer on the second amorphous CoFeB layer.

In one or more fifteenth embodiments, further to any of the eleventh through fourteenth embodiments, the method further includes, prior to the annealing, depositing a metal coupling layer on the second amorphous CoFeB layer and depositing a third amorphous CoFeB layer on the metal coupling layer.

Although certain features set forth herein have been described with reference to various embodiments, this description should not be interpreted in a limiting sense. Accordingly, various modifications of the embodiments described herein, as well as other embodiments apparent to those skilled in the art to which the present disclosure pertains, are deemed to lie within the spirit and scope of the present disclosure.
A test structure used to measure metal bottom coverage in semiconductor integrated circuits. The metal is deposited in etched trenches, vias and/or contacts created during the integrated circuit manufacturing process. A predetermined pattern of probe contacts is disposed about the semiconductor wafer. Metal deposited in the etched areas is heated to partially react with the underlying and surrounding undoped material. The remaining unreacted metal layer is then removed, and an electrical current is applied to the probe contacts. The resistance of the reacted portion of metal and undoped material is measured to determine metal bottom coverage. Some undoped material may also be removed to measure metal sidewall coverage. The predetermined pattern of probe contacts is preferably arranged in a Kelvin or van der Pauw structure.
We claim: 1. An integrated circuit test structure comprising:a plurality of probe contacts deposited on a layer of undoped material according to a predetermined test pattern; at least one exposed area of undoped material, the at least one exposed area of undoped material disposed between a first and second probe contact; a metal layer deposited over the at least one exposed area of undoped material; and a layer of reacted metal and undoped material, the layer of reacted metal and undoped material disposed at the at least one exposed area of undoped material. 2. The test structure defined in claim 1, wherein the metal layer comprises titanium.3. The test structure defined in claim 1, wherein the undoped material comprises silicon.4. The test structure defined in claim 1, wherein the layer of reacted metal and undoped material comprises titanium silicide.5. The test structure defined in claim 1, wherein the predetermined test pattern comprises a Kelvin structure.6. The test structure defined in claim 1, wherein the predetermined test pattern comprises a van der Pauw structure.7. The test structure defined in claim 1, wherein the metal layer covers at least a portion of an isolation layer deposited over the layer of undoped material.8. The test structure defined in claim 7, wherein the isolation layer comprises silicon dioxide.
This application is a division of application Ser. No. 09/080,917, filed May 18, 1998, now U.S. Pat. No. 6,127,193.

BACKGROUND OF THE INVENTION

This invention relates to test structures used in the fabrication of semiconductor integrated circuits, and more particularly, to test structures used to measure metal bottom coverage in semiconductor integrated circuits and a method for creating such test structures.

Test structures are known in the art and are commonly used in the manufacture of semiconductor integrated circuits. Various types of test structures are used in the semiconductor industry in an effort to improve process precision and accuracy and to simplify manufacturing of an integrated circuit wafer. Test structures are also employed to help shrink the sizes of integrated circuits and the size of individual electrical elements within integrated circuits. They are also used in an effort to help improve and increase the processing speed of these devices.

One problem commonly encountered in the manufacture of integrated circuits is measuring the amount of metal deposited during the manufacturing process. Specifically, metal may be deposited at the bottom or lower level of a trench structure, or a contact or via structure, that is created during the manufacture of the integrated circuit. These trenches, vias and contacts are typically created by etching through a particular layer previously deposited during the manufacturing process. Metal and other materials are then deposited within these trenches, vias and/or contacts in order to establish electrical contact between different layers of the semiconductor sandwich.

In order to monitor and improve the manufacture of integrated circuits, it may be important to measure the thickness or amount of metal deposited in the bottoms of these etched structures. The conventional way to measure bottom coverage is to cross-section a sample integrated circuit wafer and take Scanning Electron Microscope (SEM) or Transmission Electron Microscope (TEM) micrographs. Sample preparation for SEM and TEM is tedious, and performing wafer maps with these techniques is impractical. This process is also time consuming and by nature destructive of the particular integrated circuit tested.

Other known test structures used in the manufacture of integrated circuits include conventional Kelvin structures and line resistance structures. These other techniques, however, cannot successfully be used to measure metal coverage in the bottom of trenches, vias and/or contacts. In addition, the current qualification method used to measure film deposition uniformity is to create a 4-point probe wafer map of deposited metal over a flat wafer. However, unlike measuring surface uniformity across the wafer surface, bottom coverage uniformity may be unrelated to top surface uniformity, and the area of greatest concern in semiconductor manufacturing is the amount of material deposited at the bottom of topography features.

What is lacking in the art is a test structure for quickly measuring the amount of metal deposited in the bottom of etched structures. The property of titanium silicide reacting and having high etch selectivity as compared to Ti alone could be used to pattern such structures. With such a non-invasive technique for measuring metal coverage, automated tests could be performed to measure metal bottom coverage, unlike the previously known cross-sectioning techniques.
As a result, many more integrated circuits can be monitored and/or tested during manufacture in order to improve device yield and other operating parameters.

BRIEF SUMMARY OF THE INVENTION

In view of the above, a test structure for measuring metal bottom coverage, and a method for creating the test structure, is provided. According to the method of the invention, a layer of undoped material is deposited according to a predetermined test structure over a first isolation layer. A second isolation layer is deposited over the undoped material. The second isolation layer is then etched in a predetermined manner. A layer of metal is deposited over the exposed areas of the undoped material. Heat is then applied to the metal layer. A current is next applied to the predetermined test pattern, and the electrical resistance of the test pattern is measured.

According to the test structure of the invention, a plurality of probe contacts are deposited on a layer of undoped material according to a predetermined test pattern. At least one area of exposed undoped material is disposed between a first and a second probe contact. A metal layer is deposited over the at least one exposed area of undoped material. A layer of reacted metal and undoped material is disposed at the at least one exposed area of undoped material.

Through the electronic measurement of metal bottom coverage, many wafer samples may be measured quickly and repeatably. Unlike SEM or TEM cross-sectional analysis, wafer maps may be easily produced and used for in-line measurements and equipment qualification. Moreover, electrical measurements can be repeated, unlike the previously known cross-sectioning tests. The step coverage of metals at the bottom of a trench structure may also be more easily assessed. The invention also improves the precision and accuracy of, and simplifies, the manufacturing of integrated circuits.

These and other features and advantages of the invention will become apparent upon a review of the following detailed description of the presently preferred embodiments of the invention, taken in conjunction with the appended drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a cross-sectional view of a test structure made according to one presently preferred embodiment of the invention. FIG. 2 shows the etching of a trench, contact or via structure in a layer of silicon dioxide. FIG. 3 shows the deposition of a metal layer within the structure shown in FIG. 2. FIG. 4 illustrates the interaction between the metal layer deposited in FIG. 3 and the underlying layer of undoped silicon. FIG. 5 shows the remaining reacted metal after heating and etching away the unreacted metal. FIG. 6 shows one alternate embodiment of the invention where a portion of silicon dioxide is also removed from the trench, via or contact structure. FIG. 7 presents a top plan view of one presently preferred mask pattern for the test structure shown in FIG. 1.

DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENTS OF THE INVENTION

Referring now to the drawings, where like reference numerals refer to like elements throughout, one presently preferred test structure 10 used to measure the amount of metal deposited on the bottom of trenches and vias/contacts (bottom coverage) is shown in FIG. 1. The measurement is preferably performed electrically or by profilometer, which is quick and allows wafer mapping. This technique is designed to measure the amount of material on the bottom of topography features. As shown in FIG. 1,
the test structure 10 is constructed from layers 12 of deposited films over a substrate 14. In one preferred embodiment of the invention, in order to ensure electrical isolation for the test structure 10, a thin oxide layer 22 is deposited between the substrate 14 and an undoped silicon layer 18. As those skilled in the art will appreciate, although undoped silicon is the presently preferred medium for the test structure, lightly or partially doped materials can also be employed without departing from the spirit and scope of the invention. A dielectric layer 16 is then deposited over the undoped silicon layer 18. Nominal ranges for film thicknesses are:

Film                  Thickness
Oxide layer 22        100-1000 Angstrom
Undoped layer 18      2000 Angstrom
Dielectric layer 16   1000-20,000 Angstrom

As those skilled in the art will appreciate, SiO2 is the preferred dielectric material, but other insulators, such as Si3N4 or spin-on glass, may be used without departing from the spirit and scope of the invention.

Referring to FIG. 2, the SiO2 dielectric layer 16 is preferably patterned using standard semiconductor photolithography and SiO2 etching techniques. A sample test pattern is shown in FIG. 7, but other test patterns are possible (see below). As shown in FIG. 2, in one embodiment, the etch is preferably stopped at the undoped silicon layer 18. Alternatively, a portion of the undoped silicon layer 18 may be etched as well (about 1000-3000 Angstrom) in order to also study bottom sidewall coverage.

Referring to FIG. 3, the metal layer 20 to be studied is deposited within the SiO2 dielectric layer 16. The metal layer 20 may be Co, Ti, Cu, Ni, or any other metal which reacts with silicon. Deposition techniques that may be used include sputter or evaporation Physical Vapor Deposition (PVD), Long Throw PVD, Collimated PVD, Chemical Vapor Deposition, and Ionized Metal Deposition. Nominal deposited film thicknesses are preferably between 100 and 1000 Angstrom. Once the metal layer 20 is deposited, the wafer (not shown) is raised to a high temperature, which causes the metal layer 20 to react with the undoped silicon layer 18. Preferably, the nominal temperature should be 650 degrees Celsius applied for 60 seconds. Only the metal contacting the silicon layer 18 reacts, so only the metal 24 at the bottom 26 of the feature becomes TiSi2, as shown in FIG. 4.

The remaining metal layer 20 is selectively etched using standard semiconductor etchants, leaving the structure shown in FIG. 5. In the presently preferred embodiment, the SiO2 dielectric layer 16 is not etched, but an etchant which etches the SiO2 layer 16 may alternatively be used. One preferred etchant that will etch Ti and SiO2 is HF acid. An alternate etchant suitable to etch deposited Cu is HNO3. Because undoped polycrystalline Si has very high resistivity, the TiSi2 24 is now electrically isolated and patterned, as shown in FIG. 5. Alternatively, if the SiO2 layer 16 is stripped in addition to the Ti metal layer 20 as shown in FIG. 6, then a profilometer or Atomic Force Microscope could be used to measure the actual profile of the TiSi2 line 24. In this embodiment, an alternative mask pattern (not shown) of an array of contacts/vias may be used to measure the amount of deposited metal in the bottom of contacts/vias.

A top plan view of the presently preferred mask pattern 30 suitable for making the aforementioned electrical measurement is shown in FIG. 7. In the preferred embodiment, a conventional Kelvin test structure is used to measure the line resistance of the reacted TiSi2 layer 24. From the line resistance, the amount of metal deposited on the bottom 26 of trenches may be calculated.
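As a rough illustration of that calculation, the sketch below converts a measured Kelvin line resistance into an estimated silicide thickness at the trench bottom using R = rho*L/(W*t), i.e., t = rho*L/(W*R). This is a minimal sketch under stated assumptions: the TiSi2 resistivity value, the uniform-line geometry, and the function names are illustrative and are not taken from the patent.

#include <stdio.h>

/* Estimate silicide thickness at the bottom of a trench from a
 * 4-point (Kelvin) line resistance measurement.
 *
 * Model: R = rho * L / (W * t)  =>  t = rho * L / (W * R)
 * where rho is the resistivity of the reacted film (TiSi2),
 * L and W are the line length and width defined by the mask,
 * and R is the measured resistance. */
double bottom_thickness_m(double rho_ohm_m, double length_m,
                          double width_m, double resistance_ohm)
{
    return rho_ohm_m * length_m / (width_m * resistance_ohm);
}

int main(void)
{
    double rho = 15e-8;   /* ~15 micro-ohm-cm, typical for C54 TiSi2 */
    double L   = 100e-6;  /* 100 um line between the voltage taps    */
    double W   = 1e-6;    /* 1 um trench width                       */
    double R   = 300.0;   /* measured Kelvin resistance, ohms        */

    double t = bottom_thickness_m(rho, L, W, R);
    printf("Estimated TiSi2 bottom thickness: %.1f Angstrom\n",
           t * 1e10);    /* prints ~500 Angstrom for these inputs   */
    return 0;
}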
Those skilled in the art will appreciate that the open pad areas 32 used to make probe contacts do not affect the resistance measurement because a 4-point measurement is made. Because the silicon layer 18 is undoped, the TiSi2 layer 24 is also effectively electrically isolated. The mask pattern 30 shown in FIG. 7 is used to measure the resistance. This novel masking technique creates a Kelvin structure that is used to measure the amount of material on the bottom of structures. As those skilled in the art will appreciate, other resistance structures may be used without departing from the spirit and scope of the invention. Such structures include straight-wire resistance test structures or area test structures.

It is to be understood that a wide range of changes and modifications to the embodiments described above will be apparent to those skilled in the art and are contemplated. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of the invention.
Systems, apparatuses and methods may provide for detecting an issued request in a queue that is shared by a plurality of domains in a memory architecture, wherein the plurality of domains are associated with non-uniform access latencies. Additionally, a destination domain associated with the issued request may be determined. Moreover, a first set of additional requests may be prevented from being issued to the queue if the issued request satisfies an overrepresentation condition with respect to the destination domain and the first set of additional requests are associated with the destination domain. In one example, a second set of additional requests are permitted to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.
1.A latency-aware computing system comprising:a memory architecture comprising a plurality of domains, at least two of the domains having different associated memory access latencies;a switch interconnecting two or more of the plurality of domains;a queue shared by the plurality of domains;a queue monitor for detecting an issued request in the queue;a system address decoder for determining a target domain associated with the issued request; anda request arbiter for preventing a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.2.The system of claim 1, wherein said request arbiter is for allowing a second set of additional requests to be issued to said queue while preventing said first set of additional requests from being issued to said queue, and wherein the second set of additional requests is associated with one or more remaining domains in the plurality of domains.3.The system of claim 1, further comprising a core for implementing a credit policy with respect to said target domain to prevent said first set of additional requests from being issued.4.The system of claim 1, further comprising a throttle component for transmitting a throttle signal to a core, wherein said throttle signal indicates said over-representation condition.5.The system of claim 1, wherein said system address decoder transmits a decoding result to a core initiating said issued request, and wherein said decoding result indicates said issued request is associated with said target domain.6.The system of any of claims 1 to 5, further comprising one or more cores for predicting that the first set of additional requests are associated with the target domain.7.The system of claim 1, further comprising one or more of the following:a processor communicatively coupled to the memory architecture;a display communicatively coupled to the memory architecture;a network interface communicatively coupled to the processor; ora battery communicatively coupled to the processor.8.A cache proxy device comprising:a queue monitor for detecting an issued request in a queue shared by a plurality of domains in a memory architecture, wherein at least two of the domains have different associated memory access latencies;a system address decoder for determining a target domain associated with the issued request; anda request arbiter for preventing a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.9.The apparatus of claim 8, wherein the request arbiter is configured to allow a second set of additional requests to be issued to the queue while preventing the first set of additional requests from being issued to the queue, and wherein the second set of additional requests is associated with one or more remaining domains in the plurality of domains.10.The apparatus of claim 8, further comprising a core for implementing a credit policy with respect to said target domain to prevent said first set of additional requests from being issued.11.The apparatus of claim 8, further comprising a throttle component for transmitting a throttle signal to a core, wherein said throttle signal indicates said over-representation condition.12.The apparatus of
claim 8, wherein the system address decoder transmits a decoding result to a core that initiated the issued request, and wherein the decoding result indicates that the issued request is associated with the target domain.13.The apparatus of any one of claims 8 to 12, further comprising one or more cores for predicting that the first set of additional requests are associated with the target domain.14.A method of operating a caching proxy device, comprising:detecting an issued request in a queue shared by a plurality of domains in a memory architecture, wherein at least two of the domains have different associated memory access latencies;determining a target domain associated with the issued request; andpreventing a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.15.The method of claim 14, further comprising allowing a second set of additional requests to be issued to said queue while preventing said first set of additional requests from being issued to said queue, wherein said second set of additional requests is associated with one or more remaining domains in the plurality of domains.16.The method of claim 14, wherein preventing the first set of additional requests from being issued comprises implementing a credit policy for the target domain in a core.17.The method of claim 14, wherein preventing the first set of additional requests from being issued comprises transmitting a throttle signal to a core, wherein the throttle signal indicates the over-representation condition.18.The method of claim 14, further comprising transmitting a decoding result to a core initiating the issued request, wherein the decoding result indicates that the issued request is associated with the target domain.19.The method of any of claims 14 to 18, further comprising predicting, in one or more cores, that the first set of additional requests are associated with the target domain.20.At least one computer readable storage medium comprising a set of instructions that, when executed by a computing device, cause the computing device to:detect an issued request in a queue shared by a plurality of domains in a memory architecture, wherein at least two of the domains have different associated memory access latencies;determine a target domain associated with the issued request; andprevent a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.21.The at least one computer readable storage medium of claim 20, wherein the instructions, when executed, cause the computing device to allow a second set of additional requests to be issued to the queue while preventing the first set of additional requests from being issued to the queue, and wherein the second set of additional requests is associated with one or more remaining domains in the plurality of domains.22.The at least one computer readable storage medium of claim 20, wherein the instructions, when executed, cause the computing device to implement, in a core, a credit policy for the target domain to prevent the first set of additional requests from being issued.23.The at least one computer readable storage medium of claim 20, wherein the instructions, when executed, cause the computing device to send a throttle signal to a core, and wherein
the throttle signal indicates the over-representation condition.24.The at least one computer readable storage medium of claim 20, wherein the instructions, when executed, cause the computing device to transmit a decoding result to a core that initiated the issued request, and wherein the decoding result indicates that the issued request is associated with the target domain.25.A cache proxy device comprising means for performing the method of any one of claims 14 to 18.
Non-uniform memory access latency adjustment for bandwidth quality of service

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority to US Patent Application No. 14/998,085, filed on December 24, 2015.

TECHNICAL FIELD

Embodiments generally relate to memory architectures. More specifically, embodiments relate to non-uniform memory access latency adjustments for achieving bandwidth quality of service.

BACKGROUND

Recent developments in memory technology may lead to the emergence of more advanced memory architectures to complement and/or replace conventional dynamic random access memories (DRAMs). Thus, a given memory architecture in a computing system may include many different memory pools, each with different access latency, bandwidth, and/or other properties. Multiple compute cores can access the various memory pools through a shared buffer with a limited number of entries. Due to the non-uniform memory access (NUMA) latency of the memory pools, requests accessing higher-latency pools may come to dominate the shared buffer over time. For example, if pool A is relatively "fast" and has a lower access latency (e.g., 50 ns) and pool B is relatively "slow" and has a higher access latency (e.g., 500 ns), then requests to access pool A are, on average, serviced ten times faster than requests to access pool B. As pool A access requests are quickly serviced and removed from the shared buffer, they can be replaced with slower requests to pool B. In this case, the shared buffer may eventually fill with requests to access pool B. Therefore, the process generating the pool A requests may experience a negative impact on quality of service (QoS) and/or performance.

BRIEF DESCRIPTION OF THE DRAWINGS

The various advantages of the embodiments will become apparent to those skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which: FIG. 1 is a block diagram of an example of a computing system in accordance with an embodiment; FIG. 2 is a flow diagram of an example of a method of operating a caching proxy device in accordance with an embodiment; FIG. 3 is a block diagram of an example of a credit policy enforcement scenario in accordance with an embodiment; FIG. 4 is a block diagram of an example of a throttling scenario in accordance with an embodiment; FIG. 5 is a block diagram of an example of a caching proxy in accordance with an embodiment; and FIG. 6 is a block diagram of an example of a latency-aware computing system in accordance with an embodiment.

DETAILED DESCRIPTION

Recent developments in memory architecture can provide non-volatile memory (NVM) for storing volatile data that is considered to be stored in volatile memory. For example, such volatile data may include data used by an application or operating system that is regarded by the application or operating system as being stored in volatile memory, and that is not kept in memory in a volatile state after a system reset. Examples of NVMs may include, for example, block addressable memory devices, such as NAND or NOR technology, phase change memory (PCM), three-dimensional cross-point memory, or other byte addressable non-volatile memory devices,
memory devices using chalcogenide phase change materials (e.g., chalcogenide glass), resistive memories, nanowire memories, ferroelectric transistor random access memories (FeTRAM), flash memories such as solid state disk (SSD) NAND or NOR, multi-threshold level NAND flash, NOR flash memory, magnetoresistive random access memory (MRAM) including memristor technology, spin transfer torque (STT)-MRAM, or any combination of the above, or other memory. These memory architectures are especially useful in data center environments such as high performance computing (HPC) systems, big data systems, and other architectures involving relatively high bandwidth data transfers.

Turning now to FIG. 1, a latency-aware computing system 10 is illustrated in which a memory architecture includes a plurality of domains (e.g., pools, levels) associated with non-uniform access latencies. The computing system 10 may generally be part of an electronic device/platform having computing functionality (e.g., data center, server, personal digital assistant/PDA, laptop, tablet), communications functionality (e.g., smartphone), imaging functionality, media playing functionality (e.g., smart TV), wearable functionality (e.g., watch, glasses, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), or the like, or any combination thereof. In the illustrated example, node 12 ("node 0") includes a slot 14 ("slot 0", e.g., including a semiconductor die/chip having a host processor, multiple cores, and one or more caching agents, not shown), the slot 14 being communicatively coupled via a local link 18 (e.g., Ultra Path Interconnect/UPI) to a slot 16 ("slot 1", e.g., including a semiconductor die/chip having a host processor, multiple cores, and one or more caching agents, not shown).

Similarly, node 20 ("node 2") may include a slot 22 ("slot 0", e.g., including a semiconductor die/chip having a host processor, multiple cores, and one or more caching agents, not shown), the slot 22 being communicatively coupled via a local link 26 (e.g., UPI) to a slot 24 ("slot 1", e.g., including a semiconductor die/chip having a host processor, multiple cores, and one or more caching agents, not shown). For example, each slot 14, 16, 22, 24 may be coupled to a local memory, such as a volatile memory. In this regard, the caching agents of nodes 12, 20 may respectively use shared queues, such as buffers, super queues (SQs), tables of requests (TORs), and the like, to manage local (e.g., on-die) requests to access the
local and remote storage of the computing system 10.

Exemplary volatile memories include dynamic volatile memory, including DRAM (Dynamic Random Access Memory), or some variant such as synchronous DRAM (SDRAM). The memory subsystems described herein are compatible with a number of memory technologies, such as DDR4 (DDR version 4, initial specification published by JEDEC in September 2012), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (LPDDR version 5, currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications.

For example, node 28 ("node 3") may include a slot 30 ("slot 0", e.g., including a semiconductor die/chip having a host processor, multiple cores, and one or more caching agents, not shown), the slot 30 being communicatively coupled to a DDR standard compatible memory 32 and an HBM standard compatible memory 34. The illustrated slot 30 is also communicatively coupled via a local link 38 (e.g., UPI) to a slot 36 ("slot 1", e.g., including a semiconductor die/chip having a host processor, multiple cores, and one or more caching agents, not shown). The slot 36 may in turn be locally coupled to a DDR memory 40 and a high bandwidth memory 42. In addition, another node 44 ("node 1") may include an NVM server 46 configured to store volatile data, wherein the illustrated NVM server 46 is coupled to a plurality of NVM nodes 48 (e.g., "NVM node 0" to "NVM node n"). Node 12 may be communicatively coupled to a switch 50 via an interface 52 (e.g., Host Fabric Interface/HFI) and a link 54. Similarly, node 20 may be communicatively coupled to the switch 50 via an interface 56 (e.g., HFI) and a link 58, node 28 may be communicatively coupled to the switch 50 via an interface 60 (e.g., HFI) and a link 62, and node 44 may be communicatively coupled to the switch 50 via an interface 64 (e.g., HFI) and a link 66. The memory architecture of the illustrated system 10 can be considered a non-uniform memory access (NUMA) architecture to the extent that different domains may be accessed at different speeds, depending on the location of the core requesting access and the location of the memory being accessed.

For example, a core of the slot 30 in node 28 may observe and/or encounter at least four different latency domains: (1) the local DDR memory 32; (2) the local high bandwidth memory 34; (3) the memory exposed by the slot 36; and (4) the memory exposed by the NVM server 46 on node 44, the memory exposed by node 12, and the memory exposed by node 20. Each of the latency domains encountered by the slot 30 can be considered a "home" that exhibits different behavior in terms of the latency and performance impact of retrieving data (e.g., cache lines) from the domain. Indeed, in addition to remote access latency, performance can be affected by coherence management (e.g., snooping). As will be discussed in more detail, an adaptive caching agent can perform load balancing and fairness operations to control the rate at which threads running on the cores issue requests to the different latency domains.
Thus, even if the shared queue in the slot 30 is dominated and/or over-represented by remote access requests (e.g., to the NVM nodes 48), requests by threads running on the slot 30 to access the DDR memory 32 may not experience degraded QoS or performance.

FIG. 2 illustrates a method 70 of operating an adaptive caching proxy device. The method 70 may generally be implemented in a computing system node such as, for example, one or more of the nodes 12, 20, 28, 44 (FIG. 1), already discussed. More specifically, the method 70 may be implemented as one or more of the following: a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc.; configurable logic such as a programmable logic array (PLA), field programmable gate array (FPGA), or complex programmable logic device (CPLD); fixed-functionality logic hardware using circuit technology such as application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology; or any combination thereof. For example, computer program code for performing the operations shown in the method 70 can be written in any combination of one or more programming languages, including an object-oriented programming language such as JAVA, SMALLTALK, C++, or the like, and a conventional procedural programming language, such as the "C" programming language or similar programming languages.

The illustrated processing block 72 provides for detecting an issued request in a queue shared by a plurality of domains in a memory architecture, wherein the plurality of domains are associated with non-uniform access latencies. A destination domain associated with the issued request may be determined at block 74. Block 74 may include applying a set of System Address Decoder (SAD) rules (e.g., decoders configured using precompiled code/p-code) to the issued request in a prioritized order. The most appropriate decoder rule may correspond to the destination (e.g., home) latency domain/pool/level of the issued request. In this regard, in addition to the target node, address range, and other decoder fields, the decoder may automatically identify the memory level as, for example, an integer value (e.g., 0 to N, where 0 is the fastest access latency and N is the slowest). The memory level address definitions may be stored to any suitable memory location (e.g., DRAM address space, memory mapped input/output/MMIO address space, etc.).

Block 76 may determine whether the target domain satisfies an over-representation condition. Block 76 may generally include identifying the target domain (e.g., based on a decoding result indicating the associated target domain), transmitting the decoding result to the core that initiated the issued request, and determining whether the target domain has reached some balance or fairness threshold relative to the other latency domains. For example, if the shared queue holds up to twelve entries and each of four different latency domains is allocated three entries, block 76 may determine whether the shared queue already contains three requests to access the target domain. If so, illustrated block 78 prevents a first set of additional requests from being issued to the queue, where the first set of additional requests are also associated with the target domain; a minimal sketch of such a fairness check appears below.
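The following sketch illustrates the per-domain occupancy check performed at block 76, assuming the fixed-quota example above (a twelve-entry shared queue split evenly across four domains). The structure and function names are illustrative assumptions only, not the patent's implementation.

#include <stdbool.h>

#define NUM_DOMAINS 4
#define QUEUE_CAPACITY 12
#define PER_DOMAIN_QUOTA (QUEUE_CAPACITY / NUM_DOMAINS) /* 3 entries */

/* Occupancy of the shared queue, tracked per latency domain. */
struct shared_queue {
    int occupancy[NUM_DOMAINS];
};

/* Block 76: the target domain is over-represented once it has
 * consumed its share of the shared queue. */
bool over_represented(const struct shared_queue *q, int target_domain)
{
    return q->occupancy[target_domain] >= PER_DOMAIN_QUOTA;
}

/* Blocks 78/80: admit a request only if its destination domain is
 * not over-represented; requests to the remaining domains may still
 * be issued. Returns true if the request was enqueued. */
bool try_issue(struct shared_queue *q, int target_domain)
{
    if (over_represented(q, target_domain))
        return false;               /* hold back this set of requests */
    q->occupancy[target_domain]++;  /* entry retired upon completion  */
    return true;
}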
As will be discussed in greater detail, block 78 may include implementing a credit policy with respect to the target domain in the core. Thus, the core may predict that the first set of additional requests are associated with the target domain and use the prediction to enforce the credit policy. Block 78 may also include transmitting a throttle signal to the core, wherein the throttle signal indicates the over-representation condition (e.g., "no space left in the queue for the target domain"). Additionally, illustrated block 80 permits a second set of additional requests to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, wherein the second set of additional requests is associated with one or more remaining domains in the plurality of domains.

FIG. 3 shows an example in which a caching agent 82 is communicatively coupled to a memory architecture 84 that includes a plurality of latency domains ("domain 1" to "domain N") and to multiple cores 86 (86a, 86b) that are configured to implement a credit policy 88 for each latency domain. The credit policy 88 may be programmable. In the illustrated example, the memory architecture 84 shares a queue 87, and a first core 86a sends a first request 90 to the caching agent 82, which may reside on the same semiconductor die as the first core 86a, but at a different location on the die (for example, in the uncore region). The SAD may also be located near the caching agent 82 in the uncore region. The first core 86a may use a memory type predictor 92 to predict the target domain in the memory architecture 84 for the first request 90. In this regard, the predictor 92 may maintain a prediction table that contains the last address range accessed (on a per-latency-domain and per-caching-agent basis). An example prediction table is shown in Table I below.

Cache Proxy ID   Delay Domain   Last Visited Range   Granularity Mask
0                0              120000000            040000000
0                1              700000000            100000000
...              ...            ...                  ...
M                N              F80000000            040000000

Table I

In the illustrated example, the size of the range is fixed for each domain and is specified using a bit mask, where the granularity can be configurable per domain. Therefore, assuming that the granularity of domain 1 is, for example, 4 GB, the last address (for example, 0x78C9657FA) sent to cache proxy 0 and targeting domain 1 belongs to the address range [0x700000000, 0x700000000+4GB]. Therefore, in order to predict the domain of a request targeting address @X and a given caching proxy (for example, "CAm" in the expression below), the table, serving as a content addressable memory structure, is accessed by applying the corresponding granularity mask to @X with a bitwise AND ("&") operation:

PredictedDomain = DomainPredictionTable[CAm][@X & granularity_mask_domain]

If the PredictedDomain is NULL (meaning there is no domain match), the PredictedDomain can be automatically assigned a zero value (e.g., assuming zero corresponds to the fastest and/or closest domain). In short, an application accessing a latency domain is likely to keep operating within an address range of that domain. By appropriately specifying the granularity, an accurate prediction of the target domain associated with an access request, such as the first request 90, can be achieved. A benefit of such a prediction scheme is that it may potentially yield a high hit rate and may be implemented using a Content Addressable Memory (CAM) structure that provides results within a few cycles; a sketch of such a lookup follows below.
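To make the lookup concrete, here is a minimal sketch of the masked prediction-table access described above. The table layout, sizes, and names ("pred_entry", "predict_domain") are illustrative assumptions; a hardware implementation would use a CAM rather than a linear scan.

#include <stdint.h>

#define MAX_ENTRIES 16    /* illustrative table size */

/* One row of the prediction table: the last range accessed for a
 * given (caching agent, latency domain) pair. */
struct pred_entry {
    int      caching_agent;    /* "CAm"                               */
    int      domain;           /* latency domain identifier           */
    uint64_t last_range_base;  /* last visited range, mask-aligned    */
    uint64_t granularity_mask; /* selects the range bits of an addr   */
};

/* PredictedDomain = table[CAm][@X & granularity_mask_domain];
 * an unmatched lookup defaults to domain 0 (fastest/closest). */
int predict_domain(const struct pred_entry *table, int n,
                   int caching_agent, uint64_t addr)
{
    for (int i = 0; i < n; i++) {
        if (table[i].caching_agent == caching_agent &&
            (addr & table[i].granularity_mask) == table[i].last_range_base)
            return table[i].domain;
    }
    return 0; /* NULL match -> assume the fastest domain */
}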
The first core 86a may also determine whether the predicted target domain complies with the credit policy 88 and, if so, speculatively deduct from the credits assigned to the predicted target domain. Thus, if the first request 90 is predicted to access the illustrated domain 3, the first core 86a may deduct a single credit from the credits currently available for domain 3. The first request 90 may thus include a request payload and the predicted target domain. Upon receiving the first request 90, the caching agent 82 can use the SAD to determine the actual target domain associated with the first request 90. If the prediction is correct, the caching agent 82 can return an acknowledgment 92 (ACK) including the requested data, a status message, and the actual target domain. Once the first request 90 is completed, the first core 86a can update the prediction table and the credit policy 88 (e.g., increment the domain 3 credits). On the other hand, if the prediction is incorrect, the caching proxy 82 can return a non-acknowledgement (NACK) and an indication of the correct target domain. In this case, the first core 86a may update the prediction table and resubmit the first request 90.

The first core 86a may also predict that a second request 96 is associated with a target domain that has no remaining credits (e.g., domain 1 or domain N in the illustrated example). In this case, the first core 86a can enforce the credit policy 88 by blocking or otherwise rejecting the second request 96. However, a third request 97 may be associated with a remaining domain such as, for example, domain 2 or domain 3. In this case, the first core 86a may issue the third request 97 to the caching proxy 82 and receive an ACK 99 from the caching proxy 82.

Turning now to FIG. 4, an example is shown in which a caching agent 98 is communicatively coupled to the memory architecture 84 and to a plurality of cores 100 (100a, 100b) that are configured to respond to programmable throttle signals issued by the caching proxy 98. In the illustrated example, the memory architecture 84 shares a queue 104 and a first core 100a sends a first request 102 to the caching proxy 98. More specifically, if the caching proxy 98 determines that an over-representation condition has occurred because the first request 102 is associated with, for example, domain 1 and the domain 1 allocation of the shared queue 104 is full, the caching proxy 98 may return an ACK 108 and generate a throttle signal 106. Similarly, a second request 110 accessing, for example, domain N may cause the caching proxy 98 to return an ACK 112 and generate a throttle signal 114. Thus, if a subsequent request 116 to access domain 1 is encountered in the first core 100a, the throttle signal 106 may cause the core 100a to block the subsequent request 116 and prevent it from being issued to the shared queue 104. However, another request 118 to access domain 2 can still be issued to the caching proxy 98 because the caching proxy 98 has not yet generated a throttle signal for domain 2. In this case, the caching proxy 98 may generate an ACK 120 in response to the successful servicing and/or completion of the other request 118.
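A minimal sketch of the two core-side gating mechanisms just described follows: speculative per-domain credit deduction (the FIG. 3 scenario) combined with throttle-signal masking (the FIG. 4 scenario). All names and the initial credit count are illustrative assumptions, not the patent's implementation.

#include <stdbool.h>

#define NUM_DOMAINS 4
#define INITIAL_CREDITS 3   /* illustrative per-domain credit count */

struct core_gate {
    int  credits[NUM_DOMAINS];   /* credit policy, one pool per domain */
    bool throttled[NUM_DOMAINS]; /* set when a throttle signal arrives */
};

/* Issue path: predict the target domain, then speculatively deduct a
 * credit unless the domain is throttled or out of credits. */
bool try_issue_request(struct core_gate *g, int predicted_domain)
{
    if (g->throttled[predicted_domain] || g->credits[predicted_domain] == 0)
        return false;                /* block/reject the request */
    g->credits[predicted_domain]--;  /* speculative deduction    */
    return true;
}

/* Completion path: on ACK the credit is returned on completion; on
 * NACK the credit is returned to the mispredicted domain and the
 * request is resubmitted against the correct domain reported by the
 * caching agent. */
void on_response(struct core_gate *g, int predicted_domain,
                 bool ack, int actual_domain)
{
    g->credits[predicted_domain]++;  /* undo the speculative deduction */
    if (!ack) {
        /* update the prediction table (not shown), then retry */
        (void)try_issue_request(g, actual_domain);
    }
}

/* Throttle signals from the caching agent set/clear the per-domain mask. */
void on_throttle(struct core_gate *g, int domain, bool on)
{
    g->throttled[domain] = on;
}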
Turning now to FIG. 5, a caching proxy device 122 (122a-122c) is shown. The apparatus 122, which may include logic instructions, configurable logic, fixed-functionality logic hardware, or the like, or any combination thereof, may generally implement one or more aspects of the method 70 (FIG. 2), already discussed. In one example, the device 122 may readily be substituted for the caching agent 82 (FIG. 3) and/or the caching agent 98 (FIG. 4). More specifically, the illustrated caching proxy device 122 includes a queue monitor 122a to detect issued requests in a queue shared by a plurality of domains in a memory architecture, where the plurality of domains are associated with non-uniform access latencies. Additionally, a system address decoder 122b can determine the target domain associated with an issued request.

The illustrated apparatus 122 further includes a request arbiter 122c for preventing a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain. The request arbiter 122c may also allow a second set of additional requests to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains. In one example, the request arbiter 122c includes a throttle component 124 to send a throttle signal to a core, wherein the throttle signal indicates the over-representation condition. Alternatively, the core may implement a credit policy with respect to the target domain to prevent the first set of additional requests from being issued. Additionally, the system address decoder 122b may send the decoding result to the core that initiated the issued request, where the decoding result indicates that the issued request is associated with the target domain.

FIG. 6 illustrates a latency-aware computing system 126. The computing system 126 may generally be part of an electronic device/platform having computing functionality (e.g., data center, server, personal digital assistant/PDA, laptop, tablet), communications functionality (e.g., smartphone), imaging functionality, media playing functionality (e.g., smart TV), wearable functionality (e.g., watch, glasses, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), or the like, or any combination thereof. In the illustrated example, the system 126 includes a power source 128 for powering the system 126 and a processor 130 having an integrated memory controller (IMC) 132 coupled to a main memory 134 (e.g., volatile "near" memory). The IMC 132 may also be coupled to another memory module 136 (e.g., a dual inline memory module/DIMM) that includes a non-volatile memory structure such as, for example, NVM 138. The NVM 138 may include a "far" memory 140, which may also be used to store volatile data. Thus, the far memory 140 and the main memory 134 can be used as a two-level memory (2LM) structure, where the main memory 134 generally serves as a low-latency and high-bandwidth cache for the far memory 140.

The NVM 138 may include any of the examples of non-volatile memory devices listed above. As already noted, the memory module 136 may include volatile memory, for example DRAM configured as one or more memory modules, such as, for example, DIMMs, small outline DIMMs (SODIMMs), and the like.
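As an aside, the two-level memory arrangement described above can be summarized in a short sketch: near memory acts as a cache in front of the larger far memory. This is a minimal illustrative model only; the direct-mapped organization, sizes, and names are assumptions, not details from the patent.

#include <stdbool.h>
#include <stdint.h>

#define NEAR_LINES 1024   /* illustrative near-memory cache size */

/* Direct-mapped model of 2LM: DRAM "near" memory caches lines of the
 * larger NVM "far" memory. */
struct two_level_memory {
    uint64_t tag[NEAR_LINES];
    bool     valid[NEAR_LINES];
};

/* Returns true on a near-memory hit (low latency); on a miss the line
 * is fetched from far memory (high latency) and installed. */
bool access_2lm(struct two_level_memory *m, uint64_t line_addr)
{
    unsigned idx = (unsigned)(line_addr % NEAR_LINES);
    if (m->valid[idx] && m->tag[idx] == line_addr)
        return true;          /* served from near memory */
    m->tag[idx] = line_addr;  /* fill from far memory    */
    m->valid[idx] = true;
    return false;             /* far-memory latency paid */
}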
Examples of volatile memory include dynamic volatile memory such as DRAM (dynamic random access memory) or some variant such as synchronous DRAM (SDRAM). The memory subsystems described herein may be compatible with a number of memory technologies, such as DDR4 (DDR version 4, initial specification published by JEDEC in September 2012), LPDDR4 (LOW POWER DOUBLE DATA RATE (LPDDR) version 4, JESD209-4, originally published by JEDEC in August 2014), WIO2 (Wide I/O 2 (WideIO2), JESD229-2, originally published by JEDEC in August 2014), HBM (HIGH BANDWIDTH MEMORY DRAM, JESD235, originally published by JEDEC in October 2013), DDR5 (DDR version 5, currently in discussion by JEDEC), LPDDR5 (LPDDR version 5, currently in discussion by JEDEC), HBM2 (HBM version 2, currently in discussion by JEDEC), and/or others, and technologies based on derivatives or extensions of such specifications.

The illustrated system 126 also includes an input output (IO) module 142 implemented together with the processor 130 as a system on chip (SoC) on a semiconductor die 144, wherein the IO module 142 functions as a host device and may communicate with, for example, a display 146 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a network controller 148, and mass storage 150 (e.g., hard disk drive/HDD, optical disk, flash memory, etc.). The memory module 136 may include an NVM controller 152 having logic 154 that is connected to the far memory 140 via an internal bus 156 or other suitable interface. The illustrated logic 154 may implement one or more aspects of the method 70 (FIG. 2), already discussed. The logic 154 may alternatively be implemented elsewhere in the system 80.

Additional Notes and Examples:

Example 1 may include a latency-aware computing system comprising a memory architecture including a plurality of domains, wherein at least two of the domains have different associated memory access latencies, a switch to interconnect two or more of the plurality of domains, a queue shared by the plurality of domains, a queue monitor to detect an issued request in the queue, a system address decoder to determine a target domain associated with the issued request, and a request arbiter to prevent a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.

Example 2 may include the system of example 1, wherein the request arbiter is to permit a second set of additional requests to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, and wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.

Example 3 may include the system of example 1, further including a core to enforce a credit policy with respect to the target domain to prevent the first set of additional requests from being issued.

Example 4 may include the system of example 1, further including a throttle component to send a throttle signal to a core, wherein the throttle signal indicates the over-representation condition.

Example 5 may include the system of example 1, wherein the system address decoder is to send a decode result to a core that initiated the issued request, and wherein the decode result indicates that the issued request is associated with the target domain.
Example 6 may include the system of any one of examples 1 to 5, further including one or more cores to predict that the first set of additional requests are associated with the target domain.

Example 7 may include the system of example 1, further including one or more of: a processor communicatively coupled to the memory architecture, a display communicatively coupled to the memory architecture, a network interface communicatively coupled to the processor, or a battery communicatively coupled to the processor.

Example 8 may include a caching agent apparatus comprising a queue monitor to detect an issued request in a queue shared by a plurality of domains in a memory architecture, wherein at least two of the domains have different associated memory access latencies, a system address decoder to determine a target domain associated with the issued request, and a request arbiter to prevent a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.

Example 9 may include the apparatus of example 8, wherein the request arbiter is to permit a second set of additional requests to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, and wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.

Example 10 may include the apparatus of example 8, further including a core to enforce a credit policy with respect to the target domain to prevent the first set of additional requests from being issued.

Example 11 may include the apparatus of example 8, further including a throttle component to send a throttle signal to a core, wherein the throttle signal indicates the over-representation condition.

Example 12 may include the apparatus of example 8, wherein the system address decoder is to send a decode result to a core that initiated the issued request, and wherein the decode result indicates that the issued request is associated with the target domain.

Example 13 may include the apparatus of any one of examples 8 to 12, further including one or more cores to predict that the first set of additional requests are associated with the target domain.

Example 14 may include a method of operating a caching agent apparatus, comprising detecting an issued request in a queue shared by a plurality of domains in a memory architecture, wherein at least two of the domains have different associated memory access latencies, determining a target domain associated with the issued request, and preventing a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.

Example 15 may include the method of example 14, further including permitting a second set of additional requests to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.

Example 16 may include the method of example 14, wherein preventing the first set of additional requests from being issued includes enforcing a credit policy with respect to the target domain in a core.
Example 17 may include the method of example 14, wherein preventing the first set of additional requests from being issued includes sending a throttle signal to a core, wherein the throttle signal indicates the over-representation condition.

Example 18 may include the method of example 14, further including sending a decode result to a core that initiated the issued request, wherein the decode result indicates that the issued request is associated with the target domain.

Example 19 may include the method of any one of examples 14 to 18, further including predicting, at one or more cores, that the first set of additional requests are associated with the target domain.

Example 20 may include at least one computer readable storage medium comprising a set of instructions which, when executed by a computing device, cause the computing device to detect an issued request in a queue shared by a plurality of domains in a memory architecture, wherein at least two of the domains have different associated memory access latencies, determine a target domain associated with the issued request, and prevent a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.

Example 21 may include the at least one computer readable storage medium of example 20, wherein the instructions, when executed, cause the computing device to permit a second set of additional requests to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, and wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.

Example 22 may include the at least one computer readable storage medium of example 20, wherein the instructions, when executed, cause the computing device to enforce a credit policy with respect to the target domain in a core to prevent the first set of additional requests from being issued.

Example 23 may include the at least one computer readable storage medium of example 20, wherein the instructions, when executed, cause the computing device to send a throttle signal to a core, and wherein the throttle signal indicates the over-representation condition.

Example 24 may include the at least one computer readable storage medium of example 20, wherein the instructions, when executed, cause the computing device to send a decode result to a core that initiated the issued request, and wherein the decode result indicates that the issued request is associated with the target domain.

Example 25 may include the at least one computer readable storage medium of any one of examples 20 to 24, wherein the instructions, when executed, cause the computing device to predict, at one or more cores, that the first set of additional requests are associated with the target domain.

Example 26 may include a caching agent apparatus comprising means for detecting an issued request in a queue shared by a plurality of domains in a memory architecture, wherein at least two of the domains have different associated memory access latencies, means for determining a target domain associated with the issued request, and means for preventing a first set of additional requests from being issued to the queue if the issued request satisfies an over-representation condition with respect to the target domain and the first set of additional requests are associated with the target domain.
Example 27 may include the apparatus of example 26, further including means for permitting a second set of additional requests to be issued to the queue while the first set of additional requests are prevented from being issued to the queue, wherein the second set of additional requests are associated with one or more remaining domains in the plurality of domains.

Example 28 may include the apparatus of example 26, wherein the means for preventing the first set of additional requests from being issued includes means for enforcing a credit policy with respect to the target domain in a core.

Example 29 may include the apparatus of example 26, wherein the means for preventing the first set of additional requests from being issued includes means for sending a throttle signal to a core, wherein the throttle signal indicates the over-representation condition.

Example 30 may include the apparatus of example 26, further including means for sending a decode result to a core that initiated the issued request, wherein the decode result indicates that the issued request is associated with the target domain.

Example 31 may include the apparatus of any one of examples 26 to 30, further including means for predicting, at one or more cores, that the first set of additional requests are associated with the target domain.

The techniques described herein may therefore provide new hardware and software interfaces that enable a fair and flexible provisioning of memory bandwidth across the multiple domains of NUMA systems. Accordingly, emerging memory and fabric technologies that provide access to remote memory via memory semantics may be successfully employed. Moreover, the techniques make it possible to avoid the performance degradations associated with bandwidth throttling.

Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments.
Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections. In addition, the terms "first", "second", and the like may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification, and the following claims.
Footprints (340-343), or resource allocations, of waves within resources that are shared by processor cores (111-114) in a multithreaded processor (105) are measured concurrently with the waves executing on the processor cores. The footprints are averaged over a time interval. A number of waves are spawned and dispatched for execution in the multithreaded processor based on the average footprint. In some cases, the waves are spawned at a rate that is determined based on the average value of the footprints of the waves within the resources. The rate of spawning waves is modified in response to a change in the average value of the footprints of the waves within the resources.
1. A method of controlling wave creation, comprising:
measuring, concurrently with waves executing on processor cores in a multithreaded processor, resource allocations of the waves within resources shared by the processor cores;
averaging, at the multithreaded processor, the resource allocations over a time interval; and
deriving, at the multithreaded processor, a number of waves based on an average resource allocation and dispatching the derived waves for execution in the multithreaded processor,
wherein averaging the resource allocations includes generating a plurality of average resource allocations for a plurality of subsets of the waves, and wherein deriving the number of waves includes deriving a number of waves determined based on the plurality of average resource allocations.

2. The method of claim 1, wherein measuring the resource allocations of the waves comprises measuring the resource allocation of a wave in response to at least one of creating the wave, allocating resources to the wave, or de-allocating resources from the wave, or at time intervals corresponding to a predetermined number of execution cycles.

3. The method of claim 1, wherein measuring the resource allocations during the time interval comprises measuring a maximum resource allocation of a wave while the wave is executing on the processor cores.

4. The method of claim 1, wherein measuring the resource allocations of the waves within the resources comprises measuring the resource allocations of the waves within the resources during a tail time interval relative to a reference time.

5. The method of claim 4, wherein averaging the resource allocations over the time interval comprises generating a moving average of the resource allocations over the tail time interval.

6. The method of claim 5, wherein deriving the number of waves comprises deriving the number of waves after the reference time based on the moving average of the resource allocations within the tail time interval.

7. The method of claim 6, further comprising:
modifying the number of waves allocated for execution in response to a change in the moving average relative to a previous moving average in a previous tail time interval.

8. The method of claim 1, wherein the plurality of subsets of the waves includes at least one of: a subset of the waves that include single instruction multiple data operations, a subset of the waves that have completed execution, and a subset of the waves corresponding to a shader type executing the waves.

9. A processing system comprising:
a plurality of processor cores; and
a controller configured to derive a number of waves for execution by the plurality of processor cores, wherein the number of derived waves is determined based on an average value of resource allocations of the waves measured within resources shared by the plurality of processor cores while the waves execute on the plurality of processor cores,
wherein the average value of the measured resource allocations comprises a plurality of averages of the measured resource allocations for subsets of the waves, and wherein the controller is configured to derive the number of waves determined based on the plurality of averages of the measured resource allocations.
10. The processing system of claim 9, wherein the resource allocation of a wave is measured in response to at least one of creating the wave, allocating resources to the wave, or de-allocating resources from the wave, or at time intervals corresponding to a predetermined number of execution cycles.

11. The processing system of claim 9, wherein the measured resource allocation is a maximum resource allocation of a wave measured while the wave is executing on the plurality of processor cores.

12. The processing system of claim 9, wherein the resource allocations of the waves are measured during a tail time interval relative to a reference time.

13. The processing system of claim 12, wherein each average of the plurality of averages of the measured resource allocations for the plurality of subsets of the waves is a moving average of the measured resource allocations within the tail time interval.

14. The processing system of claim 13, wherein the controller is configured to derive the number of waves after the reference time based on the moving average of the measured resource allocations within the tail time interval.

15. The processing system of claim 14, wherein the controller is configured to modify the number of waves dispatched for execution in response to a change in the moving average relative to a previous moving average in a previous tail time interval.

16. The processing system of claim 9, wherein the plurality of subsets of the waves comprises at least one of: a subset of the waves that include single instruction multiple data operations, a subset of the waves that have completed execution, and a subset of the waves corresponding to a shader type executing the waves.

17. A method of controlling wave creation, comprising:
deriving waves, at a multithreaded processor, at a rate determined based on an average value of resource allocations of the waves within resources shared by the waves while the waves execute on the multithreaded processor;
dispatching the derived waves for execution by processor cores in the multithreaded processor; and
modifying, at the multithreaded processor, the rate of deriving the waves in response to a change in the average value of the resource allocations of the waves,
wherein the average value of the resource allocations of the waves within the resources shared by the waves comprises a plurality of averages of measured resource allocations for subsets of the waves, and wherein the rate is determined based on the plurality of averages of the measured resource allocations.

18. The method of claim 17, wherein modifying the rate of deriving the waves comprises increasing the rate of deriving the waves in response to a decrease in the averages of the plurality of averages of the resource allocations for the plurality of subsets of the waves, and decreasing the rate of deriving the waves in response to an increase in the averages of the plurality of averages of the resource allocations for the plurality of subsets of the waves.
Wave Creation Control with Dynamic Resource Allocation

Background

Graphics processing units (GPUs) and other multithreaded processing units typically implement multiple processing units (also referred to as processor cores or compute units) that concurrently execute multiple instances of a single program on multiple data sets. The instances are referred to as threads or waves. Waves are created (or forked) and then dispatched to the multithreaded processing units. A processing unit can include hundreds of processing elements, so that thousands of waves are concurrently executing programs in the processing unit. The processing elements in a GPU typically process three-dimensional (3-D) graphics using a graphics pipeline formed of a sequence of programmable shaders and fixed-function hardware blocks. For example, a 3-D model of an object that is visible in a frame can be represented by a set of primitives such as triangles, other polygons, or patches, which are processed in the graphics pipeline to produce values of pixels for display to a user. In a multithreaded GPU, the waves execute different instances of a shader to perform calculations on different primitives concurrently or in parallel. Waves that are executing concurrently in a multithreaded processing unit share some of the resources of the processing unit. Shared resources include vector general-purpose registers (VGPRs) that store state information for the waves, local data shares (LDS) that store data for the waves, bandwidth available to move information between a local cache hierarchy and the memory, and the like.

Brief Description of the Drawings

The present disclosure may be better understood, and its numerous features and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.

FIG. 1 is a block diagram of a processing system that includes a graphics processing unit (GPU) for creating visual images intended for output to a display, according to some embodiments.

FIG. 2 depicts a graphics pipeline that is capable of processing high-order geometry primitives to generate rasterized images of three-dimensional (3-D) scenes, according to some embodiments.

FIG. 3 is a block diagram of a portion of a processing system that supports wave creation based on dynamic allocation of shared resources, according to some embodiments.

FIG. 4 is a graph of a measured footprint of a wave in a shared resource over time, according to some embodiments.

FIG. 5 includes a graph of an average footprint of waves within a shared resource and a graph of a number of in-flight waves in a multithreaded processing unit, according to some embodiments.

FIG. 6 is a flow diagram of a method for controlling wave creation based on an average value of footprints of waves executing in a multithreaded processing unit, according to some embodiments.

FIG. 7 is a graph of measured footprints of two different waves in a shared resource over time, according to some embodiments.

Detailed Description

The number of waves that a multithreaded processing unit can execute concurrently is limited by the availability of the shared resources. Conventional wave derivation techniques assume that all waves require the same allocation of resources (which is also referred to herein as the footprint of the wave) and further require that the footprint of the wave remain constant while the wave executes. The number of waves that are allocated for concurrent execution is determined by comparing the assumed static footprint to the total available resources.
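As a concrete illustration of the conventional scheme just described, the following is a minimal sketch, not taken from the embodiments: the function and parameter names are assumptions for the example, and VGPRs are chosen as the bottleneck resource purely for illustration.

```cpp
#include <cstdint>

// Conventional static policy: every wave is assumed to occupy the same fixed
// footprint, so the concurrency limit is simply the total resource divided by
// that assumed per-wave footprint.
std::uint32_t static_wave_limit(std::uint64_t total_vgprs,
                                std::uint64_t assumed_vgprs_per_wave) {
    if (assumed_vgprs_per_wave == 0) return 0;  // guard against degenerate input
    // Overestimating the per-wave footprint leaves resources idle;
    // underestimating it causes contention and serialization, as discussed next.
    return static_cast<std::uint32_t>(total_vgprs / assumed_vgprs_per_wave);
}
```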
The actual footprint of a wave frequently differs from the assumed static footprint, and the footprint of a wave frequently changes while the wave executes, which leads to inefficient allocation of resources to the waves. For example, if the static footprint of each wave is assumed to have a maximum value that is greater than or equal to the actual footprint of any individual wave, concurrent execution of the waves on a multithreaded GPU consumes less than all of the available resources. As another example, if the footprint of each wave is assumed to have a minimum value that is less than or equal to the footprint of the waves during execution, execution of the waves becomes serialized as the waves compete for the same resources, which reduces or eliminates the degree of latency hiding achieved by executing the waves in parallel. Serialization occurs when the memory bandwidth used by a wave exceeds the available memory cell bandwidth divided by the number of waves in execution.

Utilization of shared resources in a multithreaded processor (such as a GPU) is improved, while serialization due to contention for the shared resources is avoided, by measuring the footprints of waves within a shared resource concurrently with execution of the waves on the multithreaded processor. The measured footprints of the waves are averaged over a time interval to determine an average footprint of the waves. A number of waves that are subsequently dispatched for execution in the multithreaded processor (or a rate of deriving waves for execution) is then determined based on the average footprint. For example, the number of waves dispatched for concurrent execution on the multithreaded processor can be set equal to the available shared resources divided by the average footprint. In some embodiments, the average footprint is determined using a moving average, such as an exponentially weighted moving average of the footprints of the waves that have been dispatched for concurrent execution on the multithreaded processor. The footprints of the waves can be measured when a wave is created, when shared resources are allocated to a wave, when shared resources are deallocated, during each processor cycle, after a predetermined number of cycles have completed, or at any other time or time interval. A single average footprint can be determined by averaging over all of the dispatched waves, or multiple average footprints can be determined for subsets of the dispatched waves. The subsets are determined based on shared characteristics of the dispatched waves, such as waves running on a single instruction multiple data (SIMD) unit, waves that have completed execution, waves that are executing different types of shaders, and the like.

FIG. 1 is a block diagram of a processing system 100 that includes a graphics processing unit (GPU) 105 for generating visual images intended for output to a display 110, according to some embodiments. The GPU 105 is a multithreaded processor that includes a plurality of processor cores 111, 112, 113, 114, which are collectively referred to herein as "the processor cores 111-114." The processor cores 111-114 are configured to execute instructions concurrently or in parallel. Although four processor cores 111-114 are shown in FIG. 1 in the interest of clarity, some embodiments of the GPU 105 include tens or hundreds or more processor cores. Processing resources of the processor cores 111-114 are used to implement a graphics pipeline that renders images of objects for presentation on the display 110.
Some embodiments of the processor cores 111-114 concurrently execute multiple instances (or waves) of a single program on multiple data sets. Wave derivation control logic in the GPU 105 derives the waves for execution on the processor cores 111-114 based on dynamically determined wave footprints, as discussed herein.

The processing system 100 includes a memory 115. Some embodiments of the memory 115 are implemented as a dynamic random access memory (DRAM). However, the memory 115 can also be implemented using other types of memory including static random access memory (SRAM), non-volatile RAM, and the like. In the illustrated embodiment, the GPU 105 communicates with the memory 115 over a bus 120. However, some embodiments of the GPU 105 communicate with the memory 115 over a direct connection or via other buses, bridges, switches, routers, and the like. The GPU 105 can execute instructions stored in the memory 115, and the GPU 105 can store information in the memory 115 such as the results of the executed instructions. For example, the memory 115 can store a copy 125 of instructions from program code that is to be executed by the processor cores 111-114 in the GPU 105.

The processing system 100 includes a central processing unit (CPU) 130 for executing instructions. Some embodiments of the CPU 130 include multiple processor cores 131, 132, 133, 134 (collectively referred to herein as "the processor cores 131-134") that can independently execute instructions concurrently or in parallel. The CPU 130 is also connected to the bus 120 and can therefore communicate with the GPU 105 and the memory 115 via the bus 120. The CPU 130 can execute instructions such as program code 135 stored in the memory 115, and the CPU 130 can store information in the memory 115 such as the results of the executed instructions. The CPU 130 is also able to initiate graphics processing by issuing draw calls to the GPU 105. A draw call is a command that is generated by the CPU 130 and transmitted to the GPU 105 to instruct the GPU 105 to render an object (or a portion of an object) in a frame. Some embodiments of a draw call include information defining textures, states, shaders, rendering objects, buffers, and the like that are used by the GPU 105 to render the object or portion thereof. The information included in a draw call can be referred to as a state vector that includes state information. The GPU 105 renders the object to produce values of pixels that are provided to the display 110, which uses the pixel values to display an image that represents the rendered object.

An input/output (I/O) engine 140 handles input or output operations associated with the display 110, as well as other elements of the processing system 100 (e.g., keyboard, mouse, printer, external disk, and the like). The I/O engine 140 is coupled to the bus 120 so that the I/O engine 140 is able to communicate with the GPU 105, the memory 115, or the CPU 130. In the illustrated embodiment, the I/O engine 140 is configured to read information stored on an external storage medium 145, such as a compact disc (CD), a digital versatile disc (DVD), a networked server, and the like. The external storage medium 145 stores information that represents program code used to implement applications such as video games. The program code on the external storage medium 145 can be written to the memory 115 to form the copy 125 of instructions that are to be executed by the GPU 105 or the program code 135 that is to be executed by the CPU 130.

The processor cores 111-114 in the multithreaded GPU 105 share resources that are used to support execution of waves in the GPU 105. Some embodiments of the GPU 105 implement a set of vector general-purpose registers (VGPRs, not shown in FIG. 1 in the interest of clarity) that store state information for the waves executing on the processor cores 111-114.
The VGPRs are shared between the waves that are executing concurrently on the processor cores 111-114. For example, each wave is allocated a subset of the VGPRs to store state information for that wave. The waves also share other resources of the GPU 105, including a local data share that is partitioned between the concurrently executing waves, memory bandwidth that is shared by the waves for accessing local caches, and the like. The processor cores 131-134 in the multithreaded CPU 130 also share resources. Wave derivation and dynamic allocation of shared resources, as discussed below in the context of the GPU 105, are also implemented in some embodiments of the multithreaded CPU 130.

Different waves consume different amounts of resources while executing on the processor cores 111-114. The waves therefore have different resource footprints. Furthermore, the resources consumed by a wave typically change during execution of the wave. For example, the number of VGPRs that are needed to store state information for a wave changes as the wave executes. The amount of intermediate results produced by a wave typically increases after the wave begins executing, peaks during execution of the wave, and then decreases as the wave completes execution. The number of VGPRs that are needed to store the intermediate results (and other state information) therefore increases, peaks, and then decreases in accordance with the amount of information that needs to be stored. Similar patterns occur in the consumption of other resources including the local data share and the memory bandwidth.

The GPU 105 derives waves for execution on the processor cores 111-114 based on dynamic estimates of the footprints of the waves within the shared resources of the GPU 105. The footprints of the waves within a shared resource are measured concurrently with execution of the waves on the processor cores 111-114. The measured footprints of the waves are averaged over a time interval, e.g., using an exponentially weighted moving average of the measured footprints. A number of waves are derived and dispatched for execution by the processor cores 111-114 based on the average footprint. For example, the number of derived waves can be set equal to the available shared resources divided by the average footprint. The available shared resources are equal to the total shared resources minus the shared resources that are allocated to the waves currently executing on the processor cores 111-114.

In some cases, the GPU 105 derives the waves at a rate that is determined based on the average value of the footprints of the waves within the shared resource. The GPU 105 modifies the rate of deriving the waves in response to changes in the average value of the footprints of the waves within the resource. The GPU 105 can determine the number of derived waves (or the rate of deriving the waves) based on the average footprint within a single resource that is considered the bottleneck in the processing system 100, or the GPU 105 can determine the number of waves based on a combination of the average footprints within multiple shared resources.

FIG. 2 depicts a graphics pipeline 200 that is capable of processing high-order geometry primitives to generate rasterized images of three-dimensional (3-D) scenes, according to some embodiments. The graphics pipeline 200 is implemented in some embodiments of the GPU 105 shown in FIG. 1. For example, the graphics pipeline 200 can be implemented using the processor cores 111-114 in the multithreaded GPU 105 shown in FIG. 1.
The graphics pipeline 200 includes an input assembler 202 that is configured to access information from the storage resources 201 that is used to define objects that represent portions of a model of a scene. A vertex shader 203, which can be implemented in software, logically receives a single vertex of a primitive as input and outputs a single vertex. Some embodiments of shaders such as the vertex shader 203 implement massive single instruction multiple data (SIMD) processing so that multiple vertices can be processed concurrently, e.g., by the processor cores 111-114 shown in FIG. 1. The graphics pipeline 200 shown in FIG. 2 implements a unified shader model so that all of the shaders included in the graphics pipeline 200 have the same execution platform on the shared massive SIMD compute units. The shaders, including the vertex shader 203, are therefore implemented using a common set of resources that is referred to herein as the unified shader pool 204. Some embodiments of the unified shader pool 204 are implemented using the processor cores 111-114 in the GPU 105 shown in FIG. 1.

A hull shader 205 operates on input high-order patches or control points that are used to define the input patches. The hull shader 205 outputs tessellation factors and other patch data. Primitives generated by the hull shader 205 can optionally be provided to a tessellator 206. The tessellator 206 receives objects (such as patches) from the hull shader 205 and generates information identifying primitives corresponding to the input objects, e.g., by tessellating the input objects based on the tessellation factors provided to the tessellator 206 by the hull shader 205. Tessellation subdivides input high-order primitives (such as patches) into a set of lower-order output primitives that represent finer levels of detail, e.g., as indicated by the tessellation factors that specify the granularity of the primitives produced by the tessellation process. A model of a scene can therefore be represented by a smaller number of high-order primitives (to save memory or bandwidth), and additional levels of detail can be added by tessellating the high-order primitives.

A domain shader 207 inputs a domain location and (optionally) other patch data. The domain shader 207 operates on the provided information and generates a single vertex for output based on the input domain location and other information. A geometry shader 208 receives input primitives and outputs up to four primitives that are generated by the geometry shader 208 based on the input primitives. One stream of primitives is provided to a rasterizer 209, and up to four streams of primitives can be merged into buffers in the storage resources 201. The rasterizer 209 performs shading operations and other operations such as clipping, perspective division, scissoring, and viewport selection. A pixel shader 210 inputs a pixel stream and outputs zero or another pixel stream in response to the input pixel stream. An output merger block 211 performs blend, depth, stencil, or other operations on the pixels received from the pixel shader 210.

The stages of the graphics pipeline 200 are able to use the processing resources in the unified shader pool 204 to access storage resources 215 that are shared by the waves being executed in the different stages. Portions of the storage resources 215 are implemented on-chip as part of the GPU 105 shown in FIG. 1 or off-chip using some embodiments of the memory 115 shown in FIG. 1. The storage resources 215 include an LDS 220 that is used for read/write communication and synchronization within a workgroup of multiple waves.
The storage resources 215 also include VGPRs 225 that store state information defining the current state of the waves, such as intermediate results of operations that have been performed by the waves. The storage resources 215 further include a cache hierarchy 230 that is used to cache information such as vertex data, texture data, and other data that is frequently used by one or more of the stages of the graphics pipeline 200. The storage resources 215 can also include other registers, buffers, memories, or caches. The shared resources of the graphics pipeline 200 also include bandwidth in the memory fabric that is used to support communication between the stages of the graphics pipeline 200 and the storage resources 215.

The waves that are executing in the graphics pipeline 200 have different footprints in the storage resources 215 and the other shared resources of the graphics pipeline 200. For example, a wave that is used to shade a highly detailed foreground portion of an image has a larger footprint in the shared resources than a wave that is used to shade a less detailed background portion of the image. The footprint of a wave also changes as the wave progresses along the graphics pipeline 200. For example, the footprint of a wave in a shared resource may begin at a first, relatively small value while the wave executes in the vertex shader 203, and the footprint of the wave may then grow as the wave generates additional intermediate results in the subsequent stages of the graphics pipeline 200. Accordingly, wave derivation control logic is configured to derive waves for execution in the graphics pipeline 200 based on measurements of the footprints of the waves.

FIG. 3 is a block diagram of a portion 300 of a processing system that supports wave creation based on dynamic allocation of shared resources, according to some embodiments. The portion 300 is used to implement some embodiments of the processing system 100 shown in FIG. 1. For example, the portion 300 includes a multithreaded processing unit 305 that is used to implement some embodiments of the GPU 105 or the CPU 130 shown in FIG. 1. The processing unit 305 includes a plurality of processor cores 310, 311, 312, 313, which are collectively referred to herein as "the processor cores 310-313." The processor cores 310-313 share a set of resources 315 that includes an LDS 320, VGPRs 325, and a cache 330. The processor cores 310-313 also share the memory bandwidth of a connection 335 between the processing unit 305 and the shared resources 315.

The waves executing on the processor cores 310-313 have different footprints within the shared resources 315. For example, a first wave has a footprint 340 in the LDS 320, a footprint 341 in the VGPRs 325, and a footprint 342 in the cache 330. The first wave also has a footprint 343 in the memory bandwidth available on the connection 335. The footprints 340-343 of the waves in the shared resources are measured concurrently with execution of the waves on the processor cores 310-313. For example, the footprint 340 can be measured as a number of bytes that are allocated to the first wave at a particular time, the footprint 341 can be measured as a number of registers that are allocated to the first wave at a particular time, and the footprint 342 can be measured as a number of cache entries that are allocated to the first wave at a particular time. The footprint 343 in the connection 335 can be measured or estimated based on a number of cache fetches or misses associated with the first wave at a particular time. Other measures of the footprints 340-343 (or of footprints in other shared resources) can also be used.

Multiple measurements of the footprints 340-343 are performed over time for each of the waves.
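One way to picture such a measurement is as a small record sampled repeatedly for each wave. The sketch below is an assumption made for illustration; the field choices mirror the metrics suggested above (LDS bytes, VGPR count, cache entries, and a fetch count as a bandwidth proxy) but are not the embodiments' actual bookkeeping.

```cpp
#include <cstdint>
#include <vector>

// One footprint measurement of a wave at a point in time.
struct FootprintSample {
    std::uint64_t cycle;          // when the sample was taken
    std::uint32_t lds_bytes;      // footprint 340: bytes allocated in the LDS
    std::uint32_t vgprs;          // footprint 341: registers allocated
    std::uint32_t cache_entries;  // footprint 342: cache entries in use
    std::uint32_t fetches;        // footprint 343: proxy for bandwidth use
};

// Each wave accumulates a time series of samples while it executes.
using WaveFootprintHistory = std::vector<FootprintSample>;
```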
The measurements can be performed when a wave is created, whenever resources are allocated to a wave, and whenever resources are deallocated from a wave. The footprints 340-343 of the waves can also be measured at time intervals corresponding to a predetermined number of execution cycles. For example, the footprints 340-343 can be measured every execution cycle, every N execution cycles, or after other subsets of the execution cycles. Each wave is therefore associated with a set of measurements that indicates how the footprints 340-343 of the wave within the shared resources change over time.

A wave derivation controller 345 creates new waves and dispatches them to the processor cores 310-313 based on average values of the measured footprints 340-343. Some embodiments of the wave derivation controller 345 receive information 350 indicating instructions or operations that are to be performed in the waves and information 355 indicating the measurements of the footprints 340-343. The wave derivation controller 345 generates averages of the measured footprints 340-343 over a particular time interval. Some embodiments of the wave derivation controller 345 generate the averages over a tail time interval relative to a reference time, e.g., as an exponentially weighted moving average. Different averages can be generated for different subsets of the waves. For example, averages of the footprints 340-343 can be generated for a subset of the waves that include SIMD operations, a subset of the waves that have completed execution, a subset of the waves that corresponds to a type of shader that is executing the waves, and the like.

The wave derivation controller 345 dispatches a number of waves that is determined based on the averages of the footprints 340-343 (or dispatches waves at a rate that is determined on this basis). For example, the number or rate of waves that are derived subsequent to the reference time used to determine the tail time interval is determined based on the average. The wave derivation controller 345 is also configured to modify the number of derived waves (or the rate of deriving waves) in response to a change in the moving average relative to a previous moving average in a previous tail time interval. For example, waves can be derived at a higher rate in response to a decrease in the moving average, which indicates that more of the shared resources 315 are available for allocation to other waves. As another example, waves can be derived at a lower rate in response to an increase in the moving average, which indicates that less of the shared resources are available for allocation to other waves.

Some embodiments of the wave derivation controller 345 determine different numbers of derived waves (or different rates of deriving waves) for different subsets of the waves based on the average footprints that are calculated for the subsets. For example, the wave derivation controller 345 can derive different numbers of waves (or derive waves at different rates) based on an average value for a subset of the waves that include SIMD operations, an average value for a subset of the waves that have completed execution, average values for subsets of the waves that correspond to the different types of shaders that are executing the waves, and the like.
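The averaging and dispatch policy attributed to the wave derivation controller 345 above can be sketched briefly. The following is a minimal illustration rather than the embodiments' implementation: the smoothing factor alpha, the helper names, and the choice of integer units for the resource are all assumptions made for the example.

```cpp
#include <cstdint>

// Exponentially weighted moving average of footprint measurements.
struct FootprintAverager {
    double alpha = 0.125;  // weight of the newest sample (assumed)
    double average = 0.0;  // current moving average
    bool   seeded  = false;

    // Fold in one footprint measurement (taken at creation, allocation,
    // deallocation, or every N execution cycles, as described above).
    void sample(double footprint) {
        average = seeded ? alpha * footprint + (1.0 - alpha) * average
                         : footprint;
        seeded = true;
    }
};

// Number of additional waves to derive: available shared resource divided by
// the average footprint, where available = total - currently allocated.
std::uint32_t waves_to_derive(std::uint64_t total, std::uint64_t allocated,
                              const FootprintAverager& a) {
    if (!a.seeded || a.average <= 0.0 || allocated >= total) return 0;
    return static_cast<std::uint32_t>((total - allocated) / a.average);
}
```

Because the moving average appears in the denominator, a decrease in the average footprint raises the number of waves that can be derived and an increase lowers it, matching the rate adjustments described above.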
FIG. 4 is a graph 400 of a measured footprint 405 of a wave in a shared resource over time, according to some embodiments. The measured footprint 405 is shown as a solid line in FIG. 4. However, in some embodiments the measured footprint 405 is formed of multiple discrete measurements that are performed at particular time intervals, such as time intervals corresponding to a predetermined number of execution cycles. The measured footprint 405 begins at a relatively low value and then rises and falls as execution of the wave progresses. A moving average 410 of the measured footprint 405 is calculated using the measurements that are performed during a tail time interval 415 relative to a reference time 420. For example, the moving average 410 can be an exponentially weighted moving average that is calculated using the measurements performed during the tail time interval 415. The reference time 420 can correspond to a current time, or the reference time 420 can be chosen to occur at predetermined time intervals.

FIG. 5 includes a graph 500 of an average footprint 505 of waves within a shared resource and a graph 510 of a number 515 of in-flight waves in a multithreaded processing unit, according to some embodiments. The average footprint 505 is determined based on measurements of the footprints of the waves that are performed while the waves are executing on processor cores in the multithreaded processing unit, as discussed herein. A controller then determines the number 515 of in-flight waves based on the average footprint 505. As used herein, the term "in-flight wave" refers to a wave that has been forked and dispatched for execution on a processor core in the multithreaded processing unit but has not yet been retired. The number 515 of in-flight waves is therefore determined by the number of waves that the controller derives or the rate at which the controller derives new waves.

The graph 500 shows a minimum average footprint 520, which is initially assumed to be the footprint that each of the waves occupies within the shared resource. The graph 500 also shows a maximum average footprint 525, which represents the maximum amount of the shared resource that will be allocated to an individual wave. The controller derives waves based on the current value of the average footprint. The graph 510 shows that the number 515 of in-flight waves is initially relatively high because the average footprint that is used to determine the number of derived waves (or the rate of deriving the waves) is equal to the minimum average footprint 520. The number 515 of in-flight waves decreases in response to the increasing average footprint 505 until the average footprint 505 reaches the maximum value 525. The number 515 of in-flight waves then increases in response to the decreasing average footprint 505.

FIG. 6 is a flow diagram of a method 600 for controlling wave creation based on an average value of the footprints of waves executing in a multithreaded processing unit, according to some embodiments. The method 600 is implemented in some embodiments of the processing system 100 shown in FIG. 1 and the portion 300 of a processing system shown in FIG. 3. Although the method 600 shown in FIG. 6 determines the average footprint of the waves executing in the multithreaded processing unit within a single shared resource, some embodiments of the method 600 determine average footprints for multiple shared resources, for different subsets of the waves, and the like.

At block 605, a controller derives waves for execution in the multithreaded processing unit based on an initial footprint. In some embodiments, the initial footprint is set equal to the minimum average footprint. The controller continues to derive a number of waves that is determined based on the initial footprint (or at a rate determined on this basis).

At block 610, a moving average of the footprints of the waves executing in the multithreaded processing unit is determined.
The footprints of the individual waves in the shared resource are measured, and the measured footprints are then used to calculate the moving average. For example, the moving average can be determined using measurements of the footprints that are performed in a tail time interval relative to a reference time.

At decision block 615, the controller determines whether the average footprint has increased. If not, the method 600 flows to decision block 620. If the average footprint has increased, the method 600 flows to decision block 625, and the controller determines whether the average footprint is equal to a maximum footprint. If the average footprint is equal to the maximum footprint, the method 600 flows back to block 610 and the controller continues to calculate the moving average based on newly acquired measurements of the footprints. The controller therefore continues to derive the number of waves determined based on the maximum footprint (or at a rate determined on this basis). If the average footprint is not equal to the maximum footprint, the method 600 flows to block 630. At block 630, the controller decreases the number of in-flight waves in response to the increase in the average footprint, e.g., by decreasing the number of derived waves or by decreasing the rate of deriving the waves.

At decision block 620, the controller determines whether the average footprint has decreased. If not, the method 600 flows to block 610. The controller therefore continues to derive a number of waves that is determined based on the previous (and unchanged) average footprint (or at a rate determined on this basis). If the average footprint has decreased, the method 600 flows to decision block 635, and the controller determines whether the average footprint is equal to a minimum footprint. If the average footprint is equal to the minimum footprint, the method 600 flows back to block 610 and the controller continues to calculate the moving average based on newly acquired measurements of the footprints. The controller therefore continues to derive the number of waves determined based on the minimum footprint (or at a rate determined on this basis). If the average footprint is not equal to the minimum footprint, the method 600 flows to block 640. At block 640, the controller increases the number of in-flight waves in response to the decrease in the average footprint, e.g., by increasing the number of derived waves or by increasing the rate of deriving the waves.

FIG. 7 is a graph 700 of measured footprints 705, 710 of two different waves in a shared resource over time, according to some embodiments. The measured footprints 705, 710 are shown as solid lines in FIG. 7. However, in some embodiments the measured footprints 705, 710 are formed of multiple discrete measurements that are performed at particular time intervals, such as time intervals corresponding to a predetermined number of execution cycles. The footprints 705, 710 are measured while the corresponding waves are executing on a processor core. The measured footprints 705, 710 are concurrent and overlapping in time. However, other measurements are not necessarily concurrent or overlapping in time. For example, in some cases the footprints 705, 710 are measured at different times or while the waves are executing on different processors.

The waves have different characteristics that cause the waves to follow different code paths (e.g., different execution paths within a shader of a GPU).
For example, if the waves execute on a pixel shader that is configured to shade two types of material within an image on a screen, the pixel shader can operate in different ways when shading pixels corresponding to objects of the first material type or of the second material type, which causes the waves that are used to shade the different pixels to follow different code paths through the pixel shader. Although a characteristic related to shading different types of material is used in this discussion in the interest of illustration, other characteristics of waves that result in different maximum footprints can also be used to distinguish between different types of waves.

The waves executing along the different code paths reach different maximum footprints in the shared resource. In the illustrated embodiment, a first wave that executes along a first code path reaches a first maximum footprint 715 and a second wave that executes along a second code path reaches a second maximum footprint 720, the second maximum footprint 720 being smaller than the first maximum footprint 715. The maximum footprints 715, 720 are determined by monitoring the footprints while the waves are executing on the processor cores. An average footprint of the waves that execute along the first code path (such as the first wave) is calculated by averaging the maximum footprints of these waves (such as the first maximum footprint 715). An average footprint of the waves that execute along the second code path (such as the second wave) is calculated by averaging the maximum footprints of these waves (such as the second maximum footprint 720).

In some embodiments, the average maximum footprints of the different types of waves are used to determine different numbers (or rates) of waves to derive depending on the type of the waves being derived. For example, if the pixel shader is shading the first type of material, the average maximum footprint of the waves that execute along the corresponding first code path through the pixel shader is used to determine the number (or rate) of derived waves. As another example, if the pixel shader is shading the second type of material, the average maximum footprint of the waves that execute along the corresponding second code path through the pixel shader is used to determine the number (or rate) of derived waves. In the case discussed above, the average maximum footprint of waves of the first type is larger than the average maximum footprint of waves of the second type. Waves of the first type are therefore derived in smaller numbers (or at a lower rate) than waves of the second type.

In some embodiments, the apparatus and techniques described above are implemented in a system comprising one or more integrated circuit (IC) devices (also referred to as integrated circuit packages or microchips), such as the computing system described above with reference to FIGS. 1-6. Electronic design automation (EDA) and computer aided design (CAD) software tools may be used in the design and fabrication of these IC devices. These design tools typically are represented as one or more software programs. The one or more software programs comprise code executable by a computer system to manipulate the computer system to operate on code representative of circuitry of one or more IC devices so as to perform at least a portion of a process to design or adapt a manufacturing system to fabricate the circuitry. This code can include instructions, data, or a combination of instructions and data.
In some implementations, the apparatus and techniques described above are implemented in a computing system (such as the systems described above with reference to FIGS. 1-6) that includes one or more integrated circuit (IC) devices (also referred to as integrated circuit packages or microchips). Electronic design automation (EDA) and computer-aided design (CAD) software tools may be used to design and fabricate these IC devices. These design tools are typically represented as one or more software programs. The one or more software programs include code executable by a computer system to manipulate the computer system to operate on code representing circuitry of one or more IC devices, in order to perform at least part of a process to design, or to adapt a manufacturing system to fabricate, the circuitry. This code may include instructions, data, or a combination of instructions and data. Software instructions representing the design tool or authoring tool are typically stored in a computer-readable storage medium accessible by the computing system. Likewise, code representing one or more stages of designing or fabricating an IC device may be stored in, and accessed from, the same computer-readable storage medium or a different computer-readable storage medium. A computer-readable storage medium may include any non-transitory storage medium, or combination of non-transitory storage media, that can be accessed by a computer system during use to provide instructions and/or data to the computer system. Such storage media may include, but are not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-ray disc), magnetic media (e.g., floppy disk, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or flash memory), or microelectromechanical systems (MEMS)-based storage media. A computer-readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based flash memory), or coupled to the computer system through a wired or wireless network (e.g., network-accessible storage (NAS)). In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer-readable storage medium. The software may include instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer-readable storage medium may include, for example, magnetic or optical disk storage, solid-state storage such as flash memory, cache memory, random access memory (RAM), or one or more other non-volatile memory devices, and the like. The executable instructions stored on the non-transitory computer-readable storage medium may be source code, assembly language code, object code, or another instruction format that is interpreted or otherwise executable by one or more processors. It should be noted that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or further elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Additionally, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. Benefits, other advantages, and solutions to problems have been described above with respect to specific embodiments.
However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the following claims.
Systems and methods that provide one-shot remote direct memory access (RDMA) are provided. In one embodiment, a system that transfers data over an RDMA network may include, for example, a host (10). The host (10) may include, for example, a driver (30) and a network interface card (NIC) (40), the driver (30) being coupled to the NIC (40). The driver (30) and the NIC (40) may perform a one-shot initiation process and/or a one-shot completion process of an RDMA operation.
CLAIMS WHAT IS CLAIMED IS: 1. A system for transferring data over a remote direct memory access (RDMA) network, comprising: a host comprising a driver and a network interface card (NIC), the driver being coupled to the NIC, wherein the driver and the NIC perform a one-shot initiation process of an RDMA operation. 2. The system according to claim 1, wherein the driver posts a single command message to perform the one-shot initiation process. 3. The system according to claim 2, wherein the single command message comprises a command to describe pinned-down memory buffers of the host. 4. The system according to claim 3, wherein the single command message further comprises a command to bind a portion of the pinned-down memory buffers of the host to a steering tag (STag). 5. The system according to claim 4, wherein the single command message further comprises a command to write a send command. 6. The system according to claim 4, wherein the NIC places the STag value in an optional field in a direct data placement (DDP) or RDMA header. 7. The system according to claim 6, wherein the NIC encodes a value into a field in the DDP or RDMA header indicating that the STag value in the optional field is valid. 8. The system according to claim 6, wherein the NIC sets one or more bits in a field in the DDP or RDMA header indicating that the STag value in the optional field is valid. 9. The system according to claim 6, wherein the NIC sets one or more bits or encodes a value into a second field in the DDP or RDMA header to advertise the portion of the pinned memory buffers of the host associated with the STag. 10. The system according to claim 2, wherein the single command message provides a description of a section of memory. 11. The system according to claim 2, wherein the single command message is posted to a command ring of the host. 12. The system according to claim 11, wherein the driver allocates an STag value. 13. The system according to claim 12, wherein the STag value is returned synchronously from a command call. 14. The system according to claim 12, wherein the STag value is saved in a driver command table of the host. 15. The system according to claim 14, wherein the STag value saved in a driver command table is associated with an application reference number. 16. The system according to claim 1, wherein the NIC comprises an RDMA-enabled NIC. 17. A system for transferring data over a remote direct memory access (RDMA) network, comprising: a host comprising a driver and a network interface card (NIC), the driver being coupled to the NIC, wherein the driver and the NIC perform a one-shot completion process of an RDMA operation. 18. The system according to claim 17, wherein the NIC receives a message comprising an optional field carrying a STag value, the STag value being associated with pinned memory in a remote host. 19. The system according to claim 18, wherein a header of the message indicates the validity of the optional field with a bit flag or specified value in an encoded field. 20. The system according to claim 18, wherein the NIC de-associates the STag value with the pinned memory in the host, thereby preventing further access to the pinned memory using the de-associated STag value. 21. The system according to claim 18, wherein the NIC delivers the message to the driver, and wherein the driver compares the STag value received with a STag value previously sent. 22.
The system according to claim 18, wherein the NIC de-associates the STag value with previously associated SGL information. 23. The system according to claim 20, wherein the NIC frees any resources dedicated to information regarding the pinned memory. 24. A method for transferring data over an RDMA network, comprising: initiating an RDMA write operation using a one-shot initiation process between a driver and a NIC; inserting an STag value in a first field of a DDP or RDMA header of an RDMA send message; and validating the STag value in the first field with a bit flag or other specified value in a second field of the DDP or RDMA header. 25. A method for transferring data over an RDMA network, comprising: completing an RDMA write operation using a one-shot completion process between a NIC and a driver of a host; receiving a completion message; identifying a STag value in a first field of a header of the completion message; and validating the STag value in the first field of the header by identifying a bit flag or other specified value in a second field of the header.
ONE-SHOT RDMA RELATED APPLICATIONS This application makes reference to, claims priority to and claims benefit from United States Provisional Patent Application Serial No. 60/404,709, entitled "Optimizing RDMA for Storage Applications" and filed on August 19, 2002. INCORPORATION BY REFERENCE The above-referenced United States patent application is hereby incorporated herein by reference in its entirety. BACKGROUND OF THE INVENTION Some network technologies (e.g., 1Gb Ethernet, TCP, etc.) may provide the ability to move data between different memories in different systems. When network speeds began increasing beyond approximately 100Mbps, network interface cards (NICs) were adapted to provide direct memory access (DMA) techniques to limit system overhead for locally accessing the data over the network. Virtual memory operating systems (e.g., Windows and Unix) provide for addressing memory in addition to the physical system memory. A unit of information can, for example, either be present in the physical memory (i.e., "pinned down") or may be swapped out to disk. A DMA device typically accesses only physical memory, and therefore, the operating system should guarantee that the unit of information to be moved over the network is "pinned down" in physical memory before the NIC can DMA the information. That is, a particular block of memory may be configured such that the block of memory cannot be moved or swapped to disk storage. FIG. 1 shows a block representation of a conventional system in which data is copied from a pinned buffer in a first host to a pinned buffer in a second host. The first host 10 includes a pinned buffer 20, a driver 30 and a NIC 40. The pinned buffer 20 and the driver 30 are each coupled to the NIC 40. The second host 50 includes a pinned buffer 60, a driver 70 and a NIC 80. The pinned buffer 60 and the driver 70 are each coupled to the NIC 80. The NIC 40 is coupled to the NIC 80 via a network 90. The driver in this example may take many forms, such as, for example, a stand-alone driver or a driver that is part of a more comprehensive software package. In operation, the driver 30 or other software in the host 10 writes a descriptor for a location of the pinned buffer 20 to the NIC 40. The driver 70 or other software in the host 50 writes a descriptor for a location of the pinned buffer 60 to the NIC 80. The driver 30 works with the operating system and other software and hardware in the system to guarantee that the buffers 20 are locked into physical host memory (i.e., "pinned"). The NIC 40 reads data from the pinned buffer 20 and sends the read data on the network 90. The network 90 passes the data to the NIC 80 of the host 50. The NIC 80 writes data to the pinned buffer 60. Conventionally, different and incompatible upper layer protocol (ULP) applications may be used to perform a particular data transfer. For example, a storage application defined according to a storage protocol such as Internet Small Computer System Interface (iSCSI) may provide a particular data transfer using an iSCSI network. In another example, a database application defined according to a remote direct memory access protocol (RDMAP) may provide a particular data transfer using an RDMA network. However, iSCSI was developed and optimized for general storage such as in storage networks. In contrast, RDMA was developed and optimized for different purposes such as, for example, interprocess communications (IPC) applications.
Unfortunately, conventional systems have been unable to efficiently combine some of the advantageous features of iSCSI and RDMA into a single ULP application using a single network. For example, conventional iSCSI systems have proven to be inflexible when applied to non-storage applications, and conventional RDMA systems have not been developed to efficiently provide data storage as already provided in conventional iSCSI systems. FIG. 2 shows a flow diagram of a conventional storage network system using iSCSI. In operation, data is written from Host 1 to Host 2. This operation may be similar, but is not limited to, the functionality exhibited by disk Host Bus Adapter (HBA) devices. In path 100, a driver on Host 1 writes a command to a command queue (e.g., a ring) that requests that the contents of a set of pre-pinned buffers be written to a specific disk location in Host 2. In path 110, NIC 1 reads the command from the queue and processes it. NIC 1 builds a mapping table for the pinned buffers on Host 1. The mapping table is given a handle, for example, "Command X." In path 120, NIC 1 sends a write command to Host 2 that requests that data be pulled from "Command X" of Host 1 into a location on the disk in Host 2. The write command also requests that Host 2 inform Host 1 when the write command has been completed. In path 130, NIC 2 of Host 2 receives the write command and passes the write command to a driver for processing through a completion queue. In path 140, the driver of Host 2 reads the command and allocates buffers into which data may temporarily be stored. In path 150, the driver writes a command to the NIC command queue that "the allocated buffers be filled with data from 'Command X' of Host 1." It is possible that paths 130-150 can be executed entirely by NIC 2 if the driver pre-posts a pool of buffers into which data may be written. In path 160, NIC 2 processes the pull command. NIC 2 builds a mapping table for the pinned buffers on Host 2 and creates a handle, for example, "Command Y." A command is sent to NIC 1 requesting "fill Command Y of Host 2 with data from 'Command X' of Host 1." The sent command can be broken up into a plurality of commands to throttle data transfer into Host 2. In path 170, as NIC 1 receives each command, NIC 1 uses its mapping table to read data from Host 1. In path 180, NIC 1 formats each piece of read data into packets and sends the packets to Host 2. In path 190, as NIC 2 receives each pull response, NIC 2 determines where to place the data of each pull response using its mapping table and writes the data to Host 2. In path 200, after all the data has been pulled, NIC 2 writes a completion command to the Host 2 driver indicating that it has completed the pull command specified in path 150. In path 210, Host 2 reads the command response and processes the data in the buffers to disk (path 211). In path 220, when the data has been processed and the buffers on Host 1 are no longer needed, Host 2 writes a status command to NIC 2. The command states that the command received in path 140 has been completed and that "Command X" of Host 1 can be released. In path 230, NIC 2 reads the status command. In path 240, the status command is sent to Host 1. In path 250, NIC 1 receives the status command, which indicates that the buffers associated with "Command X" of Host 1 are no longer needed. NIC 1 frees the mapping table associated with "Command X" of Host 1. Once the internal resources have been recovered, the status is written to the completion queue on Host 1. In path 260, the driver of Host 1 reads the completion and is informed that the command requested in path 100 is complete.
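As a rough illustration of the host-side structures behind paths 100-110, the following C sketch gives one possible shape for the posted write command and the handle that names NIC 1's mapping table ("Command X"). Every type, field, and bound here is a hypothetical stand-in; the patent describes the message flow, not a data layout.

```c
#include <stdint.h>

/* Hypothetical descriptor for one segment of a pre-pinned buffer. */
typedef struct {
    uint64_t phys_addr; /* physical address of the pinned segment */
    uint32_t length;    /* segment length in bytes                */
} sgl_entry;

/* Hypothetical command posted to the ring in path 100: write the contents
 * of a set of pre-pinned buffers to a specific disk location on Host 2. */
typedef struct {
    uint32_t  command_id; /* becomes the mapping-table handle, e.g. "Command X" */
    uint64_t  disk_lba;   /* target disk location on Host 2                     */
    uint32_t  sgl_count;
    sgl_entry sgl[16];    /* the pre-pinned buffers on Host 1                   */
} write_command;

/* In path 110, NIC 1 consumes the ring entry and builds its own mapping
 * table keyed by command_id, which the later pull requests reference. */
```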
FIG. 3 shows a flow diagram of a conventional storage system implementation using a remote direct memory access protocol (RDMAP). In this exemplary operation, data is written from Host 1 to Host 2. In path 270, a driver requests the operating system to pin memory and develop a table. The driver may also, in an additional path, request the NIC to register the memory. In path 280, NIC 1 responds with a region identification (RID) for the table in NIC 1. In path 290, the driver requests that a window be bound to a region. In one conventional example, the system may not employ a window concept and thus may not need to perform steps related to the window concept. In path 300, NIC 1 responds with the STag value that corresponds to the bound window/region pair. In path 310, the driver formats a write request packet and places the STag value somewhere within the packet as set forth by a particular storage protocol. In path 320, NIC 1 sends the write message to NIC 2, advertising the buffer available on Host 1. In path 330, NIC 2 receives the send message and posts the message to the driver. A driver on Host 2 then processes the send message and determines that data must be pulled to satisfy the command represented inside the send message. The driver, in path 340, queues an RDMA read command to NIC 2 to pull data from the STag on NIC 1 into the pinned memory on NIC 2. The Host 2 memory is pre-pinned. In path 350, NIC 2 processes the RDMA read command and sends an RDMA read request message to NIC 1. In path 360, NIC 1 receives and processes the RDMA read request message. NIC 1 responds by reading the data from the pinned memory on Host 1 as set forth by the internal pinned memory table. In path 370, RDMA read response data is transmitted to NIC 2. In path 380, NIC 2 writes data to the pinned memory of Host 2 for each RDMA read response it gets. Host 1 receives no indication of the progress of the writing of data into the Host 2 pinned memory. The operations indicated by paths 350-370 may be repeated as many times as Host 2/NIC 2 deem necessary. In path 390, on the last RDMA read response, NIC 2 indicates the RDMA read completion to the driver on Host 2. In path 400, the driver on Host 2 formats and posts a send command that indicates that the command request sent in path 320 has been completed and that the STag value is no longer needed. In path 410, NIC 2 sends the message to NIC 1. In path 420, NIC 1 receives the send message and indicates the send message to the driver on Host 1. NIC 1 is not aware that STag information was passed within the send message. NIC 1 is not adapted to correlate the send message sent in path 410 with the write message sent in path 320. In path 430, the driver or the ULP on Host 1 knows the command is complete and releases the resources in NIC 1. Host 1 issues an unbind command to release the STag value. In path 440, NIC 1 responds that the STag is now free. In path 450, the driver on Host 1, informed that it is done with the region, requests that the resources be freed. In path 460, NIC 1 responds that the last resource has been freed. Combining the functionality of RDMA technologies with iSCSI technologies into a single technology has proven difficult due to a number of incompatibilities.
The iSCSI technologies are inflexible when used for other purposes, because iSCSI technologies are optimized for storage. For example, iSCSI technologies must pin memory for each data transfer. In contrast, RDMA technologies provide that a plurality of data transfers may reuse the pinned memory. The RDMA technologies suffer from a host/NIC interface that does not seamlessly approach the host/NIC interface of iSCSI technologies. For example, the initiation process according to RDMA requires a plurality of passes (e.g., paths 270-310), while the initiation process according to iSCSI is accomplished in a single pass (e.g., paths 100-110). Furthermore, RDMA suffers from a substantial delay during the initiation process (e.g., paths 270-310) and the completion process (e.g., paths 420-460). RDMA technologies (e.g., Infiniband) may differ from iSCSI technologies in other ways. Additional steps may be needed to advertise buffers through a dereferenced STag window. Building a pinned memory table (e.g., STag creation) may be isolated from the line traffic processes (e.g., send processes). When a send message is posted, the NIC may not be aware of the STag value that is passing within the send message. Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of ordinary skill in the art through comparison of such systems with some aspects of the present invention as set forth in the remainder of the present application with reference to the drawings. BRIEF SUMMARY OF THE INVENTION Aspects of the present invention may be found in, for example, systems and methods that provide one-shot remote direct memory access (RDMA). In one embodiment, the present invention may provide a system that transfers data over an RDMA network. The system may include, for example, a host. The host may include, for example, a driver and a network interface card (NIC). The driver may be coupled to the NIC. The driver and the NIC may perform a one-shot initiation process of an RDMA operation. In another embodiment, the present invention may provide a system that transfers data over an RDMA network. The system may include, for example, a host. The host may include, for example, a driver and a network interface card (NIC). The driver may be coupled to the NIC. The driver and the NIC may perform a one-shot completion process of an RDMA operation. In another embodiment, the present invention may provide a method that transfers data over an RDMA network. The method may include, for example, one or more of the following: initiating an RDMA write operation using a one-shot initiation process between a driver and a NIC; inserting a steering tag (STag) value in a first field of a direct data placement (DDP)/RDMA header of an RDMA send message; and validating the STag value in the first field by setting one or more bits (i.e., a "bit flag") in a second field of the DDP/RDMA header or by encoding a particular value in a second field of the DDP/RDMA header. In yet another embodiment, the present invention may provide a method that transfers data over an RDMA network.
The method may include, for example, one or more of the following: completing an RDMA write operation using a one-shot completion process between a NIC and a driver of a host; receiving a completion message; identifying a STag value in a first field of a header of the completion message; and validating the STag value in the first field of the header by identifying one or more set bits (i.e., a "bit flag") in a second field of the header or by identifying a particular encoded value in a second field of the header. These and other features and advantages of the present invention may be appreciated from a review of the following detailed description of the present invention, along with the accompanying figures in which like reference numerals refer to like parts throughout. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 shows a block representation of a conventional system in which data is copied from a pinned buffer in a first host to a pinned buffer in a second host. FIG. 2 shows a flow diagram of a conventional storage network system, for example, for use with Internet Small Computer System Interface (iSCSI). FIG. 3 shows a flow diagram of a conventional storage system over a remote direct memory access (RDMA) network. FIG. 4 shows a flow diagram of an embodiment of an optimized storage write over an RDMA network according to various aspects of the present invention. FIGS. 5A-B show embodiments that encode optional Steering Tag (STag) information according to various aspects of the present invention. DETAILED DESCRIPTION OF THE INVENTION Some aspects of the present invention may be found, for example, in a remote direct memory access protocol (RDMAP) that may provide the foundation upon which upper layer protocol (ULP) applications can use a single protocol (e.g., RDMAP) for data transfers over a single network (e.g., an RDMA network). Thus, for example, a data storage application and a database application may use the same network interface card (NIC) and the same network. Accordingly, a data storage application may seamlessly and efficiently transfer data over an RDMA network via an RDMA adapter having the look and feel of a data storage adapter (e.g., an Internet Small Computer System Interface (iSCSI) adapter). In some embodiments, the present invention may provide systems and methods that provide accelerated pinning operations over an RDMA network. Accelerated pinning operations over an RDMA network protocol may provide, for example, substantial storage performance improvement over conventional RDMAP networks that were otherwise previously handicapped. Two novel RDMAP options may be added, for example, to a typical unsolicited send message. The two novel options may include, for example, a send with a close-and-free specified steering tag (STag) for completion operations and a send with an advertise-STag operation for request operations. These command modes may provide for data information or other information within the storage protocols being known by a network interface card (NIC) at the appropriate times for the advertise case or the completion case. Accordingly, in some embodiments, a simpler command semantic with the driver may be achieved. Either feature may be implemented independently to gain the benefits at the start of the complete command or at the end of the complete command.
In some embodiments, various aspects of the present invention may provide technologies that efficiently communicate the state of pinned buffers between a driver and a NIC with much less driver intervention, thereby substantially speeding up transfers. In some embodiments, the present invention may provide for adding an optional field to communicate an associated STag value to a direct data placement (DDP) header or an RDMA send header (also referred to herein collectively as a (DDP)/RDMA header). Some embodiments may find application with, for example, an RDMAP that may support, for example, anonymous send commands by adding an optional STag field and indications as to whether the send command is of the close-this-STag or advertise-this-STag type. FIG. 4 shows a flow diagram of an embodiment of an optimized storage write over an RDMA network according to various aspects of the present invention. A one-shot initiation process may be provided. In path 500, a driver on Host 1 may post a command (e.g., a single command) that provides scatter/gather list (SGL) information, or some other description of a memory section in Host 1, for the command in a pre-formatted send body. In one embodiment, a single command may specify that some pinned-down memory buffers be incorporated into a region, that a portion of the pinned buffers be bound to an STag value, and that a send command be transmitted to Host 2. In path 510, before the command is placed into the command ring going to NIC 1, the driver may allocate an STag value. The STag value may be returned synchronously from the command call and saved in a per-command table. The per-command table may, for example, include a driver command table holding a reference number (e.g., an iSCSI command sequence number maintained by the driver of Host 1) provided by the driver (or application) to that command along with the STag assigned to it. The STag value may be used later to verify that Host 1 and Host 2 have closed the correct STag value. In path 515, NIC 1 may process the send message. NIC 1 may process the SGL data (or other memory resource data) as needed to make sure that it is available when RDMA read or RDMA write commands are received. NIC 1 also may associate the STag value that is used to reference the SGL in the RDMA commands with the pinned memory. The driver or NIC 1 may also add the reference number into one of the fields of the send message. This allows Host 2 to correlate the STag with some command accounted for by the application (e.g., a storage application running on Host 2). From this point on, and for the purpose of the RDMA transactions, the STag value may replace the command number for handling the complete command on NIC 1. NIC 1 also may prepend the layer 2 (L2), layer 3 (L3), layer 4 (L4) and direct data placement (DDP)/RDMA headers to the pre-formatted send command. The STag value may, for example, be placed in an optional field within a DDP or RDMA header with some bit(s) (i.e., a bit flag) or a field within the header indicating that the STag field is valid. Finally, in path 520, NIC 1 may transmit the send message. Without the location and bit(s) defined by the RDMAP, NIC 1 may have to understand more about the storage application(s) (or other applications, if this is done on behalf of another type of application) running on the hosts to correctly define a command name and place the command within the send message.
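A driver-side C sketch of paths 500-510 helps show the one-shot shape: a single posted command carries the memory description, the STag binding, and the pre-formatted send body, and the STag is returned synchronously. The structures and helpers (allocate_stag, ring_post) are hypothetical stand-ins, not an API defined by this application.

```c
#include <stdint.h>

/* Hypothetical one-shot command: memory description, STag binding, and the
 * pre-formatted send body, all in a single ring entry (path 500). */
typedef struct {
    uint64_t mem_addr;       /* description of the pinned memory section */
    uint32_t mem_len;
    uint32_t stag;           /* bound to a portion of the pinned buffers */
    uint8_t  send_body[128]; /* pre-formatted ULP message, e.g. iSCSI    */
    uint32_t send_len;
} one_shot_cmd;

/* Per-command table entry (path 510): the saved STag is later compared
 * against the STag returned in the close message (path 620). */
typedef struct {
    uint32_t app_ref; /* e.g., an iSCSI command sequence number */
    uint32_t stag;
} cmd_table_entry;

static uint32_t next_stag = 0x100;
static uint32_t allocate_stag(void) { return next_stag++; }  /* stand-in */
static void ring_post(const one_shot_cmd *cmd) { (void)cmd; /* DMA to NIC */ }

/* Post the single command; the STag is returned synchronously from the
 * command call, as described for path 510. */
uint32_t post_one_shot(one_shot_cmd *cmd, cmd_table_entry *entry, uint32_t app_ref)
{
    cmd->stag = allocate_stag();
    entry->app_ref = app_ref;
    entry->stag = cmd->stag;
    ring_post(cmd);
    return cmd->stag;
}
```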
In path 530, NIC 2 may receive the send message and may post the send message as a normal send message to a driver on Host 2. In addition to the send message body, the STag value may be indicated. In path 540, the driver on Host 2 may post an RDMA read command to pull data to Host 2. The data pull command may specify, for example, that data from the STag on Host 1 may be pulled into, for example, pre-pinned buffers on Host 2. The command may use, for example, the STag value received in path 530. In path 550, NIC 2 may send the RDMA read request command to NIC 1. The RDMA read request command may specify that STag data may be sent back in a read response. In path 560, NIC 1 may respond to the read request with a read response that includes, for example, data from the pinned buffers in Host 1 as referenced by the STag value. Each response segment may address the location in the pinned buffers of Host 2 in which the data may be written, for example, via NIC 2. In path 570, NIC 2 may receive the read responses and may write the data as specified in each read response to a respective location in the pre-pinned buffers of Host 2. In path 580, on the last read response, NIC 2 may complete the RDMA read command on Host 2. In path 590, the driver on Host 2 may process the data from the pinned buffers of Host 2 by, for instance, moving the data from the pinned buffers to the disk as in path 581. When the pinned buffers on Host 1 are no longer needed, the driver may post a send command that includes a typical protocol message, the STag value of Host 1 and a set close-and-free bit in the send message. The STag value may be carried in an optional field and some bit(s) or field may be set to indicate that the STag field is valid. The driver may also include the application reference number used in path 510, to improve the robustness of matching to the STag, which may be performed later on Host 1 prior to resource releasing. In path 600, NIC 2 may send the send command as specified. Various aspects of the present invention may provide a one-shot completion. In path 610, NIC 1 may receive the send-with-close message. Before delivering the send-with-close message to the driver on Host 1, NIC 1 may de-associate the STag from the SGL (or other reference to a memory section(s)) provided by the driver on Host 1 and may free any resources dedicated to the SGL. NIC 1 then may complete to the driver on Host 1, indicating the send data and that the freeing of the SGL and the STag is complete. The completion may also include, for example, the STag value. In path 620, the driver on Host 1 may be aware that all the resources in NIC 1 may be free. The driver may check that Host 2 associated the correct STag value with the command by verifying the returned STag value against the one saved as set forth in path 510. If they do not match, then a fatal violation of the protocol may have occurred, and, for example, Host 2 may be exhibiting buggy or erratic behavior. Some embodiments of the present invention contemplate that storage reads may be similarly optimized. In this case, RDMA write operations may replace the RDMA read operations in paths 550 and 560.
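The one-shot completion of paths 610-620 could be sketched as follows: the NIC de-associates the STag and frees the SGL resources before completing to the driver, and the driver then checks the returned STag against the value saved at initiation. As before, every name here is a hypothetical stand-in.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t app_ref;
    uint32_t stag;    /* saved in the per-command table at path 510 */
} cmd_table_entry;

/* Path 610, NIC side (sketch): tear down the STag-to-SGL association and
 * release the SGL resources before completing to the driver. */
void nic_handle_send_with_close(uint32_t stag)
{
    /* deassociate_stag(stag); free_sgl_resources(stag); -- hypothetical */
    (void)stag;
}

/* Path 620, driver side: a mismatch between the returned STag and the
 * saved STag indicates a fatal protocol violation by the remote host. */
bool driver_verify_close(const cmd_table_entry *saved, uint32_t returned_stag)
{
    return saved->stag == returned_stag;
}
```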
FIGS. 5A-B show two embodiments of encoding for optional STag information according to various aspects of the present invention. FIG. 5A illustrates an embodiment of a send format for use in STag communications using RDMA on top of DDP according to various aspects of the present invention. In FIG. 5A, an RDMA message may include, for example, an optional STag field and operation codes (e.g., operation codes 0x8 and 0xC). The operation codes may include, for example, a set of two or more bits from a command word in a protocol. If particular values are specified (e.g., values indicating operation codes 0x8 or 0xC), then the optional STag field may be valid and the STag field may include, for example, an STag value that represents the SGL passed in the first command request from the driver. The STag field may be present in all RDMA headers or may be present in only those RDMA headers in which the validating operation codes are set. In one embodiment, the protocol may define whether the STag field is present in each RDMA header or not. The encoding of the operation codes may be such that one value may be specified for advertising and/or one value may be specified for closing. In one example, an operation code value of eight may represent a send with an event and advertise operation. In another example, an operation code value of twelve may represent a send with an event and close operation. FIG. 5B illustrates an embodiment of a send format for STag communications using a DDP protocol according to various aspects of the present invention. In FIG. 5B, an RDMA message may include, for example, an optional STag field and one or more bits (e.g., a "bit flag") in a command field. In one embodiment, two bits (e.g., bits 12 and 13) of a DDP flag field may be employed. The DDP flag field may include, for example, 2 or more bits. If either of the two bits is set, then the optional STag field may be valid and the optional STag field may include, for example, the STag value that represents the SGL (or other indication of memory) passed in the first command request from the driver. The STag field may be present in all RDMA headers or may be present in only those RDMA headers in which the validating bits are set. In one embodiment, the protocol may define whether the STag field is present in each RDMA header or not. The encoding of the two bits may be such that three states are possible including, for example, no STag, advertise STag and close STag. In one example, bit 12 of the DDP flag field may be set to indicate an advertise STag operation. In another example, bit 13 may be set to indicate a close STag operation.
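The two encodings of FIGS. 5A-B are small enough to capture directly in C. The operation code values (0x8 and 0xC) and the flag bits (12 and 13) come from the examples above; the surrounding names and the assumption of a 16-bit flag field are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* FIG. 5A: particular operation code values validate the optional STag
 * field (eight for advertise, twelve for close). */
enum {
    RDMA_SEND_ADVERTISE_STAG = 0x8, /* send with event and advertise */
    RDMA_SEND_CLOSE_STAG     = 0xC  /* send with event and close     */
};

static bool stag_valid_by_opcode(uint8_t opcode)
{
    return opcode == RDMA_SEND_ADVERTISE_STAG || opcode == RDMA_SEND_CLOSE_STAG;
}

/* FIG. 5B: bits 12 and 13 of the DDP flag field give three states:
 * no STag, advertise STag, and close STag. A 16-bit field is assumed. */
#define DDP_FLAG_ADVERTISE_STAG (1u << 12)
#define DDP_FLAG_CLOSE_STAG     (1u << 13)

static bool stag_valid_by_flags(uint16_t ddp_flags)
{
    return (ddp_flags & (DDP_FLAG_ADVERTISE_STAG | DDP_FLAG_CLOSE_STAG)) != 0;
}
```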
One or more embodiments of the present invention may include one or more of the advantages set forth below. In some embodiments, the present invention may provide for novel command formats being developed between a driver and a NIC to allow one-shot initiation. Since buffer information may be specified with a send message, the NIC may have an opportunity to ensure that the buffer work has been completed before the send command is transmitted. Furthermore, the NIC may insert the STag value (e.g., a handle to SGL information) in a consistent manner. Accordingly, one-touch access for command generation may be provided. In some embodiments, the present invention may provide for one-shot completion. The NIC may order the buffer freeing operations with the send completion so that resources (e.g., all resources) may be safely recovered without multiple calls by the driver. Accordingly, one-touch activity may be provided between the NIC and the driver. In some embodiments, the present invention may provide for existing storage protocols such as, for example, iSCSI to be efficiently mapped to RDMA. Buffer information needed for commands, and completions needed by the NIC for acceleration, may be, for example, in the RDMA headers and may be automatically prepended to the packets. Accordingly, complete iSCSI protocol packets (or any other storage protocol packets) may be sent as payload to RDMA without modification. This may also provide for sending iSCSI control information embedded in RDMA Send messages, for example, while accelerating data transfer with RDMA commands. In some embodiments, the present invention may provide for the construction of a NIC that supports RDMA and storage at optimal speeds. Some embodiments may provide for the construction of a NIC that supports RDMA and storage at optimal speeds without having to carry all the cost burden of implementing two different line protocols, two different command protocols or two different buffer management protocols. A small modification to the RDMA protocol may provide for operation at the same performance as customized storage designs. Thus, substantially lower design costs may result. While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiments disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
An inductor structure (105, 500, 900) implemented within a semiconductor integrated circuit (IC) can include a coil (205, 505, 905) of conductive material that includes a center terminal (140, 510, 910) located at a midpoint of a length of the coil. The coil can be symmetrical with respect to a centerline (225, 535, 935) bisecting the center terminal. The coil can include a first differential terminal (210, 515, 915) and a second differential terminal (215, 520, 920). The inductor structure can include a return line (155, 560, 960) of conductive material positioned on the centerline. The inductor structure can include an isolation ring (220, 525, 945) surrounding the coil. The inductor structure can include a patterned ground shield comprising a plurality of fingers (935, 1035) implemented within an IC process layer located between the coil (905) and a substrate (955) of the IC. The inductor structure can include an isolation wall (1150) comprising a high conductive material formed to encompass the coil and the patterned ground shield. The isolation wall can be coupled to one end of each finger of the patterned ground shield.
CLAIMS What is claimed is: 1. An inductor structure implemented within a semiconductor integrated circuit (IC), the inductor structure comprising: a coil of conductive material that comprises a center terminal located at a midpoint of a length of the coil; wherein the coil is symmetrical with respect to a centerline bisecting the center terminal; wherein the coil comprises a first differential terminal and a second differential terminal each located at an end of the coil; and a return line of conductive material coupled to the coil, wherein the return line is positioned on the centerline. 2. The inductor structure of claim 1, further comprising an isolation ring that surrounds the coil and is separated from the coil by approximately a constant and predetermined distance. 3. The inductor structure of claim 2, wherein the isolation ring comprises a first end and a second end separated by a predetermined distance forming an opening. 4. The inductor structure of claim 2 or claim 3, wherein the isolation ring is coupled, at a midpoint of a length of the isolation ring, to a virtual AC ground when in a circuit in which the inductor structure is implemented. 5. The inductor structure of claim 3 or claim 4, wherein the first end and the second end of the isolation ring are equidistant from the centerline. 6. The inductor structure of any of claims 2-5, wherein the isolation ring is coupled to the return line at a location opposite the center terminal. 7. The inductor structure of any of claims 2-6, wherein: no supply voltage interconnect and no ground interconnect are located within the isolation ring; and no supply voltage interconnect and no ground interconnect cross the centerline within a predetermined distance of the isolation ring. 8. The inductor structure of any of claims 1-7, wherein: the first differential terminal and the second differential terminal are each located at an end of the coil opposite the center terminal; the return line is located in a different conductive layer than the coil; and a length of the return line is approximately equal to a diameter of the coil at the centerline. 9. The inductor structure of any of claims 1-8, further comprising: a patterned ground shield comprising a plurality of fingers implemented within an IC process layer located between the coil of conductive material and a substrate of the IC. 10. The inductor structure of claim 9, wherein: the coil is formed of a plurality of linear segments; for each of the plurality of linear segments of the coil, the plurality of fingers located below that linear segment are substantially parallel and separated by a predetermined distance from one another; and each finger is positioned substantially perpendicular to the linear segment of the coil beneath which the each finger is located. 11. The inductor structure of any of claims 9-10, wherein the isolation ring comprises a low conductivity material and is coupled to one end of each finger. 12. The inductor structure of any of claims 9-11, further comprising: an isolation wall comprising a high conductive material formed to encompass the coil and the patterned ground shield, wherein the isolation wall is coupled to one end of each finger. 13. The inductor structure of claim 12, wherein the isolation wall is coupled to the substrate of the IC. 14.
The inductor structure of claim 12 or claim 13, wherein: the isolation wall comprises a plurality of vertically stacked conductive layers; each pair of adjacent, vertically stacked conductive layers is coupled by a via; a highest conductive layer used to form the isolation wall is implemented using a process layer at least as far from the substrate of the IC as a process layer used to form the coil; and a lowest conductive layer used to form the isolation wall is implemented using a process layer at least as close to the substrate of the IC as a process layer used to form the plurality of fingers. 15. An integrated circuit (IC) comprising the inductor structure of any of claims 1-14.
SYMMETRICAL CENTER TAP INDUCTOR STRUCTURE FIELD OF THE INVENTION One or more embodiments disclosed within this specification relate to integrated circuits (ICs). More particularly, one or more embodiments relate to a center tap inductor structure implemented within an IC. BACKGROUND The frequency of signals associated with integrated circuits (ICs), whether generated within the IC or exchanged with devices external to the IC, has steadily increased over time. As IC signals reach radio frequency (RF) ranges exceeding a gigahertz, it becomes viable to implement inductor structures within ICs. Implementing an inductor structure within an IC, as opposed to using an external inductor device, typically reduces the manufacturing and implementation costs of the system requiring the inductor. IC inductor structures can be implemented within a variety of RF circuits such as, for example, low noise amplifiers (LNAs), voltage controlled oscillators (VCOs), input or output matching structures, power amplifiers, and the like. Many of these RF circuits, such as certain VCO architectures, can be implemented as differential circuits that rely on circuit and/or device symmetry to provide maximum circuit performance. Although IC inductor structures are advantageous in many respects, IC inductor structures introduce various non-idealities not present with external or discrete inductors. For example, an IC inductor structure is typically surrounded by other semiconductor devices that can generate noise. As IC devices reside over a common substrate material that is conductive, signals and noise generated by an IC device can be coupled into an IC inductor structure built over the common substrate material. Although IC inductor structures are typically built within one or more metal interconnect layers that reside farthest from the substrate layer, finite parasitic capacitances exist between the substrate layer and the metal interconnect layer(s). These parasitic capacitances can couple signals between the IC inductor structure and the substrate layer. Further, eddy currents induced within the substrate layer by an IC inductor structure can generate losses that reduce the quality factor, or so called "Q," of the IC inductor structure. Other non-idealities relate to the ability of interconnect lines routed in the vicinity of the IC inductor structure, particularly large ground and power supply lines, to couple signals both capacitively and inductively to the IC inductor structure. In addition, inductive coupling resulting from neighboring metal lines can alter the inductive value and self resonance of an IC inductor structure. Each of the non-idealities described can interfere with the implementation of an IC inductor structure as a consistent and reproducible element whose parameters are independent of the IC environment within which the IC inductor structure resides. SUMMARY One or more embodiments disclosed within this specification relate to integrated circuits (ICs) and, more particularly, to an inductor structure implemented within an IC. An embodiment disclosed within this specification can include an inductor structure implemented within a semiconductor IC. The inductor structure can include a coil of conductive material that includes a center terminal located at a midpoint of a length of the coil. The coil can be symmetrical with respect to a centerline bisecting the center terminal. The coil can include a first differential terminal and a second differential terminal. 
The inductor structure can include a return line of conductive material coupled to the coil. The return line can be positioned on the centerline. The inductor structure can include an isolation ring. The isolation ring can surround the coil and can be separated from the coil by approximately a constant and predetermined distance. The isolation ring can have a first end and a second end separated by a predetermined distance forming an opening. For example, the first end and the second end of the isolation ring can be equidistant from the centerline. In another aspect, the isolation ring can be coupled to the return line at a location opposite the center terminal. When in a circuit in which the inductor structure is implemented, the isolation ring can be coupled, at a midpoint of a length of the isolation ring, to a virtual AC ground of the circuit. In another aspect, no supply voltage interconnect and no ground interconnect can be located within the isolation ring. Further, no supply voltage interconnect and no ground interconnect can be permitted to cross the centerline within a predetermined distance of the isolation ring. In a further aspect, the first differential terminal and the second differential terminal can each be located at an end of the coil opposite the center terminal. The return line can be located in a different conductive layer than the coil. A length of the return line can be approximately equal to a diameter of the coil at the centerline. Additionally or alternatively, the inductor structure can include a patterned ground shield including a plurality of fingers implemented within an IC process layer located between the coil of conductive material and a substrate of the IC. According to another aspect, the coil can be formed of a plurality of linear segments. For each of the plurality of linear segments of the coil, the plurality of fingers located below that linear segment can be substantially parallel and separated by a predetermined distance from one another. Each finger can be positioned substantially perpendicular to the linear segment of the coil beneath which the each finger is located. In some embodiments, the isolation ring can comprise a low conductivity material and be coupled to one end of each finger. Additionally or alternatively, the inductor structure can include an isolation wall comprising a high conductive material formed to encompass the coil and the patterned ground shield. The isolation wall can be coupled to one end of each finger. The isolation wall can be coupled to the substrate of the IC. For example, the isolation wall can be coupled to a P-type diffusion material disposed within the substrate of the IC, and the P-type diffusion material can couple the isolation wall to the substrate of the IC. In some embodiments, the isolation wall includes a plurality of vertically stacked conductive layers. Each pair of adjacent, vertically stacked conductive layers can be coupled by a via. A highest conductive layer used to form the isolation wall can be implemented using a process layer at least as far from the substrate of the IC as a process layer used to form the coil. A lowest conductive layer used to form the isolation wall can be implemented using a process layer at least as close to the substrate of the IC as a process layer used to form the plurality of fingers. Another embodiment can include an inductor structure implemented within a semiconductor IC. 
The inductor structure can include a coil of conductive material having a center terminal located at a midpoint of a length of the coil. The coil can be symmetrical with respect to a centerline bisecting the center terminal. The coil can include a first differential terminal and a second differential terminal each located at an end of the coil opposite the center terminal. The inductor structure also can include an isolation ring surrounding the coil and separated from the coil by approximately a constant and predetermined distance. The isolation ring can include a first end and a second end separated by a predetermined distance forming an opening in the isolation ring. The inductor structure also can include a return line of conductive material located in a different conductive layer of the IC than the coil. The return line can be positioned on the centerline substantially within the coil. In one aspect, a length of the return line can be approximately equal to a diameter of the coil at the centerline. The first end and the second end of the isolation ring can be equidistant from the centerline. The first end and the second end further can be located closer to the center terminal than either of the differential terminals of the coil. In another aspect, the isolation ring can be coupled to an end of the return line that is opposite the center terminal. The isolation ring further can be coupled, at a midpoint of a length of the isolation ring, to a virtual AC ground when in a circuit in which the inductor structure is implemented. In another aspect, no supply voltage interconnect and no ground interconnect can be located within the isolation ring. Further, no supply voltage interconnect and no ground interconnect can be permitted to cross the centerline within a predetermined distance of the isolation ring. Another embodiment can include an inductor structure implemented within a semiconductor IC. The inductor structure can include a plurality of coils of conductive material including a center terminal located at a midpoint of a length of the plurality of coils. Each of the plurality of coils can be symmetrical with respect to a centerline bisecting the center terminal. The plurality of coils can include a first differential terminal and a second differential terminal each located at an end of the plurality of coils. The inductor structure can include an isolation ring surrounding the plurality of coils and separated from the plurality of coils by approximately a constant and predetermined distance. The isolation ring can include a first end and a second end separated by a predetermined distance forming an opening in the isolation ring. The first end and the second end of the isolation ring can be equidistant from the centerline. The first end and the second end also can be located external to a portion of the plurality of coils opposite the center terminal, the first differential terminal, and the second differential terminal. The center terminal can be located on a same side of the plurality of coils as, and between, the first and second differential terminals. When in a circuit in which the inductor structure is implemented, the isolation ring can be coupled, at a midpoint of a length of the isolation ring, to a virtual AC ground of the circuit. In another aspect, no supply voltage interconnect and no ground interconnect are located within the isolation ring.
Further, no supply voltage interconnect and no ground interconnect can be permitted to cross the centerline within a predetermined distance of the isolation ring. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a circuit diagram illustrating an exemplary circuit implemented with a center tap inductor structure in accordance with an embodiment disclosed within this specification. FIG. 2 is a first block diagram illustrating a topographical view of an inductor structure in accordance with another embodiment disclosed within this specification. FIG. 3 is a second block diagram illustrating a graphical representation of an inductor structure in accordance with another embodiment disclosed within this specification. FIG. 4 is a third block diagram illustrating a side view of an inductor structure in accordance with another embodiment disclosed within this specification. FIG. 5 is a fourth block diagram illustrating a two turn center tap inductor structure in accordance with another embodiment disclosed within this specification. FIG. 6 is a fifth block diagram illustrating a topographical view of an inductor structure in accordance with an embodiment disclosed within this specification. FIGs. 7-1 and 7-2 are sixth and seventh block diagrams each illustrating a side view of an inductor structure in accordance with another embodiment disclosed within this specification. FIG. 8 is a graph illustrating the influence of the conductance of the material used to couple fingers of a patterned ground shield structure on the inductive and lossy characteristics of an IC inductor structure in accordance with another embodiment disclosed within this specification. FIG. 9 is an eighth block diagram illustrating a topographical view of an inductor structure in accordance with another embodiment disclosed within this specification. FIG. 10 is a ninth block diagram illustrating a topographical view of the inductor structure of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 11 is a tenth block diagram illustrating a topographical view of the inductor structure of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 12 is an eleventh block diagram illustrating a topographical view of the inductor structure of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 13 is a twelfth block diagram illustrating a topographical view of the inductor structure of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 14 is a thirteenth block diagram illustrating a topographical view of the inductor structure of FIG. 9 in accordance with another embodiment disclosed within this specification. DETAILED DESCRIPTION While the specification concludes with claims defining features of one or more embodiments that are regarded as novel, it is believed that the one or more embodiments will be better understood from a consideration of the description in conjunction with the drawings. As required, one or more detailed embodiments are disclosed within this specification. It should be appreciated, however, that the one or more embodiments are merely exemplary of the inventive arrangements.
Therefore, specific structural and functional details disclosed within this specification are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the one or more embodiments in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting, but rather to provide an understandable description of the one or more embodiments disclosed herein. One or more embodiments disclosed within this specification relate to integrated circuits (ICs) and, more particularly, to an inductor structure for use within an IC. In accordance with one or more embodiments disclosed herein, a center tap inductor structure can be implemented that includes a return line within the inductor structure. The inductor structure can be implemented with a single turn coil constructed symmetrically about a centerline that bisects the coil. When implemented within a high frequency differential circuit, the center tap of the inductor structure can receive a current used to bias the high frequency differential circuit. The return line can be routed along the centerline of the single turn coil and used as a return path for the bias current to ground. In this manner, the bias current flowing within the circuit is returned to ground along the centerline of the inductor structure. An isolation ring can be configured to surround the single turn coil of the inductor structure. The isolation ring can be implemented with an opening located where the isolation ring intersects the centerline. The opening prevents currents induced within the isolation ring, by the single turn coil coupling to the isolation ring, from circularly flowing within the isolation ring. Routing the return line along the centerline of the inductor structure and breaking the current path in the isolation ring produces an inductor structure with greater differential symmetry. In addition, the parameters of the inductor structure demonstrate less variability when exposed to the effects of inductive and capacitive coupling. An inductor structure can be implemented that includes a patterned ground shield formed of groups of a plurality of parallel, conductive strips. The patterned ground shield of the inductor structure can isolate the electric field generated by current flow through the inductor structure from a substrate underlying the inductor structure. The patterned ground shield can be configured so as to not impede the magnetic field surrounding the coil(s) of the inductor structure. The strips of the patterned ground shield can be coupled together at the outer perimeter of the patterned ground shield. A ring of conductive material can be used to couple the strips together. In an embodiment, the ring of conductive material can be formed to have a specified conductivity. The conductivity can be within one of a plurality of different conductivity ranges. By forming the ring of conductive material with a conductivity within a selected conductivity range, the quality factor, i.e., the "Q," of the inductor structure can be controlled and/or optimized. This ring of conductive material can comprise an isolation ring or an isolation wall, or both, as illustrated in the exemplary embodiments described below. FIG. 1 is a circuit diagram illustrating an exemplary circuit 100 implemented with a center tap inductor in accordance with an embodiment disclosed within this specification.
More particularly, circuit 100 can be a radio frequency (RF) differential circuit including a single turn center tap inductor structure. FIG. 1 is presented to illustrate electrical properties of a physical inductor structure and the non-idealities that are typically associated with an IC center tap inductor structure when implemented within an RF differential circuit such as circuit 100. It should be appreciated, however, that FIG. 1, being a circuit diagram, is not intended to convey or illustrate physical location, e.g., layout, of the various components shown. As used within this specification, a "layout" or "IC layout," can refer to a representation of an IC in terms of planar geometric shapes which correspond to the design masks that pattern the metal layers, the oxide regions, the diffusion areas, or other layers that form devices of the IC. Circuit 100 represents a circuit architecture for a voltage controlled oscillator (VCO) within an IC. As shown, circuit 100 can include an inductor structure 105, a capacitor 110, a P-type metal oxide semiconductor (PMOS) current source 115, and N-type metal oxide semiconductor devices (NMOSs) 120 and 125. Within circuit 100, inductor structure 105 and capacitor 110 are coupled in parallel across nodes 145 and 150 to form an L-C tank circuit. The L-C tank circuit determines an oscillation frequency of the VCO implemented with circuit 100. The oscillation frequency of circuit 100 is determined by the product of the value of inductor structure 105 and the value of capacitor 110 of the L-C tank. Within circuit 100, inductor structure 105 can be implemented as a center tap inductor structure. More particularly, inductor structure 105 can be implemented as a symmetrical single turn center tap inductor structure. As used within this specification, a "center tap" or "center terminal," refers to a coupling point made at a midpoint in a length of windings or coils of an inductor. In addition, inductor structure 105 can be a symmetrical center tap inductor structure, wherein inductor structure 105 is physically symmetrical on either side of a centerline that bisects a center terminal 140. Although a continuous series of windings or coils, a center tap inductor structure can be modeled as two discrete inductor structures of equal value coupled in series. For example, in FIG. 1, inductor structure 105 is represented as two inductor structures coupled in series, denoted as inductors 105a and 105b. By implementing inductor structure 105 as a symmetrical center tap inductor structure coupled at the inductor midpoint, matching between inductors 105a and 105b can be improved. As circuit 100 is a differential circuit, improving the matching between inductors 105a and 105b can improve the differential symmetry and performance of circuit 100. Center terminal 140 is coupled to a drain of PMOS current source 115. A source of PMOS current source 115 is coupled to a voltage source 130 having a voltage potential of VDD. A gate of PMOS current source 115 receives a bias voltage, denoted as Vbias. The voltage potential of Vbias can determine a quantity of bias current, denoted as Ibias, sourced by PMOS current source 115 to center terminal 140. Through center terminal 140, the current Ibias can flow into inductor structure 105. Nodes 145 and 150 form the differential output of circuit 100. As such, the differential output voltage of circuit 100 is equal to the voltage difference between signals Vout+ and Vout-. A drain of NMOS 120 and a gate of NMOS 125 are coupled to node 145.
A drain of NMOS 125 and a gate of NMOS 120 are coupled to node 150. A source of each of NMOSs 120 and 125 is coupled to node 135 and to a negative voltage potential of source 130 that is typically the ground potential of circuit 100. NMOSs 120 and 125, taken together, form a cross-coupled differential pair containing a positive feedback loop. The positive feedback loop has a closed path from the gate of NMOS 120 to the gate of NMOS 125 via the drain of NMOS 120 and back to the gate of NMOS 120 via the drain of NMOS 125. In order to induce oscillation within circuit 100, a current Ibias can be injected into inductor structure 105 at center terminal 140. The current Ibias establishes a predetermined operating point within each of NMOSs 120 and 125. Properly designed to meet a set of oscillation conditions, for example, a gain of greater than one in the positive feedback loop of NMOSs 120 and 125, NMOSs 120 and 125 in conjunction with inductor structure 105 and capacitor 110 can combine to form an oscillator. In one or more embodiments, capacitor 110 can be implemented with a varactor, i.e., a voltage controlled variable capacitor, in order to vary the oscillation frequency of circuit 100 across a predetermined frequency range. As current Ibias flows through inductor structure 105, current Ibias is divided between inductors 105a and 105b. For simplicity of understanding how current Ibias flows between inductors 105a and 105b, current Ibias can be divided into component currents of a common mode current, denoted as ICM, and a differential current, denoted as Idiff. The current ICM can be considered a quantity of common DC current flowing symmetrically within each of inductors 105a and 105b. In illustration, in a balanced condition of circuit 100, i.e., (Vout+) - (Vout-) = zero volts, the current sourced through each of NMOSs 120 and 125 is approximately equal to one half of the current Ibias. Accordingly, the current flowing through each of inductors 105a and 105b is approximately equal to one half of the current Ibias. The current value of one half Ibias can be considered the common mode current sourced through each of NMOSs 120 and 125. As circuit 100 oscillates, the current flowing through NMOS 120 increases as current flowing through NMOS 125 decreases. Then, in succession, the current flowing through NMOS 120 decreases as the current flowing through NMOS 125 increases. Thus, the current flowing through inductor 105a increases as the current flowing through inductor 105b decreases. Then, in succession, the current flowing through inductor 105a decreases as the current flowing through inductor 105b increases. This directional change in the current flow through inductors 105a and 105b can be considered the AC differential current, Idiff, flowing through inductor structure 105. As inductor structure 105 is a center tap, single turn inductor structure and, accordingly, inductors 105a and 105b are physically symmetrical to each other, the current Idiff represents an asymmetric flow of current through inductors 105a and 105b. For example, PMOS current source 115 of circuit 100 can be biased to generate a current Ibias equal to approximately 100 mA. In that case, the current ICM flowing through each of inductors 105a and 105b is equal to approximately 50 mA. At a subsequent time T1, as circuit 100 oscillates, approximately 75 mA can be flowing out of inductor 105a to node 145 and approximately 25 mA can be flowing out of inductor 105b to node 150.
In that case, a current Idiff of approximately 25 mA can be considered to be flowing from node 150 to node 145 through inductor structure 105. Although illustrated in FIG. 1 with an arrow indicating a single direction for current Idiff, current Idiff can flow in either direction through inductor structure 105. The distinction between common mode current and differential current is significant to the performance of inductor structure 105 as the current ICM flows symmetrically on either side of center terminal 140 through inductor structure 105 while the current Idiff flows asymmetrically, in either direction, across inductor structure 105. The current flowing through each of NMOSs 120 and 125 is summed at node 135 and returned to source 130. As circuit 100 is a closed path between the positive voltage potential of source 130 and the negative voltage potential of source 130, the current received at center terminal 140 is equal to the current returned to the negative voltage potential of source 130. Accordingly, the current returned to the negative voltage potential of source 130 is equal to Ibias. Return 155 within circuit 100 of FIG. 1 represents the return pathway for current from the source of each of NMOSs 120 and 125 to the negative voltage potential of source 130. When implemented as a physical circuit within an IC, return 155 represents one or more segments of interconnect material that couple the source of each of NMOSs 120 and 125 to a ground bus implemented within a conductive layer of the IC located some finite distance from the source of each of NMOSs 120 and 125. Depending upon the location and manner of routing the interconnect material that couples the source terminal of each of NMOSs 120 and 125 to source 130, the interconnect material of return 155 can couple to inductor structure 105. The manner of this coupling can be both capacitive and inductive. Asymmetries in routing the interconnect of return 155 to return current Ibias to source 130, relative to inductor structure 105, can result in asymmetric coupling of return 155 to inductor structure 105. In addition, asymmetries in the current flowing within differing segments of the interconnect material of return 155 can result in asymmetric inductive coupling of return 155 to inductor structure 105. The coupling of other devices and physical features, e.g., metal interconnect, within a physical implementation of circuit 100 in an IC can impact circuit parameters of inductor structure 105 and, accordingly, circuit 100. In illustration, other IC devices and physical features coupling to inductor structure 105 can alter the inductance value of inductor structure 105, thereby shifting the center frequency of circuit 100. Asymmetric coupling of return 155 to inductor structure 105 can affect the inductive value of one of inductors 105a and 105b more significantly than the other, thereby degrading the differential integrity of circuit 100. In addition, asymmetric coupling of common mode noise to inductor structure 105 can couple more of the common mode noise to one of inductors 105a and 105b than the other. The asymmetric coupling of common mode noise, noise that is inherently reduced by a differential circuit, to inductors 105a and 105b can result in common mode noise being converted to differential noise.
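For illustration, the dependence of the oscillation frequency on the L-C tank values noted above can be sketched numerically using the standard relationship f = 1/(2π√(LC)). The following is a minimal Python sketch with hypothetical component values; the specification does not provide actual values for inductor structure 105 or capacitor 110:

```python
import math

def lc_oscillation_frequency(inductance, capacitance):
    """Resonant frequency of an ideal L-C tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Hypothetical illustration values, not taken from the specification.
L_tank = 2e-9   # total inductance of inductor structure 105, in henries
C_tank = 1e-12  # capacitance of capacitor 110, in farads

print(f"Oscillation frequency: {lc_oscillation_frequency(L_tank, C_tank) / 1e9:.2f} GHz")
# Prints approximately 3.56 GHz; a varactor in place of capacitor 110 would
# vary C_tank and, therefore, the oscillation frequency.
```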
FIG. 2 is a first block diagram illustrating a topographical view of an inductor structure 105 in accordance with another embodiment disclosed within this specification. FIG. 2 illustrates a physical layout representation, as implemented within an IC, of the single turn, center tap inductor structure 105 discussed with reference to FIG. 1. As such, like numbers will be used to refer to the same items throughout this specification. Inductor structure 105 can include a coil 205, a center terminal 140, differential terminals (terminals) 210 and 215, a return 155, and an isolation ring 220. Although denoted as four distinct objects for descriptive purposes within this specification, coil 205, center terminal 140, and terminals 210 and 215 are coupled together and represent one continuous area of conductive material. In addition, though implemented as one continuous area or segment of conductive material, coil 205, center terminal 140, and terminals 210 and 215 can be implemented within one or more different conductive layers of the IC. The conductive layers can be coupled together with one or more vias to create one continuous conductive pathway. Coil 205 can be implemented as a symmetrical, single turn coil of inductor structure 105. A centerline 225 can be determined that symmetrically bisects coil 205. Each segment of coil 205 residing on a particular side of centerline 225 can represent a physical layout of one of inductors 105a and 105b as described with reference to FIG. 1. Although implemented as an octagonal coil within FIG. 2, coil 205 can be implemented in any of a variety of forms or shapes that can be implemented using available IC manufacturing processes so long as the symmetry of coil 205 about centerline 225 is retained. As such, the implementation of coil 205 as an octagonal coil within inductor structure 105 is provided for clarity and descriptive purposes only, and is not intended to be limiting. When implemented within an RF differential circuit, e.g., circuit 100 of FIG. 1, inductor structure 105 can receive bias current Ibias at center terminal 140. As noted earlier within this specification, center terminal 140 is located at the midpoint of the length of coil 205, thereby assuring that each side of coil 205 is symmetric and of equal inductive value. Each of terminals 210 and 215 can be coupled to a differential output node of the RF differential circuit in which inductor structure 105 is implemented. As described earlier within this specification, when the RF differential circuit is in a balanced condition, the common mode current ICM that is sourced from each of terminals 210 and 215 is approximately equal to one half of Ibias. As the RF differential circuit switches state, a differential current, Idiff, can alternately flow in either direction within coil 205. As Idiff alternates in direction of flow, the quantity of current associated with Idiff also varies. With the current within coil 205 described in this manner, the current flowing through coil 205 can be represented as the sum of ICM and Idiff flowing through terminals 210 and 215 at any particular time. For example, center terminal 140 can receive a current Ibias of approximately 100 mA. As a result, the common mode current flowing through each of terminals 210 and 215 can be approximately 50 mA. At a time T1, approximately 75 mA can be flowing out of terminal 210 and approximately 25 mA can be flowing out of terminal 215. In that case, at time T1, a differential current of approximately 25 mA flows in coil 205 from terminal 215 to terminal 210.
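The decomposition of the terminal currents into common mode and differential components, as in the example above, can be expressed with a short sketch. The numeric values below are the ones from the example; the function name is illustrative only:

```python
def decompose_branch_currents(i_terminal_a, i_terminal_b):
    """Split two branch currents into common mode and differential components,
    so that i_terminal_a = i_cm + i_diff and i_terminal_b = i_cm - i_diff."""
    i_cm = (i_terminal_a + i_terminal_b) / 2.0
    i_diff = (i_terminal_a - i_terminal_b) / 2.0
    return i_cm, i_diff

# From the example: Ibias = 100 mA splits as 75 mA and 25 mA at time T1.
i_cm, i_diff = decompose_branch_currents(0.075, 0.025)
print(f"ICM = {i_cm * 1e3:.0f} mA, Idiff = {i_diff * 1e3:.0f} mA")  # 50 mA, 25 mA
```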
The distinction between common mode current and differential current is significant to the performance of inductor structure 105 as ICM flows symmetrically on either side of centerline 225 while Idiff flows alternately in either direction across centerline 225. Return 155 can be implemented with a segment of conductive material disposed within a conductive layer of the IC manufacturing process used to implement inductor structure 105. In one embodiment, the length of return 155 can be approximately equal to a diameter of coil 205 at centerline 225 and, further, return 155 can be located substantially within coil 205. The conductive layer in which return 155 is implemented can be a conductive layer that is different from the conductive layer used to implement coil 205, center terminal 140, and/or terminals 210 and 215. Implementing return 155 in this manner prevents one or more of coil 205, center terminal 140, or differential terminals 210 and 215 from being coupled to return 155. Further, through return 155, the current flowing through each side of coil 205 is summed and returned to source 130, which can be located at the end of return 155 adjacent to, or near, center terminal 140. Return 155 can be disposed on centerline 225, thereby symmetrically bisecting inductor structure 105, i.e., coil 205. The implementation of return 155 on centerline 225 assures that current used within inductor structure 105 is routed symmetrically back through inductor structure 105 to the lowest potential. Additionally, the implementation of return 155 on centerline 225 assures that the conductive material used to return the current used within inductor structure 105 back to the lowest potential is routed symmetrically through inductor structure 105. Implementing return 155 in this manner assures that any coupling induced by returning bias current to the lowest potential or by the interconnect material used to return bias current to the lowest potential is symmetrically applied to either side of coil 205 as bisected by centerline 225. Retaining this symmetry allows the retention of the matched inductive properties between each side of coil 205. As each section of coil 205 residing on either side of centerline 225 implements an individual inductor, e.g., inductors 105a and 105b as described with reference to FIG. 1, the matching of the inductive value of each side of coil 205 is required to assure differential signal balance within a circuit implemented with inductor structure 105. Any common mode noise coupled to coil 205 asymmetrically to one side of centerline 225 can be converted to a differential noise that can appear within the differential output signal of any differential circuit in which inductor structure 105 is implemented. Isolation ring 220 can include one or more substrate taps coupled to a segment of conductive material residing within a conductive layer of an IC manufacturing process used to implement inductor structure 105. In another embodiment, the lowest residing conductive layer of the IC manufacturing process, and therefore, the conductive layer vertically closest to the substrate taps, can be used to implement the segment of conductive material that is coupled to the substrate tap(s). The conductive material of isolation ring 220 can be coupled via one or more interconnects to a lowest voltage potential available within the IC in which inductor structure 105 is implemented, e.g., ground. In one aspect, isolation ring 220 can be said to electromagnetically couple to coil 205.
Isolation ring 220 can surround coil 205 at a constant and predetermined distance 230 from an outer perimeter of coil 205. For example, coil 205 and isolation ring 220 can be concentric with respect to one another. Coil 205 and isolation ring 220 further can have a same shape despite isolation ring 220 being sized to surround coil 205 and being implemented within different conductive layers of the IC. As IC inductor structures reside over a conductive substrate material that is common to the entire IC, noise from surrounding devices can be injected into the substrate material residing directly beneath the inductor structure. The coils of an inductor structure are generally implemented within the conductive layer(s) farthest from the substrate layer and are separated from the substrate layer by one or more dielectric layers. Despite this isolation, both inductive and capacitive coupling can exist between the coils of the conventional inductor structure and the underlying substrate. For this reason, isolation rings can be located around the inductor structure and coupled to a common substrate voltage potential such as, for example, the ground potential of the IC. Coupling the substrate underlying the inductor structure to ground improves isolation of the underlying substrate from substrate noise injected by devices surrounding the inductor structure. Typically, the isolation rings used within a conventional IC inductor structure form a continuous substrate ring surrounding the inductor coils of the conventional IC inductor structure. As the conventional isolation ring is continuous, it forms a coil surrounding the coils of the conventional IC inductor structure. As a result, a mutual inductance exists between the coils of the conventional IC inductor structure and the coil formed by the conventional isolation ring. Through mutual inductance, a time varying differential current within the conventional IC inductor structure can generate a magnetic field that induces a current flow within the conventional isolation ring. The current generated within the conventional isolation ring generates a magnetic field that opposes the current flow within the conventional IC inductor structure. This opposing magnetic field reduces the absolute inductive value for the conventional IC inductor structure when operating within a circuit. As such, the mutual inductance between the conventional isolation ring and the conventional IC inductor structure decreases the inductive value of the conventional IC inductor structure. In addition, as the distance between the conventional isolation ring and the coils of the conventional inductor structure decreases, the mutual inductance between the conventional isolation ring and the coils of the conventional inductor structure increases, and the absolute inductive value of the conventional IC inductor structure decreases. The reduction in the inductive value of the conventional IC inductor structure from inductive coupling to the conventional isolation ring can approach 20 percent of the inductive value of the conventional IC inductor structure in the absence of the conventional isolation ring. To counter the effect of the conventional isolation ring on inductance values, isolation ring 220 includes an opening that creates a discontinuity within isolation ring 220. Unlike the conventional isolation ring, isolation ring 220 does not form a continuous coil surrounding coil 205. 
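For illustration, the inductance penalty imposed by a continuous isolation ring can be approximated by treating the ring as a shorted secondary winding with coupling coefficient k, in which case the effective inductance is L_eff = L(1 - k^2). The following is a minimal sketch under that simplifying assumption; the ring's series resistance is neglected, and the stand-alone inductance value is hypothetical:

```python
import math

def effective_inductance(l_coil, k):
    """Effective inductance of a coil coupled to a closed conductive ring,
    modeling the ring as an ideal shorted secondary: L_eff = L * (1 - k**2)."""
    return l_coil * (1.0 - k ** 2)

L_coil = 2e-9  # hypothetical stand-alone inductance of the coil, in henries

# Under this model, the cited reduction approaching 20 percent corresponds to
# a coupling coefficient of roughly sqrt(0.2), or about 0.45.
for k in (0.0, 0.2, math.sqrt(0.2)):
    print(f"k = {k:.2f}: L_eff = {effective_inductance(L_coil, k) * 1e9:.2f} nH")
```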
Ends 240 and 245 of isolation ring 220 are proximate to center terminal 140, e.g., at an opposing end of inductor structure 105 from differential terminals 210 and 215, and are separated by a predetermined distance 235 defining the opening. In another embodiment, the opening can be centered over centerline 225. In that case, each of ends 240 and 245 of isolation ring 220 can be equidistant from centerline 225. As illustrated, the opening is aligned with center terminal 140 on centerline 225, e.g., aligned on a same axis. A portion of isolation ring 220, e.g., a location opposite the opening, can be coupled to return 155, as will be illustrated in greater detail within this specification. The opening in isolation ring 220 can inhibit the circulation of current around isolation ring 220 by breaking the current pathway through isolation ring 220. The decrease in current flow within isolation ring 220 can reduce the impact of inductive coupling between coil 205 and isolation ring 220 upon the inductive value of inductor structure 105. For example, the inclusion of the opening defined by distance 235 between ends 240 and 245 within isolation ring 220 can reduce the effect that any variation in distance 230 has upon the inductive value of inductor structure 105. Similar to the way in which isolation ring 220 forms a coil that interacts with inductor structure 105, segments of conductive material used to interconnect circuit blocks within an IC can form coils that interact with inductor structure 105. In particular, power supply lines within an IC, e.g., VDD and ground, which are typically implemented with large areas of conductive material, are more likely to interact with inductor structure 105. In order to form a coil that interacts with inductor structure 105, a power supply line must bisect coil 205 of inductor structure 105 in a manner that crosses centerline 225. When the power supply line remains on one side of centerline 225, the impact of differential current flowing across differential terminals 210 and 215 on the power supply line is minimal. By allowing a supply line to cross centerline 225, differential currents flowing across coil 205 can induce current within the power supply line that crosses centerline 225. The current induced within the power supply line can generate magnetic fields that affect the inductive value of inductor structure 105. For this reason, when implemented within an IC layout, no power supply line of the IC that crosses centerline 225 can reside within a perimeter defining inductor structure 105 or within a predetermined spacing from the perimeter defining inductor structure 105. In one embodiment, isolation ring 220, e.g., an outer edge of isolation ring 220, can be the perimeter defining inductor structure 105. By implementing return 155, the opening in isolation ring 220, and preventing supply lines from crossing centerline 225 as described, a center tap inductor structure can be implemented that exhibits greater differential symmetry and a more stable inductive value. Through the use of the various structural elements described with reference to FIG. 2, a reduction in the variation of the inductive value of inductor structure 105 to approximately 2 percent of the designed inductive value can be achieved.
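The keep-out rule for supply lines described above lends itself to a simple layout check. The following is a minimal sketch that models centerline 225 as the line x = 0; the function name, coordinates, and keep-out span are hypothetical illustration values and not part of the specification:

```python
def crosses_centerline_keepout(p1, p2, keepout_ymin, keepout_ymax):
    """Return True if the interconnect segment p1->p2 crosses the centerline
    (modeled as the line x = 0) within the keep-out span along the centerline."""
    (x1, y1), (x2, y2) = p1, p2
    if (x1 < 0) == (x2 < 0):
        return False                 # both endpoints on the same side: no crossing
    t = x1 / (x1 - x2)               # interpolation parameter where the segment hits x = 0
    y_cross = y1 + t * (y2 - y1)
    return keepout_ymin <= y_cross <= keepout_ymax

# A supply rail cutting through the structure is flagged; one routed outside
# the keep-out span along the centerline is not.
print(crosses_centerline_keepout((-50, 10), (50, 10), -40, 40))  # True: violation
print(crosses_centerline_keepout((-50, 90), (50, 90), -40, 40))  # False: allowed
```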
FIG. 3 is a second block diagram providing a graphical representation of an inductor structure in accordance with another embodiment disclosed within this specification. More particularly, FIG. 3 illustrates further aspects of inductor structure 105. Accordingly, FIG. 3 is intended to provide a better understanding of electrical and electromagnetic properties of inductor structure 105 and the influence of those properties upon the operation of circuit 100 of FIG. 1. As such, circuit diagram representations of components such as PMOS 115, NMOSs 120 and 125, and VDD are superimposed to illustrate the operational context in which inductor structure 105 exists and operates within circuit 100. Referring to FIG. 3, a drain terminal of PMOS 115 is coupled to center terminal 140 via an interconnect 305. As previously described within this specification, PMOS 115 functions as a current source for the current Ibias from a positive voltage potential of source 130 to circuit 100. When implemented within an IC layout, interconnect 305 can be routed to inductor structure 105 along centerline 225, thereby retaining the structural and current symmetry within inductor structure 105. Differential terminal 210 is coupled to a drain terminal of NMOS 120. A gate terminal of NMOS 120 is coupled to differential terminal 215. Differential terminal 215 is coupled to a drain terminal of NMOS 125. A gate terminal of NMOS 125 is coupled to differential terminal 210. Implemented in this manner, NMOSs 120 and 125 form a cross-coupled differential pair. The source terminal of each of NMOSs 120 and 125 is coupled to interconnect 310. The common mode and differential current flowing through the source terminal of each of NMOSs 120 and 125, when summed within interconnect 310, is approximately equal to Ibias. Interconnect 310 graphically represents the metal interconnect necessary to couple the source terminal of each of NMOSs 120 and 125 to return 155. In order to retain the structural and current symmetry within inductor structure 105, interconnect 310 can be symmetrically coupled to a first end of return 155 adjacent to differential terminals 210 and 215. A second end of return 155, adjacent to center terminal 140, can be coupled to a negative voltage potential of source 130. Interconnect 315 is used to couple return 155 to the negative voltage potential of source 130. Interconnect 315 can be routed out of inductor structure 105 along centerline 225 to source 130. Coupled in this manner, current Ibias can be returned via return 155 and interconnect 315 to source 130. For example, interconnect 315 can be located in a different conductive layer to facilitate routing along centerline 225. In addition, routing interconnects 305, 310, and 315 in this manner assures that current Ibias is routed to flow symmetrically into, and out of, inductor structure 105 along centerline 225. This symmetric routing approach prevents the formation of loops within inductor structure 105 that can couple to coil 205 and vary the total inductance value of inductor structure 105 or inject external noise into inductor structure 105. The coupling of substrate noise can be further minimized by the coupling of isolation ring 220 to return 155 at location 250. Coupling isolation ring 220 to return 155 at location 250 electrically bisects isolation ring 220 into two symmetric segments about centerline 225. Location 250, also representing an electrical node, can correspond to a virtual AC ground of circuit 100 for differential current flowing within coil 205 and, accordingly, induced current within isolation ring 220.
As used within this specification, the term "virtual AC ground," refers to a node of a circuit that is maintained at a steady voltage potential when sourcing or sinking AC current without being directly physically coupled to a reference voltage potential. Coupling isolation ring 220 to the virtual AC ground at location 250 minimizes the ability of isolation ring 220 to form a loop that interacts with any segment of coil 205. In addition, coupling the isolation ring 220 in this manner minimizes the influence of isolation ring 220 upon the inductive value of inductor structure 105. The example illustrated within FIG. 3 is not intended to limit the one or more embodiments disclosed within this specification. For example, various devices illustrated in circuit schematic form can be replaced with one or more other passive and/or active devices. In this regard, for example, differential terminal 210 and differential terminal 215 can be coupled to location 250 and return 155 through one or more active devices, passive devices, or combinations of active and passive devices other than those shown. In general, the devices through which differential terminal 210 couples to return 155 will be the same as the devices through which differential terminal 215 couples to return 155, though this need not be the case. In similar fashion, center terminal 140 can couple to return 155 through one or more other types of circuit elements that are different from those illustrated in FIG. 3. FIG. 4 is a third block diagram providing a side view of an inductor structure in accordance with another embodiment disclosed within this specification. FIG. 4 shows a side view of inductor structure 105 of FIG. 3 taken from the perspective of directional arrow 300. It should be noted that within FIG. 4, being a side view, one or more objects visible within FIG. 3 may not be visible within FIG. 4. Similarly, one or more objects that appear within FIG. 4 may not be visible within FIG. 3. Referring to FIG. 4, three distinct conductive layers are used to implement the elements of inductor structure 105. Although implemented with three conductive layers, inductor structure 105 can be implemented with one or more additional conductive layers. As such, the implementation of inductor structure 105 with three conductive layers within this specification is provided for clarity and descriptive purposes only, and is not intended to be limiting. For example, inductor structure 105 can be implemented using four conductive layers. In that case, coil 205 can be implemented using two adjacent conductive layers coupled together by vias. In this manner, the quality factor, i.e., Q, of coil 205 can be improved by reducing the series resistance associated with coil 205. Continuing with FIG. 4, center terminal 140, coil 205, and differential terminal 210 are implemented with a single continuous segment of conductive material within a conductive layer farthest from a substrate layer shown as substrate 420. It should be noted that within FIG. 4, differential terminal 215 is obstructed from view by differential terminal 210. Return 155 is implemented within a conductive layer residing between the conductive layer used to implement coil 205 and substrate 420. In the example pictured in FIG. 4, return 155 is implemented within a conductive layer between the conductive layer in which coil 205 is implemented and the conductive layer in which isolation ring 220 is implemented. 
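For illustration, the benefit of reducing series resistance, as described above for a coil built from two adjacent conductive layers coupled by vias, can be sketched with the first-order quality-factor expression Q = 2πfL/R. All numeric values below are hypothetical; the sketch simply shows that strapping two identical metal layers in parallel roughly halves the series resistance and doubles Q:

```python
import math

def inductor_q(inductance, r_series, frequency):
    """First-order quality factor of an inductor: Q = 2 * pi * f * L / R."""
    return 2.0 * math.pi * frequency * inductance / r_series

L = 2e-9        # hypothetical coil inductance, in henries
R_single = 2.0  # hypothetical series resistance with one metal layer, in ohms
freq = 5e9      # hypothetical operating frequency, in hertz

# Two identical layers coupled by vias conduct in parallel, halving R.
R_stacked = R_single / 2.0

print(f"Q, single layer: {inductor_q(L, R_single, freq):.1f}")     # ~31.4
print(f"Q, stacked layers: {inductor_q(L, R_stacked, freq):.1f}")  # ~62.8
```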
Generally, inductors are implemented within ICs using the conductive layer(s) farthest from substrate 420. Typically, these upper conductive layer(s) of an IC manufacturing process are thicker than lower residing conductive layers and, as a result, create inductors with lower series resistance and higher Q. Although implemented below coil 205 as shown in FIG. 4, return 155 also can be implemented within one or more conductive layers that reside above the conductive layer(s) used to implement coil 205. As such, the implementation of return 155 as illustrated within FIG. 4 of this specification is provided for clarity and descriptive purposes only, and is not intended to be limiting. Interconnect 305 represents a region of conductive material that couples center terminal 140 of inductor structure 105 to a drain terminal of PMOS 115. Interconnect 310 represents a region of conductive material that couples return 155 to a source terminal of each of NMOSs 120 and 125. Interconnect 315 represents a region of conductive material that couples return 155 to a negative potential of source 130. Although illustrated as a single layer of conductive material, each of interconnects 305, 310, and 315 can be implemented using one or more conductive layers and one or more vias that couple adjacent conductive layers to form each respective one of interconnects 305, 310, and/or 315. As such, the implementation of each of interconnects 305, 310, and 315 with a single region of conductive material within this specification is provided for clarity and descriptive purposes only, and is not intended to be limiting. Isolation ring 220 is implemented within a conductive layer residing closest to substrate 420. Each contact 415 couples a segment of isolation ring 220 to an underlying area of substrate 420. Although implemented within the conductive layer nearest to substrate 420, isolation ring 220 can be implemented within any conductive layer(s) available within an IC manufacturing process. As such, the implementation of isolation ring 220 in the conductive layer closest to substrate 420 and the number of contacts 415 as illustrated within FIG. 4 is not intended to be limiting. Interconnect 410 represents a region of conductive material that couples isolation ring 220 to interconnect 310. Interconnect 410 is coupled to interconnect 310 at location 250 with one or more of vias 425. Accordingly, though illustrated as a single layer of conductive material, interconnect 410 can be implemented as one or more conductive layers that couple interconnect 410 to interconnect 310. In one or more embodiments, location 250 can be located at a virtual AC ground for the circuit in which inductor structure 105 is implemented. In that case, interconnect 410 can be physically connected to isolation ring 220 at location 250 so that isolation ring 220 is symmetrically bisected at a midpoint of the length of isolation ring 220 along the centerline of inductor structure 105. Coupling isolation ring 220 to interconnect 310 in this manner can minimize the size of any loop formed by isolation ring 220 that may couple to a segment of coil 205. Minimizing the coupling of loops formed by isolation ring 220 to coil 205 reduces any variability in the inductance value of inductor structure 105 that may be caused by the proximity of isolation ring 220 to coil 205. FIG. 5 is a fourth block diagram illustrating a two turn center tap inductor structure 500 in accordance with another embodiment disclosed within this specification.
Inductor structure 500 illustrates the use of an isolation ring feature, as described within this specification, as applied to a center tap inductor structure implemented with two or more turns. Inductor structure 500 can be used within differential RF circuits as described with reference to inductor structure 105 of FIG. 1. FIG. 5 illustrates further aspects and performance improvements that result from providing a symmetric opening within an isolation ring surrounding inductor structure 500. Inductor structure 500 can include a coil 505, a center terminal 510, differential terminals (terminals) 515 and 520, and an isolation ring 525. As inductor structure 500 is intended for use within a differential RF circuit, generally, a bias current is received at center terminal 510. The portion 560 of conductive material coupling center terminal 510 to coil 505 can be considered the return, or part of the return, of inductor structure 500 located on centerline 535. Thus, the return shown in FIG. 5 structurally differs from return 155 of FIGs. 1-4 and may correspond, for example, to the interconnection between elements 115 and 140 in FIG. 1. A portion of the bias current is output to the differential RF circuit at each of terminals 520 and 515. Although referenced as four distinct objects for descriptive purposes within this specification, coil 505, center terminal 510, and terminals 515 and 520 are coupled together and represent one continuous area or segment of conductive material. In addition, each of coil 505, center terminal 510, and/or terminals 515 and 520 can be implemented within one or more different conductive layers of the IC. The conductive layers can be coupled together with one or more vias to create one continuous conductive pathway. Coil 505 is implemented as a symmetrical two turn coil of inductor structure 500. A centerline 535 can be determined that symmetrically bisects coil 505. Although implemented as a two turn octagonal coil within FIG. 5, coil 505 can include two or more turns implemented in any of a variety of forms or shapes allowable by an IC manufacturing process so long as the symmetry of coil 505 is retained about centerline 535. As such, the implementation of coil 505 as a two turn octagonal coil within inductor structure 500 is provided for clarity and descriptive purposes only, and is not intended to be limiting. In another embodiment, for example, the turns of coil 505 can be stacked with each turn of the coil implemented within a differing conductive layer of the IC. Each turn implemented within a different conductive layer can be coupled to a turn in an adjacent conductive layer by one or more vias to form a continuous coil. Isolation ring 525 can include one or more substrate taps coupled to a segment of conductive material residing within a conductive layer of an IC manufacturing process used to implement inductor structure 500. In another embodiment, the lowest conductive layer of the IC manufacturing process, and therefore, the conductive layer vertically closest to the substrate taps, can be used to implement the segment of conductive material that is coupled to the substrate tap(s). The conductive material of isolation ring 525 can be coupled through interconnect material within the IC to a lowest voltage potential available within the IC in which inductor structure 500 is implemented, e.g., ground. Isolation ring 525 can surround coil 505 at a constant and predetermined distance from an outer perimeter of coil 505.
In another embodiment, isolation ring 525 can be coupled at the midpoint of the length of isolation ring 525 to a virtual AC ground located within the circuit in which inductor structure 500 is implemented. FIG. 5 illustrates the influence of mutual inductance between coil 505 and isolation ring 525 upon current flow within coil 505 and isolation ring 525. Within coil 505, Idiff, being a time varying differential current, flows from terminal 515 to terminal 520. As noted within this specification, unlike common mode current that flows symmetrically in either direction away from center terminal 510, Idiff flows across inductor structure 500. As such, Idiff flows asymmetrically through inductor coil 505 with respect to centerline 535. The flow of Idiff through inductor structure 500 generates a magnetic field that induces a current within isolation ring 525, denoted as Iring, that flows in the opposite direction of Idiff. Current Iring generates a magnetic field that opposes the flow of current Idiff through coil 505. As previously discussed within this specification, in conventional IC inductor structures the unimpeded flow of current within an isolation ring can impact the inductive value of the conventional IC inductor structure. The opening of length 530 within isolation ring 525 serves to break the pathway for current Iring flowing within isolation ring 525. Within FIG. 5, the length of the arrows used to represent current Iring illustrates the current density of current Iring at various locations within isolation ring 525. Referring to FIG. 5, the current density of current Iring is lowest within locations nearest to the opening and highest within locations farthest from the opening. Accordingly, the magnetic field generated within isolation ring 525 by current Iring is weakest at locations nearest to the opening and strongest at locations farthest from the opening. As a result, the coupling between coil 505 and isolation ring 525 is weakest at locations nearest to the opening and strongest at locations farthest from the opening. Although the current density within isolation ring 525 decreases at locations nearest to the opening, the variation in current density is symmetric within isolation ring 525 on either side of the opening within isolation ring 525 as bisected by centerline 535. Locating the center of the opening along centerline 535 assures the variation in the current density of current Iring and, accordingly, the level of coupling between coil 505 and isolation ring 525, is symmetric within inductor structure 500 with respect to centerline 535. To illustrate the importance of the opening being symmetrically bisected by centerline 535, assume isolation ring 525 is rotated clockwise 90 degrees. As coupling between coil 505 and isolation ring 525 is weakest near the opening, the coupling between coil 505 and isolation ring 525 is weaker within the side of coil 505 that includes terminal 520 than the side of coil 505 that includes terminal 515. The asymmetry in the coupling between coil 505 and isolation ring 525 created by not centering the opening within isolation ring 525 over centerline 535 can lead to the conversion of common mode signals to differential signals. Continuing with the previous illustration in which isolation ring 525 is rotated clockwise 90 degrees, a ground potential coupled to isolation ring 525 can contain a quantity of noise. The noise signal can be coupled by isolation ring 525 to coil 505.
As the rotation of the opening results in asymmetric levels of coupling between each side of coil 505 and isolation ring 525, more of the noise signal is coupled to the side of coil 505 containing terminal 515 than the side containing terminal 520. The difference in the power of the noise signal within one side of coil 505 from the other side of coil 505 appears as a differential noise signal within the circuit in which inductor structure 500 is implemented. Centering the opening of isolation ring 525 over centerline 535 assures that a noise signal is symmetrically coupled to each side of coil 505. In that case, the noise signal appears as a common mode signal which is inherently cancelled at some level by a typical differential circuit. The same coupling asymmetries can also influence the inductive value match of the dual inductors inherent within inductor structure 500. FIG. 6 is a fifth block diagram illustrating a topographical view of an inductor structure 600 for use within an IC in accordance with an embodiment disclosed within this specification. Inductor structure 600 can be implemented within an IC, e.g., as an IC inductor structure. As shown, inductor structure 600 can include a coil 605 and a patterned ground shield (PGS) structure 610. PGS structure 610 can provide isolation from substrate generated noise. Further, PGS structure 610 can serve to improve the "Q" of inductor structure 600. Coil 605 can include a terminal 615, a terminal 620, and an interconnect 625 coupled to coil 605 using a via (not shown). Coil 605 can be implemented within one or more of a variety of process layers of an IC manufacturing process containing a high conductivity material. In an embodiment, coil 605 of inductor structure 600 can be implemented within the process layers containing the most conductive material of the IC manufacturing process. For example, the metal layers of the IC manufacturing process that are farthest from substrate 655 typically are considered highly, if not the most, conductive process layers and can be used to implement coil 605. It should be appreciated that, while illustrated as being formed in a single metal layer, coil 605 can be formed of two or more stacked metal layers that are coupled to one another using one or more vias. Terminals 615 and 620 are located at distal ends of inductor structure 600. Terminals 615 and 620 can be used to couple inductor structure 600 to one or more other circuit elements within the IC in which inductor structure 600 is implemented. To make terminal 620 available outside an outer perimeter of coil 605, interconnect 625 can be formed using a metal layer that is not used to implement any turns of coil 605. Accordingly, the inner-most turn of coil 605 can be coupled to interconnect 625 using one or more vias as noted. PGS structure 610 can be characterized by fingers 640. In an embodiment, coil 605 can be concentric with isolation ring 645 and isolation wall 665. For purposes of illustration, reference to isolation ring 645 within this specification also can refer to any contacts used to couple isolation ring 645 to metal structures located above isolation ring 645 unless otherwise indicated. Within FIG. 6, isolation wall 665 is immediately adjacent to isolation ring 645 with no intervening space. In another embodiment, however, isolation wall 665 can be larger than shown so that a substantially constant distance separates an outer edge of isolation ring 645 and an inner edge of isolation wall 665. 
In still another embodiment, isolation ring 645 can extend beneath isolation wall 665 or be located entirely beneath isolation wall 665 so that isolation ring 645 is not visible from the viewing angle illustrated in FIG. 6. For purposes of illustration, fingers 640 are subdivided into four different groups of substantially parallel fingers illustrated as fingers 640A, fingers 640B, fingers 640C, and fingers 640D. Each finger of each group of fingers 640A-640D can couple to isolation ring 645 via one or more contacts (not shown) on one end of each respective finger 640 and extend inward toward a center of coil 605. Fingers 640A extend down from, and are substantially perpendicular to, a top edge of isolation ring 645. Fingers 640B extend left from, and are substantially perpendicular to, a right edge of isolation ring 645. Fingers 640C extend up from, and are substantially perpendicular to, a bottom edge of isolation ring 645. Fingers 640D extend right from, and are substantially perpendicular to, a left edge of isolation ring 645. Each of fingers 640 can be formed as a metal strip using a process layer that is positioned between the process layer used to form coil 605 and substrate 655. Beneath each linear segment of coil 605, fingers 640 of PGS structure 610 that cross beneath and are in a same group are aligned in parallel with respect to one another. Also, pairs of adjacent fingers in a same group can be separated by a same predetermined distance. In an embodiment, the predetermined distance can be a minimum metal spacing allowed by the IC manufacturing process used to implement inductor structure 600. For example, fingers 640A can be substantially parallel with respect to one another and substantially perpendicular to the linear segments of coil 605 beneath which each of fingers 640A is located. Further, fingers 640A can be separated from one another by a same predetermined spacing. Appreciably, fingers 640A are not perpendicular to the segment of coil 605 that couples directly to terminal 615. Fingers 640B can be substantially parallel with respect to one another and substantially perpendicular to the linear segments of coil 605 beneath which each of fingers 640B is located. Fingers 640B can be separated from one another by a same predetermined spacing. Fingers 640C are substantially parallel with respect to one another and substantially perpendicular to the linear segments of coil 605 beneath which each of fingers 640C is located. Fingers 640C can be separated from one another by a same predetermined spacing. Appreciably, fingers 640C are not perpendicular to the segment of coil 605 that couples directly to terminal 620. Fingers 640D are substantially parallel with respect to one another and substantially perpendicular to the linear segments of coil 605 beneath which each of fingers 640D is located. Fingers 640D can be separated from one another by a same predetermined spacing. Within inductor structure 600, current flow is indicated by arrows 660. Accordingly, each of fingers 640 is oriented substantially perpendicular to the direction of current flow within the segment of coil 605 under which each of fingers 640 is located. By positioning fingers 640 in this manner, the impact of fingers 640 upon the magnetic field generated by the flow of current through coil 605 is reduced.
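For illustration, this orientation rule can be stated geometrically: beneath each linear segment of coil 605, the fingers run along the segment's current-flow direction rotated by 90 degrees. A minimal sketch, with hypothetical coordinates:

```python
import math

def finger_direction(seg_start, seg_end):
    """Unit vector giving the orientation of PGS fingers beneath a linear coil
    segment: the current flows along the segment, so the fingers are oriented
    perpendicular to it (the flow direction rotated by 90 degrees)."""
    dx, dy = seg_end[0] - seg_start[0], seg_end[1] - seg_start[1]
    length = math.hypot(dx, dy)
    tx, ty = dx / length, dy / length  # unit vector along the current flow
    return (-ty, tx)                   # rotated 90 degrees

# Beneath a horizontal coil segment, the fingers run vertically.
print(finger_direction((0.0, 0.0), (10.0, 0.0)))  # (-0.0, 1.0)
```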
Positioning fingers 640 in this manner can increase the efficiency of inductor structure 600 since the energy stored within the magnetic field surrounding the turns of coil 605 is not obstructed or dissipated by PGS structure 610. In effect, fingers 640 of PGS structure 610 provide a continuous shield that resides beneath substantially all portions of coil 605. For example, PGS structure 610 can be implemented to extend to at least the outer perimeter defined by an outer edge of coil 605. In an embodiment, each of fingers 640 of PGS structure 610 can extend a predetermined distance beyond the outer perimeter of coil 605. For example, each of fingers 640 can extend a same distance or length beyond the outer perimeter of coil 605. Isolation wall 665 can be configured to encompass coil 605 and fingers 640. Isolation wall 665 can be implemented with two or more conductive process layers of the IC manufacturing process used to implement inductor structure 600. Isolation wall 665 can be implemented using process layers such as those used to implement coil 605 or fingers 640, for example. In an embodiment, each metal layer of the IC manufacturing process used to implement inductor structure 600 can be stacked vertically to form isolation wall 665. In that case, each pair of vertically adjacent metal layers used to implement isolation wall 665 can be coupled together using one or more vias to form a continuous conductive structure, e.g., wall, around fingers 640. As pictured in FIG. 6, each of fingers 640 can be coupled to isolation ring 645 via one or more contacts. In that case, isolation wall 665 can be excluded if so desired. In another embodiment, isolation wall 665 can be coupled to the end portion of each of fingers 640 that extends beyond the outer perimeter of coil 605. In that case, isolation wall 665 can be coupled to isolation ring 645 via a plurality of contacts, thereby coupling isolation wall 665 and fingers 640 to substrate 655. PGS structure 610 can be coupled to a known potential within the IC in which inductor structure 600 is implemented. In a typical P-type substrate IC process, PGS structure 610 can be coupled to a same ground potential, or most negative potential, to which substrate 655 is coupled. Implemented in this manner, PGS structure 610 can form a ground plane that shields substrate 655 from the electric fields generated by currents flowing within inductor structure 600. In addition, PGS structure 610 can isolate inductor structure 600 from noise generated within substrate 655 by other circuit blocks operating within the IC in which PGS structure 610 is implemented. FIGs. 7-1 and 7-2 are sixth and seventh block diagrams each illustrating a side view of an inductor structure 700 in accordance with another embodiment disclosed within this specification. FIGs. 7-1 and 7-2 show a side view of an inductor structure 700, which can be implemented substantially as described with reference to inductor structure 600 of FIG. 6. FIGs. 7-1 and 7-2 are provided as exemplary illustrations. As such, FIGs. 7-1 and 7-2 are not drawn to the same scale as FIG. 6. Further, FIGs. 7-1 and 7-2 illustrate various aspects of inductor structure 700 that are not visible from the topographical view presented in FIG. 6. FIG. 7-1 illustrates a side view of inductor structure 700 in which the isolation wall, e.g., isolation wall 665 of FIG. 6, is not shown.
As pictured, coil 705 of inductor structure 700 is disposed within an upper metal layer, e.g., a metal layer farther or farthest from substrate 755 of the IC manufacturing process used to implement inductor structure 700. Although pictured in FIG. 7-1 as being implemented using a single metal layer, coil 705 can be implemented using two or more vertically stacked metal layers. In that case, adjacent metal layers of coil 705 can be coupled with one or more vias. It also should be appreciated that coil 705 can be implemented within one or more metal layers located closer, or closest, to substrate 755. Typically, within an IC manufacturing process, metal layers located farther from substrate 755 can be thicker than those that are located closer to substrate 755. Thus, the metal layers farther from substrate 755 tend to have a higher or highest level of conductivity of the available process layers. Therefore, implementing coil 705 in the metal layers farthest from substrate 755 typically provides superior inductor characteristics, for example, lower series resistance for inductor structure 700. However, the implementation of coil 705 with a single conductive layer that is located farthest from substrate 755 as described within this specification is provided for purposes of illustration only, and is not intended as a limitation of the one or more embodiments disclosed herein. Interconnect 725 is coupled to coil 705 with one or more of vias 730. Interconnect 725 can be implemented in a metal layer that is different from the metal layer used to implement coil 705. Using a different metal layer for interconnect 725 allows the end portion of coil 705, i.e., the end of the inner-most turn of coil 705, to be routed out of coil 705 for coupling to additional IC circuit devices. Although pictured within FIG. 7-1 as being implemented with a single metal layer, interconnect 725 can be implemented with two or more vertically stacked layers of metal. In that case, each adjacent layer in the metal stack forming interconnect 725 can be coupled with one or more vias. It should be appreciated that interconnect 725 can be implemented within one or more metal layers located above coil 705, i.e., farther from substrate 755 than coil 705. As such, the implementation of interconnect 725 with a single conductive layer located beneath coil 705 as shown within FIG. 7-1 is provided for purposes of illustration and is not intended as a limitation of the one or more embodiments disclosed herein. Fingers 740 generally are oriented perpendicular to the direction of current flow in the segment of coil 705 under which each of fingers 740 is located. Within FIG. 7-1, only a single group of fingers 740 is illustrated. As shown, each of fingers 740 can be implemented using the metal layer closest to substrate 755. Typically, implementing the PGS structure as close to substrate 755 as possible provides superior isolation between coil 705 and substrate 755. Although illustrated as being implemented in the metal layer closest to substrate 755, fingers 740 can be implemented within any conductive process layer residing between substrate 755 and coil 705. As such, the depiction of fingers 740 being formed in the metal layer closest to substrate 755 within this specification is for purposes of illustration only and is not intended as a limitation of the one or more embodiments disclosed. In an embodiment, each of fingers 740 can couple at one end to isolation ring 745.
Isolation ring 745 can be sized to encompass the entirety of the outer perimeter of coil 705. As shown in FIG. 7-1, each of fingers 740 can couple to isolation ring 745 through one of contacts 760. Isolation ring 745, for example, can be coupled to a ground potential of the IC to create a known constant potential within each of fingers 740 coupled thereto and the portion of substrate 755 located within isolation ring 745. As noted with respect to FIG. 6, reference to isolation ring 745 within this specification also includes contacts 760 (or contacts 705 of FIG. 7-2) unless otherwise indicated or as indicated by context. Isolation ring 745 can be implemented with a low conductivity material such as, for example, a P-type or a P-plus type of diffusion implant. In this manner, each of fingers 740 can be coupled together with low conductance material(s).

FIG. 7-2 illustrates a side view of inductor structure 700 in which isolation wall 765 is shown. Isolation wall 765 can be implemented substantially as described with reference to isolation wall 665 of FIG. 6. As noted, the PGS structure can be implemented with fingers 740 being coupled to isolation ring 745 and, as a result, to substrate 755 (not shown in FIG. 7-2). The PGS structure, however, also can be implemented in a variety of other configurations. For example, within FIG. 7-2, isolation wall 765 is depicted as being coupled to isolation ring 745 using contacts 705. As such, isolation wall 765 is coupled to substrate 755. In an embodiment, fingers 740 of the PGS structure can be directly coupled to isolation wall 765 as opposed to isolation ring 745. Using this approach, the end portion of each finger 740 can be coupled together using a high conductance material of isolation wall 765, e.g., metal. As illustrated in FIG. 7-2, isolation wall 765 can include two or more metal layers 720 that are vertically stacked. Each pair of vertically adjacent metal layers 720 can be coupled together using one or more of vias 775. The inter-coupling of multiple metal layers 720 can create a high conductance layer that can be used to couple adjoining fingers 740 within the PGS structure.

In an embodiment, a highest conductive layer used to form isolation wall 765, e.g., the top metal layer 720 shown in FIG. 7-2, can be located at least as far from substrate 755 as the conductive layer used to form coil 705. For example, the highest metal layer 720 can be formed using a same process layer as is used to form coil 705, but also can be built higher so that the highest process layer of isolation wall 765 is farther from substrate 755 than the process layer used to form coil 705. Further, a lowest conductive layer, e.g., the lowest metal layer 720 shown in FIG. 7-2, used to form isolation wall 765 can be located at least as close to substrate 755 as a process layer used to form fingers 740. For example, a lowest metal layer 720 of isolation wall 765 can be implemented using a same process layer as is used to form fingers 740, but also can be formed using a process layer that is located lower, e.g., closer to substrate 755, than the process layer used to form fingers 740.

FIG. 8 is a graph illustrating the influence of the conductance of the material used to couple fingers of a PGS structure on the inductive and lossy characteristics of an IC inductor structure in accordance with another embodiment disclosed within this specification.
FIG. 8 illustrates the effects of the conductance of the material used to couple individual ones of the fingers of the PGS structure on the inductive value of the inductor structure in which the PGS structure is incorporated, as well as the Q of the inductor structure. The graph of FIG. 8 illustrates an inductance plot 805 and a Q plot 810. The vertical axis is demarcated in nanohenries for inductance plot 805; Q plot 810 is dimensionless. The horizontal axis represents conductivity and is demarcated in units that have been normalized to copper conductivity. The values illustrated by the graph of FIG. 8 are derived from three-dimensional electromagnetic simulations.

In a conventional IC inductor structure that utilizes a metal PGS structure, the entire PGS structure is composed of a single, uninterrupted metal layer, e.g., a metal sheet. The uninterrupted PGS structure effectively isolates the substrate under the conventional inductor structure from the electromagnetic field generated by the AC currents flowing within the coil of the conventional inductor structure. In addition, the uninterrupted PGS structure isolates the conventional inductor structure from noise that can propagate from other circuit blocks that neighbor the conventional inductor structure. Within the conventional inductor structure, however, the magnetic field created by AC currents flowing therein generates currents within the uninterrupted PGS structure. The currents induced within the uninterrupted PGS structure of the conventional inductor structure can result in energy losses that can degrade the Q of the conventional inductor structure.

Referring again to FIG. 8, the correlation between Q and the conductance of the material used to interconnect the fingers of the PGS structure is illustrated. Window 815 shows a region of FIG. 8 in which Q plot 810 is degraded. The degradation of Q associated with the PGS structure results in an inductor structure that is inadequate for radio frequency (RF) IC circuits. Q plot 810 demonstrates that two ranges of conductance for the material used to interconnect the fingers of the PGS structure can result in improved Q for an inductor structure.

Window 820 illustrates that decreased conductance of the material used to interconnect the fingers of the PGS structure corresponds with an increase in Q and inductance of the inductor structure. The increase in Q demonstrated by Q plot 810 and the increase in inductance demonstrated by inductance plot 805 that occur within window 820 result from the low conductance of the material used to interconnect the fingers of the PGS structure preventing currents from flowing between the fingers. The currents that are prevented, or inhibited, from flowing between the fingers of the PGS structure are induced by the electric field that is generated by AC currents within the inductor structure. Preventing the current flow between fingers can decrease resistive losses within the PGS structure, thereby increasing the Q of the inductor structure.

Window 825 illustrates that increased conductance of the material used to interconnect the fingers of the PGS structure corresponds to an increase in Q and a decrease in inductance of the inductor structure. The increase in Q demonstrated by Q plot 810 and the decrease in inductance demonstrated by inductance plot 805 that occur within window 825 result from the high conductance of the material used to interconnect the fingers of the PGS structure significantly reducing the resistance between the fingers. Reducing the inter-finger resistance, e.g., the resistance between fingers, can decrease the resistive losses that occur within the PGS structure, thereby increasing the Q of the inductor structure.
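As a point of reference for the Q discussion above, the quality factor of an inductor at angular frequency ω can be approximated by the standard first-order relationship

Q \approx \frac{\omega L}{R_{\mathrm{eff}}}

where L is the inductance and R_eff is an effective series resistance that lumps the coil's own series resistance together with loss resistances reflected from currents induced in the shield and substrate. Both windows 820 and 825 correspond to interconnect choices that lower R_eff at the frequency of interest, which raises Q.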
FIG. 9 is an eighth block diagram illustrating a topographical view of an inductor structure 900 in accordance with another embodiment disclosed within this specification. FIG. 9 illustrates a physical layout of inductor structure 900 within an IC. As shown, inductor structure 900 is pictured as a two turn, center tap inductor structure. Inductor structure 900 can be implemented to utilize properties illustrated in window 820 of FIG. 8, e.g., low conductivity in the material used to interconnect fingers of the PGS structure.

Inductor structure 900 can include a coil 905, a center terminal 910, differential terminals (terminals) 915 and 920, a circuit block 925, and an isolation ring 945. As shown, circuit block 925 can couple to the linear segments, e.g., "legs," of coil 905 that extend outward past isolation ring 945. Circuit block 925 can couple to legs of coil 905, e.g., to terminals 915 and 920, via one or more connections or terminals, as shown. In an embodiment, a ground metal can be located and implemented under circuit block 925 and can be formed of a low loss material. A return line 960 of conductive material can be coupled to coil 905 in a fashion similar to that shown in FIG. 5. The return line can be positioned on centerline 935.

Although denoted as four distinct objects for descriptive purposes within this specification, coil 905, center terminal 910, and terminals 915 and 920 are coupled together and can represent one continuous area of conductive material. In addition, though implemented as one continuous area or segment of conductive material, coil 905, center terminal 910, and terminals 915 and 920 can be implemented within one or more different conductive, e.g., metal, process layers of the IC. The conductive layers can be coupled together using one or more vias to create one continuous conductive pathway.

Coil 905 can be implemented as a symmetrical, two turn coil of inductor structure 900. A centerline 935 can be determined that symmetrically bisects, or substantially symmetrically bisects, coil 905. Although implemented as an octagonal coil within FIG. 9, coil 905 can be implemented in any of a variety of forms or shapes that can be implemented using available IC manufacturing processes, so long as the symmetry of coil 905 about centerline 935 is retained. As such, the implementation of coil 905 as an octagonal coil within inductor structure 900 is provided for purposes of illustration and is not intended as a limitation of the one or more embodiments disclosed within this specification.

A PGS structure can be implemented between the conductive process layer used to implement coil 905 and substrate 955. The PGS structure can reside beneath, and extend beyond, an outer perimeter defined by an outer edge of coil 905. The PGS structure can include, and thus be characterized by, a plurality of fingers 935 and an isolation ring 945. Each of fingers 935 can be positioned substantially perpendicular to the segment of coil 905 beneath which that finger 935 extends. As such, each finger 935 is substantially perpendicular to the flow of current through that segment of coil 905. As pictured in FIG. 9, each finger 935 of the PGS structure is coupled on one end to isolation ring 945.
Coil 905 can be concentric with isolation ring 945, which is positioned to encompass coil 905 by substantially a constant distance from the outer edge of coil 905. Isolation ring 945 can be located along the distal end of each finger 935 within the PGS structure of inductor structure 900 that extends outward past the outer perimeter of coil 905. Each finger 935 can be coupled to isolation ring 945 on one end, e.g., at the "distal" end, with one or more contacts (not shown). Isolation ring 945 can be formed or composed of a low conductance material having a conductance that is within the range defined by window 820 of FIG. 8. In an embodiment, the material used to implement isolation ring 945 can be a low conductance, P-type diffusion that is implanted within substrate 955. Coupling together fingers 935 of inductor structure 900 with the high resistance material of the P-type diffusion can reduce current flow between fingers 935. Current flowing between fingers 935 would otherwise lead to resistive losses within inductor structure 900. Reducing these resistive losses can improve the Q of inductor structure 900.

FIG. 10 is a ninth block diagram illustrating a topographical view of inductor structure 900 of FIG. 9 in accordance with another embodiment disclosed within this specification. More particularly, FIG. 10 illustrates an embodiment having a physical refinement that can improve the Q of inductor structure 900. Referring to FIG. 10, parallel fingers 1035 are located beneath the segments of coil 905 that extend beyond the turns of coil 905 and isolation ring 945 of the PGS structure of inductor structure 900 to form differential terminals 915 and 920. Fingers 1035, which can be implemented as metal strips substantially similar to fingers 935, can be coupled to linear segments 1045. In an embodiment, linear segments 1045 can couple to isolation ring 945. Linear segments 1045 can be formed of the same low conductance material that is used to form isolation ring 945. Each of fingers 1035 can be coupled to linear segments 1045 through one or more contacts (not shown). In the example pictured in FIG. 10, fingers 1035 can be arranged in two columns, where each column is positioned beneath one leg of coil 905. Each of fingers 1035 is substantially perpendicular to the legs of coil 905.

The addition of fingers 1035 and linear segments 1045 beneath the legs of coil 905 prevents the generation of eddy currents within substrate 955 from electromagnetic fields associated with currents flowing through inductor structure 900. By coupling together fingers 1035 beneath the legs of coil 905 using linear segments 1045, current is prevented from flowing between fingers 1035. Decreasing the ability to generate eddy currents within substrate 955 and preventing resistive losses within fingers 1035 beneath the legs of coil 905 can further reduce losses that can be incurred within inductor structure 900. This reduction in loss further can improve the Q of inductor structure 900.

FIG. 11 is a tenth block diagram illustrating a topographical view of inductor structure 900 of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 11 illustrates a physical refinement of inductor structure 900 that can provide additional Q improvement to inductor structure 900. FIG. 11 illustrates an embodiment that makes use of highly conductive material. The conductive material utilized in FIG. 11, for example, can have a conductivity that is within a range defined by window 825 of FIG. 8.
The conductive material can be used to couple fingers 935 and 1035 within the PGS structure of inductor structure 900. Inductor structure 900 further can include isolation wall 1150. Referring to FIG. 11, the outer perimeter of the PGS structure of inductor structure 900 is surrounded by an isolation wall 1150. As previously described, an isolation wall such as isolation wall 1150 can include two or more metal layers that are vertically stacked. Each pair of vertically adjacent metal layers can be coupled together using one or more vias. The inter-coupling of multiple metal layers creates a high conductance path that can be used to couple adjoining fingers 935 of the PGS structure and fingers 1035. In this manner, each pair of adjoining fingers can be coupled with a high conductance material. In an embodiment, the metal layer(s) used to implement isolation wall 1150 can include one or more or all metal layers of the IC manufacturing process in which inductor structure 900 is implemented. In another embodiment, isolation wall 1150 can include, at least, the metal layers used to implement coil 905 and the metal layer used to implement the PGS structure of inductor structure 900, e.g., the metal layer used to implement fingers 935 and/or 1035. In either case, each pair of vertically adjacent metal layers can be coupled by one or more vias or stacks of vias.

As discussed, using a material that has a conductance within window 825 of FIG. 8 to connect fingers 935 and 1035 of the PGS structure effectively decreases the resistance of the material connecting the metal strips of the PGS structure. Decreasing this resistance decreases resistive losses in the PGS structure of inductor structure 900 and, accordingly, increases the Q of inductor structure 900. The increase in Q associated with window 825 results from the high conductance of the material connecting the fingers 935 and 1035 of the PGS structure, which greatly reduces the resistance between fingers 935 and 1035 within the PGS structure.

In an embodiment, the portion of isolation wall 1150 through which each of the legs of coil 905 crosses can be at least partially discontinuous. More particularly, one or more conductive layers used to form isolation wall 1150 can be discontinued or interrupted so as to allow each of the legs of coil 905 to cross isolation wall 1150. Though one or more conductive layers that form isolation wall 1150 can have a discontinuity to allow each respective leg to pass, it should be appreciated that not all layers of isolation wall 1150 need have a discontinuity or gap. Isolation ring 945 can be located beneath isolation wall 1150. Isolation ring 945 can be coupled to the lowest metal layer used to form isolation wall 1150 using one or more contacts. As noted, isolation ring 945 can be sized as shown, can extend beneath isolation wall 1150, or can be located completely beneath isolation wall 1150 so as to not be visible in the example shown. In an embodiment, fingers 935 and 1035 can be formed using a lowest metal layer that is used to form isolation wall 1150. In this regard, fingers 935 and 1035 can be formed as part of the isolation wall.

FIG. 12 is an eleventh block diagram illustrating a topographical view of inductor structure 900 of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 12 illustrates a further physical refinement of inductor structure 900 that can increase the Q of inductor structure 900.
Typically, inductor structures are used as RF circuit components within an IC. For example, a center tap inductor such as inductor structure 900 is often used when implementing a differential RF voltage controlled oscillator (VCO). In that case, circuit block 925 can be a cross-coupled gm cell that forms the core of the RF VCO. The physical location of the source connections of the gm cell, also representing an electrical node, can correspond to a virtual AC ground of the RF VCO circuit for differential current flowing within coil 905. As used within this specification, the term "virtual AC ground" can refer to a node of a circuit that is maintained at a steady voltage potential when sourcing or sinking AC current without being directly physically coupled to a reference voltage potential.

In an embodiment, circuit block 925 can be repositioned along the linear segments of coil 905. A virtual AC ground can be located within circuit block 925, e.g., at a node where two sources of a differential transistor pair of the gm cell are coupled. The virtual AC ground within circuit block 925 can be coupled to an actual ground of the IC in the same or similar manner as isolation wall 1150, e.g., where multiple conductive layers are vertically coupled through vias to form a low loss path, thereby providing further improvement in Q for inductor structure 900. In addition, shifting the position of circuit block 925 to a location that is substantially adjacent to a portion of isolation wall 1150 where the legs of coil 905 extend beyond the wall can provide additional Q improvement in inductor structure 900. Referring to FIG. 12, the location of circuit block 925 and terminals 915 and 920 has been altered from residing at or about endpoints of the legs of coil 905, e.g., farther away from coil 905 as shown in FIG. 11, to the location illustrated in FIG. 12.

FIG. 13 is a twelfth block diagram illustrating a topographical view of the inductor structure of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 13 illustrates an embodiment in which isolation wall 1150 surrounds only coil 905 of inductor structure 900. Accordingly, whereas fingers 935 can couple to isolation wall 1150 as described with reference to FIGs. 11 and 12, fingers 1035 can couple to linear segments 1045 via a plurality of contacts. Linear segments 1045 can couple to isolation ring 945 disposed beneath isolation wall 1150. In the example pictured in FIG. 13, circuit block 925 is located near the ends of the legs of coil 905. In this regard, terminals 915 and 920 also are located near the ends of the legs, e.g., at approximately a farthest location on the legs away from coil 905.

FIG. 14 is a thirteenth block diagram illustrating a topographical view of the inductor structure of FIG. 9 in accordance with another embodiment disclosed within this specification. FIG. 14, like FIG. 13, illustrates an embodiment in which isolation wall 1150 surrounds only coil 905 of inductor structure 900. Fingers 935 can couple to isolation wall 1150 as described with reference to FIGs. 11 and 12. Fingers 1035 can couple to linear segments 1045 via a plurality of contacts. In the example pictured in FIG. 14, circuit block 925 is located near the ends of the legs of coil 905 that are closest to coil 905. In this regard, terminals 915 and 920 also are located at or about the ends of the legs, e.g., at approximately a closest location on the legs to coil 905.
One or more embodiments disclosed within this specification provide a center tap IC inductor structure that demonstrates improved matching characteristics and improved immunity to coupling effects relative to conventional inductor structures. The IC inductor structure can be built symmetrically with respect to a centerline that bisects the center tap of the IC inductor structure.

In some embodiments, an isolation ring can be built that surrounds the outer perimeter of the coils of the center tap IC inductor structure. The isolation ring can be discontinuous in that the isolation ring can include an opening centered about the centerline. The discontinuity in the isolation ring impedes induced current from flowing within the isolation ring. In the case of a single turn center tap inductor structure, a return line in a different conductive layer than the coil can be added to the inductor structure. In the case of multiple-turn center tap inductor structures, the return line can be in a same conductive layer as the coil. The return line can be centered symmetrically along the centerline and return current can be sourced from the IC inductor structure on a path that symmetrically bisects the single turn coil of the IC inductor structure.

In some embodiments, the center tap inductor structure can include a patterned ground shield including a plurality of fingers implemented within an IC process layer located between the coils of the center tap IC inductor structure and a substrate of the IC. The isolation ring can be coupled to one end of each finger. In some embodiments, an isolation wall comprising a highly conductive material can be formed to encompass the coil and the patterned ground shield. The isolation wall can be coupled to one end of each finger, and/or to the substrate of the IC. The isolation wall can include a plurality of vertically stacked conductive layers, where each pair of adjacent, vertically stacked conductive layers is coupled by a via. A highest conductive layer used to form the isolation wall can be implemented using a process layer at least as far from the substrate of the IC as a process layer used to form the coil. A lowest conductive layer used to form the isolation wall can be implemented using a process layer at least as close to the substrate of the IC as a process layer used to form the plurality of fingers.

The terms "a" and "an," as used herein, are defined as one or more than one. The term "plurality," as used herein, is defined as two or more than two. The term "another," as used herein, is defined as at least a second or more. The terms "including" and/or "having," as used herein, are defined as comprising, i.e., open language. The term "coupled," as used herein, is defined as connected, whether directly without any intervening elements or indirectly with one or more intervening elements, e.g., circuit components such as one or more active and/or passive devices, unless otherwise indicated. Two elements also can be coupled mechanically, electrically, or communicatively linked through a communication channel, pathway, network, or system.

One or more embodiments disclosed within this specification can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the one or more embodiments.
A circuit (102) includes: a parallel data interface (108); and transition control circuitry (110) coupled to the parallel data interface (108). The transition control circuitry (110) is configured to: receive an input bit stream sample (S(m)); determine a bit transformation pattern (510) for the input bit stream sample (S(m)) in accordance with a target criteria; and generate an output bit stream symbol (S'(m)) from the input bit stream sample (S(m)) and the bit transformation pattern (510), wherein the output bit stream symbol (S'(m)) has more bits than the input bit stream sample (S(m)).
CLAIMS
What is claimed is:

1. A circuit, comprising:
a parallel data interface; and
transition control circuitry coupled to the parallel data interface, wherein the transition control circuitry is configured to:
receive an input bit stream sample;
determine a bit transformation pattern for the input bit stream sample in accordance with a target criteria; and
generate an output bit stream symbol from the input bit stream sample and the bit transformation pattern, wherein the output bit stream symbol has more bits than the input bit stream sample.

2. The circuit of claim 1, wherein the parallel data interface includes parallel data interface lanes configured to transmit bits of the output bit stream symbol in parallel, and the target criteria minimizes bit transitions on the parallel data interface lanes.

3. The circuit of claim 1, wherein the parallel data interface includes parallel data interface lanes configured to transmit bits of the output bit stream symbol in parallel, and the target criteria minimizes variance in a number of bit transitions on the parallel data interface lanes.

4. The circuit of claim 1, wherein the transition control circuitry has a parallelized topology configured to generate multiple output bit stream symbols at a time.

5. The circuit of claim 1, wherein the transition control circuitry has a serialized topology configured to generate multiple output bit stream symbols within a clock cycle.

6. The circuit of claim 1, wherein the transition control circuitry has a pipelined topology with a transition optimizer, an index remapper, and a bit stream transformer, and the transition control circuitry is configured to perform transition optimizer operations, index remapper operations, and bit stream transformer operations in different clock cycles to generate the output bit stream symbol.

7. The circuit of claim 1, wherein the transition control circuitry includes an index remapper configured to generate the bit transformation pattern based on a previous bit transformation pattern.

8. The circuit of claim 1, wherein the transition control circuitry is configured to: generate a plurality of candidate bit transformation patterns; and select one of the candidate bit transformation patterns based on the target criteria.

9. The circuit of claim 1, wherein the parallel data interface and the transition control circuitry are components of an integrated circuit that includes an analog-to-digital converter (ADC) or digital-to-analog converter (DAC) adapted to be coupled to another circuit via the parallel data interface.

10. The circuit of claim 1, wherein the parallel data interface and the transition control circuitry are components of an integrated circuit that includes a baseband processor adapted to be coupled to another circuit via the parallel data interface.

11. A system, comprising:
a first electronic circuit; and
parallel data interface lanes coupled to the first electronic circuit and adapted to be coupled to a second electronic circuit, wherein the first electronic circuit is configured to:
receive an input bit stream sample;
determine a bit transformation pattern for the input bit stream sample in accordance with a target criteria;
generate an output bit stream symbol from the input bit stream sample and the bit transformation pattern, wherein the output bit stream symbol has more bits than the input bit stream sample; and
provide the output bit stream symbol to the parallel data interface lanes.
12. The system of claim 11, wherein the first electronic circuit includes an analog front-end (AFE) and the second electronic circuit includes a baseband processor.

13. The system of claim 11, wherein the first and second electronic circuits are on different integrated circuits.

14. The system of claim 11, wherein the first and second electronic circuits are on different chiplets of an integrated circuit.

15. The system of claim 11, wherein the parallel data interface lanes are configured to transmit bits of the output bit stream symbol in parallel, and the target criteria minimizes bit transitions on the parallel data interface lanes.

16. The system of claim 11, wherein the parallel data interface lanes are configured to transmit bits of the output bit stream symbol in parallel, and the target criteria minimizes variance in a number of bit transitions on the parallel data interface lanes.

17. The system of claim 11, wherein the first electronic circuit includes transition control circuitry that has a parallelized topology configured to generate multiple output bit stream symbols at a time.

18. The system of claim 11, wherein the first electronic circuit includes transition control circuitry that has a serialized topology configured to generate multiple output bit stream symbols within a clock cycle.

19. The system of claim 11, wherein the first electronic circuit includes transition control circuitry that has a pipelined topology with a transition optimizer, an index remapper, and a bit stream transformer, and the transition control circuitry is configured to perform transition optimizer operations, index remapper operations, and bit stream transformer operations in different clock cycles to generate the output bit stream symbol.

20. A method, comprising:
receiving an input bit stream sample;
determining a bit transformation pattern for the input bit stream sample in accordance with a target criteria;
generating an output bit stream symbol from the input bit stream sample and the bit transformation pattern, wherein the output bit stream symbol has more bits than the input bit stream sample; and
providing the output bit stream symbol to parallel data interface lanes.

21. The method of claim 20, wherein the target criteria minimizes bit transitions on the parallel data interface lanes.

22. The method of claim 20, wherein the target criteria minimizes variance in a number of bit transitions on the parallel data interface lanes.
BIT STREAM TRANSFORMATION IN PARALLEL DATA INTERFACES

BACKGROUND

[0001] As new electronic devices are developed and integrated circuit (IC) technology advances, new IC products are commercialized. One example IC product for electronic devices includes one or more circuits configured to communicate via a parallel data interface. The parallel data interface of the IC may be used for communications between different circuits or chiplets of the IC, or between different ICs (chips). Issues resulting from parallel data interfaces include: power supply ripple may be introduced to sensitive sub-systems of an IC due to transitions on the related parallel data interface lanes; power consumption of the IC increases as the number of transitions on the parallel data interface lanes increases; and simultaneous switching related to transitions on the parallel data interface lanes negatively impacts output driver performance.

[0002] An example IC with a parallel data interface includes a radio frequency (RF) sampling transceiver with an analog front-end (AFE) or other components that are sensitive to power supply ripple. In one example, the parallel data interface couples the AFE to a baseband processor such as a field-programmable gate array (FPGA) or application-specific IC (ASIC). In this example, the parallel data interface transfers complex baseband I/Q samples between the AFE and the baseband processor by mapping the samples to a parallel bit stream (e.g., as 16-bit input words or symbols) and transferring the mapped input words through the parallel data interface lanes. As an example, the parallel data interface may convert 1 gigasample per second (Gsps) of I/Q samples (e.g., each sample corresponding to a 16-bit symbol or other multi-bit symbol) to a 16-bit parallel stream at 2 gigabits per second (GBPS). In this scenario, the AFE is sensitive to power supply ripple caused by data dependent transitions in complementary metal oxide semiconductor (CMOS) switches for the parallel data interface lanes. Also, the power consumption of the parallel data interface increases with the toggle factor (i.e., the average number of transitions).

[0003] One conventional approach to reducing power supply ripple uses differential traces, instead of single-ended traces, for each of the parallel data interface lanes. This would partly mitigate the power supply ripple because a transition (e.g., from the power supply voltage (VDD) to ground) in one trace of a differential pair is accompanied by an opposite transition in another trace. However, differential traces double the number of traces required (e.g., 512 lanes instead of 256 lanes for 8 receiver channels), and are not practical given the large number of parallel interface lanes to be supported. Another conventional approach introduces extra logic to perform dummy transitions (e.g., from ground to VDD) at each clock edge, which cancels the switching activity in each of the interface lanes. However, this dummy transition technique may not sufficiently mitigate the power supply ripple because the dummy transitions provided by the extra logic will not be propagated to the ports and the trace load would not be matched. Also, the dummy transition technique increases power consumption.

SUMMARY

[0004] In one example embodiment, a circuit comprises: a parallel data interface; and transition control circuitry coupled to the parallel data interface.
The transition control circuitry is configured to: receive an input bit stream sample; determine a bit transformation pattern for the input bit stream sample in accordance with a target criteria; and generate an output bit stream symbol from the input bit stream sample and the bit transformation pattern, wherein the output bit stream symbol has more bits than the input bit stream sample.

[0005] In another example embodiment, a system comprises: a first electronic circuit; and parallel data interface lanes coupled to the first electronic circuit and adapted to be coupled to a second electronic circuit. The first electronic circuit is configured to: receive an input bit stream sample; determine a bit transformation pattern for the input bit stream sample in accordance with a target criteria; generate an output bit stream symbol from the input bit stream sample and the bit transformation pattern, wherein the output bit stream symbol has more bits than the input bit stream sample; and provide the output bit stream symbol to the parallel data interface lanes.

[0006] In yet another embodiment, a method comprises: receiving an input bit stream sample; determining a bit transformation pattern for the input bit stream sample in accordance with a target criteria; generating an output bit stream symbol from the input bit stream sample and the bit transformation pattern, wherein the output bit stream symbol has more bits than the input bit stream sample; and providing the output bit stream symbol to parallel data interface lanes.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] FIG. 1 is a block diagram of a system in accordance with an example embodiment.
[0008] FIG. 2 is a graph of bit transition distribution as a function of relative frequency for a parallel data interface in accordance with a conventional technique.
[0009] FIG. 3 is a block diagram of a system in accordance with an example embodiment.
[0010] FIG. 4 is a block diagram of a transmitter with transition control circuitry in accordance with an example embodiment.
[0011] FIG. 5 is a block diagram of a bit stream transformer in accordance with an example embodiment.
[0012] FIG. 6 is a block diagram of a transition optimizer in accordance with an example embodiment.
[0013] FIG. 7 is a block diagram of an inverse bit stream transformer in accordance with an example embodiment.
[0014] FIG. 8 is a block diagram of a receiver in accordance with an example embodiment.
[0015] FIG. 9 is a graph of bit transition distribution as a function of relative frequency comparing raw bit transitions and coded bit transitions due to a bit stream transformer in accordance with an example embodiment.
[0016] FIG. 10 is a table of bit transformation patterns in accordance with an example embodiment.
[0017] FIG. 11 is a graph of bit transition distribution as a function of relative frequency comparing raw bit transitions and coded bit transitions due to a bit stream transformer in accordance with another example embodiment.
[0018] FIGS. 12A and 12B are block diagrams of transmitters with parallelized implementations of transition control circuitry in accordance with example embodiments.
[0019] FIG. 13 is a block diagram of a serialized implementation of transition control circuitry in accordance with an example embodiment.
[0020] FIG. 14 is a block diagram of another parallelized implementation of transition control circuitry in accordance with an example embodiment.
[0021] FIG. 15 is a block diagram of a modified transition optimizer in accordance with an example embodiment.
[0022] FIG. 16 is a block diagram of transition control circuitry in accordance with another example embodiment.
[0023] FIG. 17 is a flowchart of a bit transition control method in accordance with an example embodiment.
[0024] FIG. 18 is a table of bit transformation patterns in accordance with another example embodiment.
[0025] FIG. 19 is a table of bit transformation patterns in accordance with yet another example embodiment.
[0026] FIG. 20 is a graph of a spectrum of bit stream switching (magnitude as a function of frequency) comparing raw bits and transformed bits due to transition control circuitry for a parallel data interface in accordance with an example embodiment.
[0027] FIG. 21 is a graph of a spectrum of bit stream switching (magnitude as a function of frequency) comparing raw bits and transformed bits due to transition control circuitry for a parallel data interface in accordance with an example embodiment.
[0028] The same reference numbers (or other reference designators) are used in the drawings to designate the same or similar (structurally and/or functionally) features.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0029] Some example embodiments include transition control circuitry for use with a parallel data interface between electronic circuits (e.g., integrated circuits (ICs), chiplets, multi-chip modules (MCMs), systems-on-a-chip (SoCs), circuitry on a printed circuit board (PCB), combinations thereof, etc.). The transition control circuitry is configured to mitigate undesirable power supply ripple and/or power consumption due to bit transitions (changing a bit value from "0" to "1", or from "1" to "0") on the parallel data interface. More specifically, power supply ripple is due to variance in the number of bit transitions for each output symbol transmitted via the parallel data interface. Meanwhile, power consumption increases as the total number of bit transitions for all output symbols transmitted by the parallel data interface increases. In one example embodiment, a parallel data interface and transition control circuitry are used for communications between a first electronic circuit and a second electronic circuit. In different example embodiments, the first and second electronic circuits vary. Example first and/or second electronic circuits include baseband processors, field-programmable gate arrays (FPGAs), application-specific ICs (ASICs), memories, data samplers, transceivers, peripherals, analog-to-digital converters (ADCs), digital-to-analog converters (DACs), analog front-ends (AFEs), and/or other circuits with a parallel data interface.

[0030] In one example embodiment, the transition control circuitry is configured to reduce bit transition variance on the parallel data interface to mitigate power supply ripple to components of the first and/or second electronic circuits relative to an electronic circuit having a parallel data interface without the transition control circuitry.
In other example embodiments, the transition control circuitry is configured to reduce the total number of bit transitions on the parallel data interface to reduce power consumption of the first and/or second electronic circuits relative to an electronic circuit without the transition control circuitry.

[0031] In some example embodiments, the transition control circuitry uses a few extra parallel data interface lanes (e.g., n-k extra interface lanes) to transfer k-bit words as n-bit symbols, where k is a first integer and n is a second integer larger than k. In one example, n = 20 and k = 16, where n parallel data interface lanes are used to transmit the k-bit words using n-bit symbols. In this example, there are 4 (n-k) extra parallel data interface lanes, which gives the transition control circuitry flexibility to optimize bit transitions on the parallel data interface to: reduce bit transition variance on the parallel data interface lanes; or reduce the total number of bit transitions on the parallel data interface lanes.

[0032] In some example embodiments, the transition control circuitry includes: a bit stream transformer configured to map the k-bit input words into n-bit output symbols; and a transition optimizer configured to select an optimal bit transformation pattern in accordance with a target criteria (e.g., reduce bit transition variance and/or reduce the total number of bit transitions). In one example embodiment, the optimal bit transformation pattern is based on a previously transferred n-bit output symbol and a current k-bit input word. On the receiver side, complementary transition control circuitry (relative to the transmitter side circuitry) receives the n-bit symbols and uses an inverse bit stream transformer to recover each k-bit input word.

[0033] In some example embodiments, the transition optimizer determines the bit transformation pattern based on a figure of merit approach. As used herein, "figure of merit" refers to comparing the performance of multiple options relative to a target criteria (e.g., a minimum variance in the number of bit transitions for output symbols resulting from application of bit transformation patterns, or a minimum number of total bit transitions for output symbols resulting from application of bit transformation patterns), and selecting the option with the best performance. In one example, the figure of merit uses a target criteria that minimizes the total number of bit transitions for the parallel data interface. In another example, the figure of merit uses a target criteria that minimizes bit transition variance (targeting the same number of transitions for each of the n-bit output symbols). A minimal sketch of this selection process is given below.

[0034] In some example embodiments, the transition control circuitry prepares output symbols for parallel data transfers between first and second electronic circuits with one or more of the following design targets: minimize the number of bit transitions on the parallel data interface to reduce power consumption; maintain the number of bit transitions on the parallel data interface constant (reduce the standard deviation of bit transitions) to reduce power supply ripple; coding and decoding complexity should be reasonable; and support error detection if possible (e.g., mark errors in the data stream). By maintaining the number of bit transitions constant (or reducing bit transition variance), the transition control circuitry reduces coupling of power supply ripple by reducing the variance of switching activity (e.g., the switching activity of high current complementary metal oxide semiconductor (CMOS) switches due to transitions) to analog sub-systems and/or other sensitive sub-systems of an IC. Other benefits of the transition control circuitry may include: better analog performance in terms of signal-to-noise ratio (SNR) and spurious noise reduction; reduced power supply interference; and reduced simultaneous switching noise (SSN).
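The following sketch illustrates one way the figure-of-merit selection described above could work for k = 16 and n = 20. The specific mask set, the XOR-based transformation, and the placement of the mask index in the top n-k bits are illustrative assumptions; the embodiments do not prescribe this particular format.

```python
K, N = 16, 20                        # k-bit input words, n-bit output symbols
NUM_PATTERNS = 1 << (N - K)          # 2**(n-k) = 16 candidate patterns

# Hypothetical bit transformation patterns: each 4-bit pattern index is
# repeated across the 16 data bits to form an XOR mask.
MASKS = [int(f"{i:04b}" * 4, 2) for i in range(NUM_PATTERNS)]

def transitions(a: int, b: int) -> int:
    """Count the lanes that toggle between consecutive symbols a and b."""
    return bin(a ^ b).count("1")

def encode(word: int, prev_symbol: int, minimize_total: bool = True) -> int:
    """Figure-of-merit selection: try every candidate pattern and keep the
    candidate symbol that best meets the target criteria."""
    best_symbol, best_cost = 0, None
    for idx, mask in enumerate(MASKS):
        candidate = (idx << K) | (word ^ mask)   # index bits + masked data bits
        t = transitions(candidate, prev_symbol)
        # Target criteria: fewest transitions (power), or closest to a fixed
        # per-symbol target of N // 2 transitions (reduced variance/ripple).
        cost = t if minimize_total else abs(t - N // 2)
        if best_cost is None or cost < best_cost:
            best_symbol, best_cost = candidate, cost
    return best_symbol
```

Because the chosen pattern index travels with the symbol in this sketch, the receiver needs no side channel to undo the transformation.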
[0035] The described transition control circuitry options and related operations are able to reduce the variance in the number of bit transitions on the parallel data interface for each of the n-bit output symbols at the expense of some extra lanes. With the transition control circuitry, there can be a tradeoff between reducing variance in the number of bit transitions and reducing the total number of transitions (i.e., reduce noisy interference or power supply ripple at the expense of increased power consumption). In some example embodiments, an additional parity bit on the parallel data interface is used with the transition control circuitry to reduce the chances of undetected data corruption. Also, different versions of the described encoding algorithm can be developed using different transformation patterns (sometimes called "masks" herein) and/or selection criteria for bit transition targets at the transmitter. Regardless of the particular encoding algorithm used, the receiver implementation is simple and involves only an unmasking operation to recover the original input words.

[0036] In the figures, blocks are sometimes used to describe circuits, components, or related operations. In different embodiments, these blocks could be combined or further divided without changing the intended functionality described. Without limitation, such blocks may represent hardware, firmware, and/or software. In some example embodiments, the blocks represent instructions stored in memory and executable by a processor to perform the functionality described. As desired, it is possible to implement described operations using different combinations of logic, hardware, firmware, and/or software.

[0037] FIG. 1 is a block diagram of a system 100 in accordance with an example embodiment. As shown, the system 100 includes a first electronic circuit 102 (e.g., an IC, a chiplet, an MCM, an SoC, or circuitry on a PCB) in communication with a second electronic circuit (e.g., an IC, a chiplet, an MCM, an SoC, or circuitry on a PCB). In the example of FIG. 1, the first electronic circuit 102 includes an RF sampling transceiver 104. In other example embodiments, the first circuit includes a baseband processor, FPGA, ASIC, memory, data sampler, ADC, DAC, peripheral, AFE, and/or other circuit with a parallel data interface.

[0038] The RF sampling transceiver 104 includes an AFE 106 and a parallel data interface 108. The parallel data interface 108 includes hardware, firmware, and/or software configured to: prepare input data for transmission via parallel data interface lanes 116; and/or recover data from symbols received from the parallel data interface lanes 116.
The RF sampling transceiver 104 also includes an input bit stream source 115 coupled to the parallel data interface 108 and configured to provide an input bit stream or related samples to the transition control circuitry 110 of the parallel data interface 108. Without limitation to other example embodiments, the input bit stream source 115 may provide complex baseband I/Q samples from the AFE 106 to the transition control circuitry 110 for transfer to a second electronic circuit (e.g., a baseband processor). In different example embodiments, the input bit stream source 115 may be a serial communication interface or parallel communication interface between the AFE 106 and the parallel data interface 108. In some example embodiments, the first electronic circuit 102 is in communication with one or more wireless transceivers (not shown), which provide data to the RF sampling transceiver 104 as analog signals. The RF sampling transceiver 104 converts received analog signals to digital form, resulting in an input bit stream being buffered, stored, and/or otherwise provided to the transition control circuitry 110 by the input bit stream source 115. As described herein, samples of the input bit stream are converted to output symbols, in accordance with a bit transformation pattern, and are transferred to the second electronic circuit 122 via the parallel data interface lanes 116.

[0039] The second electronic circuit 122 is configured to recover the input bit stream samples from the output symbols. The recovered input bit stream samples are combined as appropriate and then processed, analyzed, stored, and/or forwarded by the second electronic circuit 122. In different example embodiments, the second electronic circuit 122 may generate input bit streams (related to and/or unrelated to a recovered input bit stream) for transmission to the first electronic circuit 102 via the parallel data interface 128 and the parallel data interface lanes 116. The first electronic circuit 102 is configured to recover input bit streams from the second electronic circuit 122, prepare the recovered input bit streams for wireless transmission as appropriate, and transmit the input bit streams as analog signals to other wireless transceivers. In other example embodiments, the first electronic circuit 102 and the second electronic circuit 122 vary from the RF transceiver and baseband processor example given.

[0040] The parallel data interface 108 is coupled to or includes transition control circuitry 110, which operates to: mitigate power supply ripple due to variance in the number of bit transitions on the parallel data interface 108 relative to an electronic circuit having a parallel data interface without the transition control circuitry 110; and/or reduce power consumption due to bit transitions on the parallel data interface 108 relative to an electronic circuit having a parallel data interface without the transition control circuitry 110. In the example of FIG. 1, the transition control circuitry 110 includes a bit stream transformer 112 (labeled "BST") configured to prepare output symbols for parallel data transmission operations. In one example embodiment, each output symbol is prepared from an input word and a bit transformation pattern. In some example embodiments, the transition control circuitry 110 also includes an inverse bit stream transformer 114 (labeled "IBST") to recover input words from output symbols received via the parallel data interface lanes 116.
In one example, the inverse bit stream transformer 114: receives or determines the bit transformation pattern used by the bit stream transformer 132 of the second electronic circuit 122; and uses the bit transformation pattern to perform an inverse transformation and recover input words from the received output symbols. As described herein, the transition control circuitry 110 may include other components such as circuitry to support extra parallel data interface lanes (n-k extra interface lanes in addition to k interface lanes) to transfer n-bit output symbols, a transition optimizer, a delay block, and/or other components.

[0041] In the example of FIG. 1, the first electronic circuit 102 is coupled to a second electronic circuit 122 via parallel data interface lanes 116. As shown, the second electronic circuit 122 may include a processor 124 (e.g., a baseband processor) and a parallel data interface 128 (which, in some example embodiments, is implemented to be the same as or similar to parallel data interface 108). In other example embodiments, the second electronic circuit 122 includes a FPGA, ASIC, memory, data sampler, transceiver, peripheral, ADC, DAC, AFE, and/or other circuit with a parallel data interface. As shown, the second electronic circuit 122 also includes an input bit stream source 135 coupled to the transition control circuitry 130 and configured to provide an input bit stream or related samples to the transition control circuitry 130. Without limitation to other example embodiments, the input bit stream source 135 may provide an input bit stream generated by the processor 124 to the transition control circuitry 130 for transfer to the first electronic circuit 102. The input bit stream may be generated by the processor 124, for example, in response to instructions stored in memory and executed by the processor 124 and/or in response to data received from the first electronic circuit 102. In different example embodiments, the input bit stream source 135 may be a serial communication interface or parallel communication interface between the processor 124 and the transition control circuitry 130.

[0042] The processor 124 may be any processing system or sub-system configured to process data transmitted to and from the AFE 106. In some example embodiments, the processor 124 is a baseband processor, FPGA, or ASIC. In different example embodiments, the partitioning between hardware and firmware for the processor 124 may vary.

[0043] The parallel data interface 128 of the second electronic circuit 122 includes hardware, firmware, and/or software configured to: prepare input data for transmission via parallel data interface lanes 116; and/or recover data from symbols received from the parallel data interface lanes 116. In the example of FIG. 1, the parallel data interface 128 is coupled to or includes transition control circuitry 130. The transition control circuitry 130 includes a bit stream transformer 132 configured to encode input words with additional bits to provide output symbols (encoded input words) to the parallel data interface lanes 116. In some example embodiments, the transition control circuitry 130 also includes an inverse bit stream transformer 134 configured to decode the output symbols received from the parallel data interface lanes 116 and recover the related input words. As described herein, the transition control circuitry 130 may include other components, such as circuitry to support extra parallel data interface lanes (n-k extra interface lanes in addition to k interface lanes) to transfer n-bit output symbols, a transition optimizer, a delay block, and/or other components.
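Continuing the illustrative mask scheme sketched earlier (an assumption, not a format fixed by the embodiments), the inverse bit stream transformer reduces to reading back the pattern index and unmasking:

```python
K, N = 16, 20
MASKS = [int(f"{i:04b}" * 4, 2) for i in range(1 << (N - K))]  # as in the encoder sketch

def decode(symbol: int) -> int:
    """Recover the k-bit input word from a received n-bit output symbol."""
    idx = symbol >> K                  # pattern index carried in the top n-k bits
    data = symbol & ((1 << K) - 1)     # the k masked data bits
    return data ^ MASKS[idx]           # unmasking restores the original word
```

This matches the earlier observation that, regardless of the transmitter's selection criteria, the receiver implementation involves only an unmasking operation.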
[0044] In some example embodiments, parallel data transmissions are one-way. As an example, if one-way parallel data transmissions are used, the transition control circuitry 110 may include only the bit stream transformer 112 (the inverse bit stream transformer 114 is omitted) while the transition control circuitry 130 includes only the inverse bit stream transformer 134 (the bit stream transformer 132 is omitted). In another example, the transition control circuitry 110 may include only the inverse bit stream transformer 114 (the bit stream transformer 112 is omitted) while the transition control circuitry 130 includes only the bit stream transformer 132 (the inverse bit stream transformer 134 is omitted). In other example embodiments, parallel data transmissions are two-way with each electronic circuit including a respective bit stream transformer and inverse bit stream transformer as in FIG. 1.

[0045] The RF sampling transceiver 104 of the first electronic circuit 102 may be used in a number of wireless applications (e.g., wireless communications or radar) to support high channel count (e.g., 8 transmitter and 8 receiver chains) and wide bandwidth (e.g., 1.2 GHz) operations. In one example, the RF sampling transceiver 104 samples an RF signal with multi-gigasample per second (Gsps) high-performance data converters. An example data converter includes a 14-bit, 4 Gsps analog-to-digital converter (ADC) and/or a 12 Gsps digital-to-analog converter (DAC). These example embodiments, where converters with higher sampling rates are used, allow the AFE 106 to avoid the need for mixers. With the parallel data interfaces 108 and 128, large amounts of data are transferred between the RF sampling transceiver 104 and the processor 124 (e.g., a FPGA or ASIC). In one example embodiment, the system 100 transfers data using 16-bit, 1 Gsps I/Q samples (~800 MHz bandwidth or BW) for 8 receiver (RX) channels and needs 256 gigabits per second (GBPS) of equivalent data interface. The system throughput is proportional to the sampling rate (BW of interest), the bits per sample (real/imaginary), and the number of channels. In an N channel system, there will be N streams of data to be sent/received. In different example embodiments, the number of channels, the sampling rate, and/or BW may vary.
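As a quick arithmetic check of the 256 GBPS figure (assuming 16 bits each for the I and Q components of every complex sample):

```python
# Equivalent interface throughput for the example above.
sampling_rate_sps = 1e9        # 1 Gsps per channel
bits_per_component = 16        # 16-bit samples
components_per_sample = 2      # I and Q (real/imaginary)
num_channels = 8               # 8 RX channels

throughput_bps = (sampling_rate_sps * bits_per_component
                  * components_per_sample * num_channels)
print(f"{throughput_bps / 1e9:.0f} Gbps")  # -> 256 Gbps
```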
In some example embodiments, each of the transition control circuitry 110 and the transition control circuitry 130 is configured to encode input words (generate output symbols) for parallel data transmissions by: obtaining k-bit input words (e.g., a 16-bit input word); and mapping each k-bit input word to a respective n-bit output symbol (e.g., a 20-bit output symbol). This process results in n-k redundant bits (e.g., 4 redundant bits if k = 16 and n = 20), where each successive transmitted output symbol has a target number of transitions. Without the transition control circuitry, the transmission of uncoded k-bit words results in an average of k/2 bit transitions on the parallel data interface, and the standard deviation will be large. With transition control circuitry (e.g., the transition control circuitry 110 or 130), one option is to reduce the standard deviation of bit transitions for parallel data transmissions, where input words are encoded as output symbols to achieve a target number of transitions on the parallel data interface (e.g., the parallel data interfaces 108 or 128). Another option is to reduce the total number of bit transitions for parallel data transmissions, where input words are encoded as output symbols to achieve a minimum number of bit transitions on the parallel data interface (e.g., the parallel data interfaces 108 or 128). To perform parallel data recovery operations, the transition control circuitry 110 or 130 performs reverse operations (to reverse the encoding process) to decode the received output symbols and recover the intended input words.

[0047] FIG. 2 is a graph 200 of bit transition distribution as a function of relative frequency for a parallel data interface in accordance with a conventional technique. For graph 200, a parallel data interface without transition control circuitry is assumed. As shown, the graph 200 shows a wide spread of bit transitions, with 8 transitions as the average number of transitions. As noted herein, for a parallel data interface, a large number of bit transitions as well as a large variance (spread) in transitions is undesirable. More specifically, an increasing number of transitions increases power consumption, and an increasing variance in the number of bit transitions increases power supply ripple.

[0048] FIG. 3 is a block diagram of an IC system 300 (an example of the system 100) in accordance with an example embodiment. In FIG. 3, the IC system 300 is an MCM or IC with a first electronic circuit 102A (an example of the first electronic circuit 102 in FIG. 1) and a second electronic circuit 122A (an example of the second electronic circuit 122 in FIG. 1) in communication via parallel data interface lanes 306.

[0049] In some example embodiments, the first electronic circuit 102A includes an RF sampling transceiver (e.g., the RF sampling transceiver 104 in FIG. 1). In other example embodiments, the first electronic circuit 102A includes a baseband processor, FPGA, ASIC, a memory, a data sampler, a peripheral, AFE, ADC, DAC, and/or another circuit with a parallel data interface. As shown, the first electronic circuit 102A includes a parallel data interface 108A (an example of the parallel data interface 108 in FIG. 1) and transition control circuitry 110A (an example of the transition control circuitry 110 in FIG. 1). In some example embodiments, the second electronic circuit 122A includes a processor (e.g., the processor 124 in FIG. 1).
In other example embodiments, the second electronic circuit 122A includes an FPGA, ASIC, memory, a data sampler, a peripheral, a transceiver, AFE, ADC, DAC, and/or another circuit with a parallel data interface. As shown, the second electronic circuit 122A includes a parallel data interface 128A (an example of the parallel data interface 128 in FIG. 1) and transition control circuitry 130A (an example of the transition control circuitry 130 in FIG. 1). Although the first and second electronic circuits 102A and 122A are represented as side-by-side, it should be understood that other arrangements are possible (e.g., a vertical arrangement with the first electronic circuit 102A above or below the second electronic circuit 122A, or separate ICs for the first and second electronic circuits 102A and 122A).

[0050] During operations of the IC system 300, increased switching activity related to the parallel data interface lanes 306 results in higher power consumption. Also, increased switching activity variance related to the parallel data interface lanes 306 results in higher power supply ripple, which may be propagated (e.g., via RF/analog coupling as multiplicative/additive spurs and noise) to the first or second electronic circuits 102A and 122A, or other components of the IC system 300. In some example embodiments, this power supply ripple is due to the switching activity of high current CMOS switches for bit transitions (from 0 to 1, or from 1 to 0) on the parallel data interface lanes. In other words, as the switching activity varies on the parallel data interface lanes, the current draw on the power supply varies, resulting in the power supply voltage moving up and down. As the current draw due to bit transitions increases, the power supply voltage will decrease. As the current draw due to bit transitions decreases, the power supply voltage will increase. This power supply voltage variance over time results in a power supply ripple that can affect different components of an electronic circuit that rely on the power supply voltage for their respective operations. Some components are sensitive to this power supply ripple.

[0051] Power supply ripple has the potential to limit SNR and spurious-free dynamic range (SFDR) performance of an RF sampling transceiver or other IC sub-system. With the transition control circuitry 110A and/or 130A, one option is to provide output symbols to the parallel data interface lanes 306 with encoding that reduces power supply ripple by reducing the bit transition variance for the parallel data interface lanes 306. By reducing power supply ripple, the signal bandwidth or other performance parameters of the first electronic circuit 102A can be improved. Another option available with the transition control circuitry 110A is to provide output symbols to the parallel data interface lanes 306 with encoding that reduces the total number of bit transitions and thus reduces power consumption for the first electronic circuit 102A and/or the second electronic circuit 122A.

[0052] FIG. 4 is a block diagram of a transmitter 400 with transition control circuitry in accordance with an example embodiment. In some example embodiments, the transmitter 400 is part of a parallel data interface (e.g., the parallel data interface 108 or 128 in FIG. 1). As shown, the transmitter 400 includes a bit stream transformer block 404, a transition optimizer block 406, and a delay block 410.
The bit stream transformer block 404 is configured to receive input words of an input bit stream 402 (e.g., the input bit stream 402 or related samples may be provided by the input bit stream source 115 or the input bit stream source 135 in FIG. 1) over time, and to provide related output symbols of an output bit stream 408 over time. In the example of FIG. 4, the operations of the bit stream transformer block 404 are based on optimizer results 414 provided by the transition optimizer block 406.

[0053] In some example embodiments, the transition optimizer block 406 is configured to: receive an input word of the input bit stream 402 (e.g., the input bit stream 402 or related samples may be provided by the input bit stream source 115 or the input bit stream source 135 in FIG. 1); receive a delayed version 412 of a previous output symbol from the output bit stream 408 via the delay block 410; and determine the optimizer results 414 relative to a figure of merit or target criteria (e.g., a minimum variance in the number of bit transitions for output symbols resulting from application of bit transformation patterns, or a minimum number of total bit transitions for output symbols resulting from application of bit transformation patterns). The figure of merit or target criteria may be selected for a given scenario and programmed into the transition control circuitry (e.g., the transition control circuitry 110 or 130). In some example embodiments, it is possible to program the transition control circuitry more than once and/or to adjust a previous figure of merit or target criteria.

[0054] In one example, the transmitter 400 is configured to map a k-bit input word (e.g., k = 16 or another integer smaller than n) to an n-bit output symbol (e.g., n = 20 or another integer larger than k) to facilitate keeping the number of bit transitions on a parallel data interface constant. In some example embodiments, the transition optimizer block 406: divides a set of 2^n output symbols into 2^k sets of 2^(n-k) candidate bit transformation patterns (i.e., 2^(n-k) candidate bit transformation patterns are associated with each one of the 2^k input words by a bijective function mapping); compares a delayed version 412 of a previous output symbol of the output bit stream 408 with the 2^(n-k) n-bit candidate output symbols corresponding to a current input word of the input bit stream 402; and determines the transformation index 414 (e.g., a code word) based on the comparison. In one example, k = 16 and n = 20. In other examples, k and n vary, where k is smaller than n. Regardless of the values for k and n, the bit stream transformer block 404 uses the transformation index 414 from the transition optimizer block 406 to obtain an output bit stream 408 with n-bit output symbols from k-bit input words of the input bit stream 402. At a receiver, a reverse mapping is used to recover the k-bit input words of the input bit stream 402 from n-bit output symbols of the output bit stream 408. In some example embodiments, the operations of FIG. 4 are performed using hardware and/or software executed by a microcontroller/processor.
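As a concrete illustration of the selection in paragraph [0054], the Python sketch below maps a 16-bit word to a 20-bit symbol by choosing, among the 2^(n-k) candidates, the one whose transition count against the previous output symbol is closest to a target G. This is a minimal sketch, not the patent's implementation: the mask list, the candidate construction (masked word with the 4-bit index appended), and the target value are assumptions consistent with options described elsewhere in this document.

```python
K, N = 16, 20
L = 1 << (N - K)                 # 16 candidate patterns per input word
MASKS = [(0x1111 * i) & 0xFFFF for i in range(L)]  # placeholder 16-bit masks
TARGET = 10                      # target transitions G (half of n = 20)

def transitions(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def candidate(word: int, idx: int) -> int:
    """n-bit candidate: transformed k bits with the (n-k)-bit index appended."""
    return ((word ^ MASKS[idx]) << (N - K)) | idx

def choose_symbol(word: int, prev_symbol: int) -> tuple:
    """Return (index, symbol) whose transition count is closest to TARGET."""
    return min(((i, candidate(word, i)) for i in range(L)),
               key=lambda pair: abs(transitions(pair[1], prev_symbol) - TARGET))

idx, sym = choose_symbol(0b1010010101101111, prev_symbol=0)
print(f"index={idx:04b}, symbol={sym:020b}")
```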
[0055] FIG. 5 is a block diagram of a bit stream transformer 500 (an example of the bit stream transformers 112, 132, or 404 in FIGS. 1 and 4) in accordance with an example embodiment. In FIG. 5 and some subsequent figures, time is segmented (e.g., "m" is a given sample time, "m-1" is the previous sample time relative to m, and "m+1" is the next sample time relative to m). As shown, the bit stream transformer 500 includes a first block 504 configured to perform bit stream transformation on an input bit stream sample {S(m)} of the input bit stream 402 (each input bit stream sample is sometimes referred to herein as an input word). In some example embodiments, S(m) is a k-bit input word. Regardless of the particular length of S(m), the bit stream transformation is performed based on a bit transformation pattern 510 received from a second block 508. The second block 508 is configured to: receive the transformation index 414 of bit transformation pattern options; and select a bit transformation pattern 510 to be used with the first block 504 in accordance with a target criteria (e.g., a figure of merit or target criteria related to a bit transition target). The output of the first block 504 is an output bit stream sample {S'(m)} of the output bit stream 408 (each output bit stream sample is sometimes referred to herein as an output symbol). In some example embodiments, S'(m) is an n-bit output symbol. In the example embodiment of FIG. 5, the input bit stream is "1010010101101111" and the output bit stream is "00011111000000111010".

[0056] In the example of FIG. 5, the index 414 is an (n-k) bit index {I(m)} provided by a transition optimizer (e.g., the transition optimizer 406 in FIG. 4) for use with generating the bit transformation pattern 510 to be used for the given sample time m. In some example embodiments, the n-k bit index information {I(m)} is embedded in the output bit stream sample S'(m), either directly (e.g., as a block of bits) or indirectly (e.g., as distributed bits). In one example embodiment, the bits of the index {I(m)} are added to either end of S(m) to generate S'(m). In another example embodiment, the bits of the index {I(m)} are distributed as separated bits throughout S(m) to generate S'(m). In some example embodiments, the operations of FIG. 5 are performed using hardware and/or software executed by a microcontroller/processor.

[0057] FIG. 6 is a block diagram of a transition optimizer 600 (an example of the transition optimizer block 406 in FIG. 4) in accordance with an example embodiment. As shown, the transition optimizer 600 includes a first block 604 configured to generate L candidate bit transformations 606 responsive to each input word (e.g., S(m)). The L candidate bit transformations 606 for each input word are provided to a second block 608, which is configured to perform a figure of merit computation and provide a result 610 based on the L candidate bit transformations 606 and a figure of merit or target criteria. The figure of merit result 610 is provided to a third block 612, which is configured to select a bit transformation pattern for use as the index 414 based on the figure of merit result 610. In FIG. 6, the delay block 410 receives S'(m) and provides a delayed version 412 of the previous output bit stream sample S'(m-1) to the second block 608 for use in calculating the figure of merit result 610. In some example embodiments, the operations of FIG. 6 are performed using hardware and/or software executed by a microcontroller/processor.

[0058] In some example embodiments, the transition optimizer 600 applies possible transformation patterns to each input word (e.g., S(m)) to generate L candidate n-bit output symbols (e.g., L = 2^(n-k)). The L candidate n-bit output symbols {Y0(m), ..., YL-1(m)} are compared against the previous n-bit output bit stream sample {S'(m-1)} to compute the total number of bit transitions for each of the L choices.
The optimal transformation pattern is selected based on a desired figure of merit or target criteria. In one example embodiment, the figure of merit or target criteria prioritizes the total number of bit transitions being maintained as close as possible to a constant target value G. In another embodiment, the figure of merit or target criteria prioritizes minimization of the total number of bit transitions.

[0059] In some example embodiments, the transition optimizer 600: associates each k-bit input bit stream sample (e.g., 2^16 input words if k = 16) with sixteen (2^4) candidate patterns/masks; and constructs sixteen n-bit (e.g., 20-bit) output symbols for each input word. In other words, each output symbol is the result of combining an input word with a bit transformation pattern (e.g., using the pattern index {[16 bits][4 bits]}). In different example embodiments, a bit transformation pattern is combined with an input word using a direct or indirect embedding function. For example, if the input words are k-bit words and the pattern is n bits, the input words may be zero-padded to construct the n-bit output symbols. As another option, a k-bit input word may be combined with a pattern (e.g., the k-bit input word is appended with an n-k bit index). In some example embodiments, for each input word, an output symbol is selected that is closest to a target number of transitions. This can be modified to any criteria (e.g., the least amount of bit transition variance, the lowest number of total bit transitions, or some other criteria). In some example embodiments, the figure of merit or target criteria: makes the relative probability of the most frequent number of transitions as close to 1 as possible; has low standard deviation; and lowers the average number of transitions.

[0060] FIG. 7 is a block diagram 700 of an inverse bit stream transformer block 704 (e.g., the inverse bit stream transformer 114 or 134 in FIG. 1) in accordance with an example embodiment. In the example of FIG. 7, the inverse bit stream transformer block 704: receives the output symbols of a received bit stream 702 (e.g., the output bit stream 408 in FIGS. 4-6) via a parallel data interface; and outputs a recovered bit stream 706 (e.g., the input bit stream 402 in FIGS. 4-6). To perform the inverse bit stream transformation and recover the input words from the output symbols, the inverse bit stream transformer block 704 includes an inverse transformation algorithm, instructions, and/or related hardware to undo the transformation previously applied to the input words. In some example embodiments, the operations of FIG. 7 are performed using hardware and/or software executed by a microcontroller/processor.

[0061] FIG. 8 is a block diagram of a receiver 800 (an example of the receiver side of the RF sampling transceiver 104 in FIG. 1) in accordance with an example embodiment. As shown, the receiver 800 includes a first block 804 configured to: obtain the output bit stream sample S'(m) of the output bit stream 408 (e.g., an n-bit output symbol) via parallel data interface lanes; and recover S(m) of the input bit stream 402 responsive to an inverse transformation pattern 812. In the example of FIG. 8, the inverse transformation pattern 812 is obtained using a second block 806 and a third block 810. The second block 806 is configured to receive S'(m) and extract a bit transformation index {I(m)} 808 from S'(m).
Note: the extracted bit transformation index may, but does not have to, vary for each output symbol depending on the target criteria. The third block 810 then generates an inverse bit transformation pattern 812 for use by the first block 804 based on the extracted bit transformation index 808. The inverse transformation pattern 812 is applied to S'(m) by the first block 804 to undo the effect of the bit transformation performed at the transmitter and thus recover S(m). In like manner, the receiver 800 is able to obtain other output bit stream samples (e.g., S'(m+1), S'(m+2), etc.) of the output bit stream 408 and recover respective input words of the input bit stream 402. In some example embodiments, the operations of FIG. 8 are performed using hardware and/or software executed by a microcontroller/processor.

[0062] FIG. 9 is a graph 900 of bit transition distribution as a function of relative frequency comparing raw bit transitions (without encoding/transformation) for a 16-bit input word on a parallel data interface (dashed-line plots) and coded bit transitions (with encoding/transformation) for a 20-bit output symbol on a parallel data interface due to bit stream transformation operations in accordance with an example embodiment (solid-line plots). In graph 900, coded bit transitions result in 10 bit transitions 98% of the time. Also, the variance in the total number of bit transitions for coded bit transitions is approximately 0.023. By comparison, raw bit transitions result in 8 transitions 20% of the time, and the variance in the total number of bit transitions is approximately 4. The reduced variation in toggle factor (bit transition variance) provided by coded bit transitions mitigates power supply ripple and related issues relative to raw bit transitions.

[0063] FIG. 10 is a table 1000 of bit transformation patterns in accordance with an example embodiment. Specifically, the index of table 1000 is 0-15 (a 4-bit value) due to the difference between n (the bit length of output symbols) and k (the bit length of input words) being 4. Depending on the values of k for input words and n for output symbols, the index may vary. The 16-bit patterns to which the 4-bit index values are applied in table 1000 are predetermined and may vary. In some example embodiments, bit transformation patterns are determined by computing the number of transitions between two successive output symbols (e.g., each n bits in length). In one example embodiment, bit transformation patterns are determined by padding k-bit input words so that there are a total of n bits, and then XORing the padded input words with n-bit masks.

[0064] In some example embodiments, transmitter-side bit stream transformation operations include: selecting candidate bit transformation patterns or static masks (e.g., the 16-bit patterns and 4-bit index patterns as shown in table 1000); analyzing the number of transitions between a previous n-bit output symbol and the output symbols resulting from use of the candidate bit transformation patterns with a given input word; selecting a bit transformation pattern (from the candidate bit transformation patterns) that best complies with a target bit transition constraint; and using the selected bit transformation pattern to generate an output symbol for the given input word.
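Before turning to the receiver side, the transmit/receive round trip of paragraph [0064] can be sketched end to end. This is a minimal sketch, not the patent's implementation: the static mask table below is hypothetical (not the table-1000 patterns), and the transition target is an assumption. The transmitter picks the mask whose symbol best meets the target and appends the 4-bit index; the receiver extracts the index and unmasks.

```python
K, N = 16, 20
L = 1 << (N - K)
# Hypothetical static masks; the actual table-1000 patterns are predetermined.
MASKS = [int(f"{i:04b}" * 4, 2) for i in range(L)]  # repeating-nibble masks

def transitions(a, b):
    return bin(a ^ b).count("1")

def encode(word: int, prev_symbol: int, target: int = N // 2) -> int:
    """Select the mask index best meeting the transition target; append it."""
    best = min(range(L), key=lambda i: abs(
        transitions(((word ^ MASKS[i]) << 4) | i, prev_symbol) - target))
    return ((word ^ MASKS[best]) << 4) | best

def decode(symbol: int) -> int:
    """Receiver side: extract the 4-bit pattern index, then unmask."""
    idx = symbol & 0xF
    return (symbol >> 4) ^ MASKS[idx]

prev = 0
for word in (0xA55F, 0x1234, 0xFFFF):
    sym = encode(word, prev)
    assert decode(sym) == word   # the k-bit input word is always recovered
    prev = sym
print("round trip OK")
```

Note that the decoder needs no knowledge of the transmitter's selection criteria, only the mask table, which matches the observation later in this description that the receiver does not need to change when the transmitter's selection algorithm changes.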
The receiver-side inverse bit stream transformation operations then include: determining the (n-k) bit pattern index used for transformation of a k-bit input word to an n-bit output symbol; and unmasking the k-bit input word based on the pattern index.

[0065] FIG. 11 is a graph 1100 of bit transition distribution as a function of relative frequency comparing raw bit transitions for a 16-bit input word on a parallel data interface (dashed-line plots) and coded bit transitions for a 20-bit output symbol on a parallel data interface (solid-line plots). The coded bit transitions are due to operations of a bit stream transformer as described herein. In some example embodiments, the coded bit transitions in the graph 1100 are the result of a bit transformation option involving a simple XOR operation of a 16-bit input word with a 16-bit transformation pattern. In this example, the transition optimizer uses a minimal total number of transitions as the figure of merit or target criteria. Among all possible transformation patterns, the one achieving the lowest number of total transitions is selected. As shown in the graph 1100, the average number of bit transitions for coded bit transitions is approximately 6 (out of 20 parallel data interface lanes). Also, the variance of bit transitions for coded transitions of the parallel data interface lanes is approximately 1.2. By comparison, the average number of transitions for raw bit transitions is 8 (out of 16 parallel data interface lanes), and the variance in the total number of bit transitions for raw bit transitions is approximately 4. The reduced variance in toggle factor (approximately 1.2 for coded bit transitions versus approximately 4 for raw bit transitions) mitigates power supply ripple and related issues. Also, lowering the total number of transitions by using coded bit transitions instead of raw bit transitions reduces power consumption.

[0066] In different example embodiments, the transition control circuitry (e.g., the transition control circuitry 110 and 130 in FIG. 1) and related bit stream transformation options and topologies (e.g., serial, parallel, or pipelined) may vary. Such variance may be due to the intended clock rate of a particular system, complexity considerations, speed of operations, or other considerations. In some example embodiments, the transition control circuitry 110 and 130 may use different bit stream transformation options and topologies. As an example, the transition control circuitry 110 may use a parallelization factor of 1, while the transition control circuitry 130 may use a parallelization factor of 2 with half the clock rate.

[0067] FIGS. 12A and 12B are block diagrams of parallelized implementations of transition control circuitry 1200 and 1250 in accordance with example embodiments. In FIG. 12A, the transition control circuitry 1200 includes a topology that can be parallelized as desired. As shown, the transition control circuitry 1200 includes a bit stream transformer block 404A (an example of the bit stream transformer 404 in FIG. 4), a transition optimizer block 406A (an example of the transition optimizer block 406 in FIG. 4), and an index remapper block 1202. The bit stream transformer block 404A is configured to receive S(m) of the input bit stream 402 and provide S'(m) of the output bit stream 408.
The transition optimizer block 406A is configured to determine an intermediate bit transformation index(m) 1206 from S(m) and S(m-1). The index remapper block 1202 is configured to determine a final bit transformation index(m) 1208 based on the intermediate bit transformation index(m) 1206 and a final bit transformation index(m-1) 1204 (the bit transformation index for time m-1). The bit stream transformer block 404A generates S'(m) based on S(m) and the final bit transformation index(m) 1208. With the index remapper block 1202, parallelization of the transition control circuitry 1200 and determination of a final transformation index for time m is facilitated by using the previous transformation index for time m-1.

[0068] In FIG. 12B, the transition control circuitry 1250 includes transition control circuits 1200A and 1200B (each an example of the transition control circuitry 1200 in FIG. 12A) to provide a topology with parallelization by 2. Specifically, the transition control circuit 1200A includes the bit stream transformer block 404A, which is configured to receive S(m) of the input bit stream 402 and provide S'(m) of the output bit stream 408. The transition optimizer block 406A uses S(m) and S(m-1) to determine an intermediate bit transformation index(m) 1206A. The transition control circuit 1200A includes an index remapper block 1202A (an example of the index remapper block 1202 in FIG. 12A) that determines a final bit transformation index(m) 1208A based on the intermediate bit transformation index(m) 1206A and the final bit transformation index(m-1) 1204. The bit stream transformer block 404A generates S'(m) based on S(m) and the final bit transformation index(m) 1208A.

[0069] The transition control circuit 1200B includes a bit stream transformer block 404B (an example of the bit stream transformer 404 in FIG. 4) configured to receive S(m+1) of the input bit stream 402 and provide S'(m+1) of the output bit stream 408. The transition optimizer block 406B (an example of the transition optimizer block 406 in FIG. 4) uses S(m) and S(m+1) to determine an intermediate bit transformation index(m+1) 1206B. The transition control circuit 1200B includes an index remapper block 1202B, which determines a final bit transformation index(m+1) 1208B based on the intermediate bit transformation index(m+1) 1206B and the final bit transformation index(m) 1208A. The bit stream transformer block 404B is configured to generate S'(m+1) based on S(m+1) and the final bit transformation index(m+1) 1208B. As desired, further parallelization is possible (e.g., 4 transition control circuits 1200 in parallel, etc.).

[0070] FIG. 13 is a block diagram of a serialized implementation of transition control circuitry 1300 (an example of the transition control circuitry 110 or 130 in FIG. 1, or the transition control circuitry 110A or 130A in FIG. 3) in accordance with an example embodiment. As shown, the transition control circuitry 1300 includes a first bit stream transformer block 1304 configured to receive S(m) of the input bit stream 402 and to provide S'(m) of the output bit stream 408 based on S(m) and a bit transformation index(m) 1310.
In the example of FIG. 13, the bit transformation index(m) 1310 is provided by a first transition optimizer block 1308 configured to generate the bit transformation index(m) 1310 based on S(m) of the input bit stream 402 and S'(m-1) of the output bit stream 408.

[0071] As shown, S'(m) of the output bit stream 408 is provided from the first bit stream transformer block 1304 to a second transition optimizer block 1314. In the example of FIG. 13, the second transition optimizer block 1314 is configured to generate a bit transformation index(m+1) 1316 based on S'(m) of the output bit stream 408 and S(m+1) of the input bit stream 402. The transition control circuitry 1300 also includes a second bit stream transformer block 1318 configured to generate S'(m+1) of the output bit stream 408 based on S(m+1) and the bit transformation index(m+1) 1316. With the serial topology of FIG. 13, the transition control circuitry 1300 has reduced complexity and related benefits (smaller size, cost, etc.) compared to the parallel topologies of FIGS. 12A and 12B. In some example embodiments, the operations of FIG. 13 are performed using hardware and/or software executed by a microcontroller/processor.

[0072] In some example embodiments, a parallel data interface can support a rate of 2 GBPS. However, such high-speed digital clocks (CLKs) may not be compatible with a given FPGA or AFE. To address this issue, a parallelized implementation (e.g., parallelized by 2 as in FIG. 12B) is used, in which bit streams corresponding to two consecutive sampling instances, m and m+1, are processed in a single clock cycle. In contrast, the transition control circuitry 1300 performs bit stream transformation for two consecutive samples serially, as shown in FIG. 13. Note: the second transition optimizer block 1314 is configured to use S(m+1) and S'(m), which may be computed in the same clock cycle, to determine an optimal bit transformation index {I(m+1)}. Hence, the transition control circuitry 1300 is not easily amenable to a parallelized implementation.

[0073] FIG. 14 is a block diagram of another parallelized implementation of transition control circuitry 1400 (an example of the transition control circuitry 110 or 130 in FIG. 1, or the transition control circuitry 110A or 130A in FIG. 3) in accordance with an example embodiment. As shown, the transition control circuitry 1400 includes a first bit stream transformer block 1404, which is configured to receive S(m) of the input bit stream 402 and to provide S'(m) of the output bit stream 408 based on S(m) and a bit transformation index(m) 1410. In the example of FIG. 14, the bit transformation index(m) 1410 is provided by a transition optimizer block 1408, which is configured to generate the bit transformation index(m) 1410 based on S(m) of the input bit stream 402 and S'(m-1) of the output bit stream 1406.

[0074] The transition control circuitry 1400 also includes a second bit stream transformer block 1420 configured to provide S'(m+1) of the output bit stream based on S(m+1) of the input bit stream 402 and a next bit transformation index(m+1) 1422. In the example of FIG. 14, the next bit transformation index(m+1) 1422 is provided by a modified transition optimizer block 1414 configured to generate the next bit transformation index(m+1) 1422 based on S(m), S(m+1), and the bit transformation index(m) 1410.
In some example embodiments, the operations of FIG. 14 are performed using hardware and/or software executed by a microcontroller/processor.

[0075] With the parallelized implementation and modified transition optimizer block 1414, the transition control circuitry 1400 generates S'(m) and S'(m+1) of the output bit stream 408 in 1 clock cycle. As S'(m) could be generated using any one of L possible transformation patterns, the modified transition optimizer block 1414 applies all possible transformation patterns to S(m) and S(m+1) to generate L candidate n-bit output symbols for each. In some example embodiments, the L candidate output symbols {Y0(m+1), ..., YL-1(m+1)} are compared against the previous L candidate output symbols {Y0(m), ..., YL-1(m)} to compute the total number of bit transitions for each of the L^2 combinations. Then L optimal transformation pattern indices {I0(m+1), ..., IL-1(m+1)} for {I(m+1)}, one for each possible choice of {I(m)}, are selected. Once {I(m)} is determined, an L:1 MUX selects the optimal transformation pattern index {I(m+1)}. This, however, results in a significant increase in complexity in the modified transition optimizer block 1414.

[0076] In some example embodiments, a pipelined topology is used when the time for completion of the processing exceeds the period of sampling (1/Fs). In such cases, the processing is split between different sampling periods and the throughput (processing rate) is matched to the input rate at the expense of latency (delay). The latency is the result of splitting the processing across multiple time periods. Pipelined operations during different time periods are shown in Table 1 below.

[Table 1: pipelined scheduling of transition optimizer, index remapper, and bit stream transformer operations across time instances - not reproduced]

As shown in Table 1, the three operations that are pipelined include transition optimizer operations, index remapper operations, and bit stream transformer operations. Since the transition optimizer is the most complex operation, in some example embodiments, a full sample period (time instance m) is set apart for the transition optimizer to complete its operation. In the same time instance (m), the index remapper performs the operation for the previous symbol (corresponding to time instance m-1) and the bit stream transformer performs the operations for the symbol prior to m-1 (i.e., time instance m-2). Thus, the output corresponding to a symbol at time instance m is available at time instance m+2, a latency of 2 time intervals. The hardware units for a pipelined topology are similar to the parallelize-by-1 topology of FIG. 12A, except that the sub-modules process samples corresponding to different time instances. As one output is available every period (except for a latency of 2 units in this example), the output throughput is matched to the input.

[0077] In some example embodiments, a transition control circuit with a pipelined or parallelized arrangement performs the following operations: 1) the transition weight computation is performed on the raw input word; and 2) a temporary pattern index (Index) is selected which satisfies a target transition criteria. In some example embodiments, these two operations involve XOR operations and bit counting. For some operations, no input from the previous time instance is needed. Also, in some example embodiments, a pattern index (Maskindex^T) is a function of Index and Maskindex^(T-1). In other words, the temporary index for time T+1 and the index for time T are used to determine the index for time T+1. The advantage of this technique is that some related complex operations (e.g., generating L candidates and selecting the best one) can be performed in parallel, and the subsequent operations to determine the index for T+1 from the temporary index for T+1 and the index for T are relatively less complex. Also, the pattern index remapping may be a bijective function. In some example embodiments, an XOR of Index and Maskindex^(T-1) is performed.
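A minimal sketch of this two-step selection follows (not the patent's implementation; the generator patterns are the n = 20, k = 16 example given later in paragraph [0086], and the target value is an assumption). Because the mask set is closed under XOR, the temporary index computed from raw input words alone can be remapped with the previous final index by a simple XOR.

```python
BASIS = [0xAAAA, 0xCCCC, 0xF0F0, 0xFF00]  # example generator patterns ([0086])

def mask(idx: int) -> int:
    """Binary-weighted XOR of generators: mask(i) ^ mask(j) == mask(i ^ j)."""
    m = 0
    for b, g in enumerate(BASIS):
        if (idx >> b) & 1:
            m ^= g
    return m

def transitions(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def temp_index(word: int, prev_word: int, target: int = 10) -> int:
    """Temporary index from raw input words only: no feedback from the
    previous output symbol, so this step pipelines/parallelizes."""
    return min(range(16),
               key=lambda i: abs(transitions(word ^ mask(i), prev_word) - target))

def final_index(tmp: int, prev_final: int) -> int:
    return tmp ^ prev_final  # the XOR(Index, Maskindex^(T-1)) remapping
```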
[0078] In some example embodiments, the k-bit input word is split into n-k groups of bits (G). The grouping is done such that the group widths sum to k bits. As an example, 17 bits can be broken into groups of 6, 6, and 5 bits. Each of the n-k groups in G is associated with an invert bit to form a padded group G'. The initial value of the invert bit is 0 (i.e., no inversion). Each group is associated with a threshold (e.g., half the group width). In some example embodiments, the input word is encoded as follows: 1) each of the bits in the (n-k) groups G' is compared to the corresponding bits transmitted in the previous time instance; and 2) the number of bit transitions in each group g' in G' is computed. For each group in G', if the number of bit transitions > threshold, then the group is inverted. This makes the invert bit in that particular group 1 (as it was initialized to 0).

[0079] FIG. 15 is a block diagram of a modified transition optimizer 1500 (an example of the modified transition optimizer 1414 in FIG. 14) in accordance with an example embodiment. As shown, the modified transition optimizer 1500 includes a first block 1504 configured to generate L candidate transformations 1506 for S(m) of the input bit stream 402. At block 1508, a figure of merit computation is performed over the LxL combinations. As shown, block 1508 also receives L candidate transformations 1529 from a block 1528, which is configured to generate the L candidate transformations 1529 from S(m-1) of the input bit stream 402. For example, S(m-1) is obtained by passing S(m) through a delay block 1524.

[0080] The output of block 1508 is a set of candidates 1510 that best comply with the figure of merit applied at block 1508. In FIG. 15, a block 1512 receives the set of candidates 1510 and selects L bit transformation indices 1514, one for each of the S(m-1) candidates, based on the figure of merit. At block 1516, an index (Index(m)) 1520 is selected based on index(m-1) and the L bit transformation indices 1514. In some example embodiments, the operations of FIG. 15 are performed using hardware and/or software executed by a microcontroller/processor.

[0081] In some example embodiments, the modified transition optimizer 1500 constructs a set of L transformation patterns by: 1) imposing a constraint of a 'closed set' under the transformation operation; 2) replacing S'(m+1) with S(m+1) in the modified transition optimizer 1500 to compute an intermediate index {I'(m+1)}; and 3) employing an index remapper to compute {I(m+1)} from {I'(m+1), I(m)}. In one example embodiment, the L transformation patterns are generated with the following constraints: 1) the XOR of any two patterns selected from the L patterns is also a member of the set; 2) the pattern indices span {0, ..., L-1}; 3) the modified transition optimizer 1500 determines the optimal intermediate index using S(m), instead of the not-yet-available S'(m); and 4) an increase in complexity is avoided (L computations instead of L^2). Without the modified transition optimizer 1500, the previous output symbol is XORed with potential outputs for the current input word.
This does not allow a parallelized implementation.

[0082] In some example embodiments, a more careful selection of patterns can be performed for parallelized operations. As an example, selection of patterns to enable a parallelized implementation may involve: 1) generating 2^L (L = n-k) candidate patterns; 2) selecting L 'basis patterns' of length k; 3) denoting the basis patterns by M0, ..., ML-1; 4) generating the 2^L patterns as all possible linear combinations of the L masks (e.g., Mask(b0 b1 ... bL-1) is the XOR of the basis patterns Mi for which bi = 1); and 5) constructing the 20-bit pattern as the masked pattern concatenated with the binary weight (the n-k index bits). In this example, the mask set has the property that any mask XORed with another mask is a valid mask. The masks form a set which is closed under the XOR operation (i.e., the bit stream transformation operation of choice in the current example). Hence, the distribution of the number of transitions between the candidate transformations corresponding to S(m) and S'(m-1) is identical to the distribution of the number of transitions between the candidate transformations corresponding to S(m) and S(m-1) transformed by any arbitrary mask. In the given example, the mask set includes the all-zero pattern, and S(m-1) transformed by the all-zero mask is simply a zero-padded version of S(m-1).

[0083] As an example, the pattern generation may be performed as follows: 1) split k into n-k groups of bits, each of width gi, such that the widths sum to k (i.e., g0 + g1 + ... + g(n-k-1) = k); and 2) set each Mi to have a binary representation with gi 1's at the positions of the i-th group and 0's elsewhere. For example, if {gi} = {5, 5, 6}: M0 = '0000000000011111', M1 = '0000001111100000', M2 = '1111110000000000'. In another example, the mask generation is performed by selecting periodic block patterns. As an example, if k = 16 and L = 4: M0 = '0000000011111111', M1 = '0000111100001111', M2 = '1100110011001100', M3 = '0101010101010101'. In another example, the pattern generation is performed by selecting L masks Mi from a list of k-length orthogonal / nearly orthogonal bit patterns. As an example, if k = 17 and L = 3: M0 = '10101010101010101', M1 = '10100101010110100', M2 = '11000011001111001'.
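The closed-set property asserted in paragraph [0082] is easy to check numerically. The short Python check below (not from the source) builds 16 masks as binary-weighted XORs of the four example generator patterns from paragraph [0086] and verifies that the multiset of candidate transition counts is unchanged when the reference word is re-masked by any member of the set (the appended index bits are ignored here for brevity).

```python
BASIS = [0xAAAA, 0xCCCC, 0xF0F0, 0xFF00]  # example generator patterns ([0086])

def mask(idx):
    m = 0
    for b, g in enumerate(BASIS):
        if (idx >> b) & 1:
            m ^= g
    return m

MASKS = [mask(i) for i in range(16)]      # closed under XOR by construction
tr = lambda a, b: bin(a ^ b).count("1")

s_m, s_prev = 0xA5A5, 0x1234
base = sorted(tr(s_m ^ mk, s_prev) for mk in MASKS)
for any_mask in MASKS:                    # re-mask the reference arbitrarily
    assert sorted(tr(s_m ^ mk, s_prev ^ any_mask) for mk in MASKS) == base
print("transition-count distribution is invariant under re-masking")
```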
[0084] FIG. 16 is a block diagram of transition control circuitry 1600 (an example of the transition control circuitry 110 or 130 in FIG. 1, or the transition control circuitry 110A or 130A in FIG. 3) for parallel processing of more than one symbol at a time (e.g., FIG. 16 relates to an embodiment that processes 2 symbols at a time in parallel) in accordance with another example embodiment. As shown, the transition control circuitry 1600 includes a first bit stream transformer block 1604 configured to receive S(m) of the input bit stream 402 and to provide S'(m) of the output bit stream 408 based on a bit transformation index(m) 1610. In the example of FIG. 16, the bit transformation index(m) 1610 is provided by a transition optimizer block 1608 configured to generate the bit transformation index(m) 1610 based on S(m) and S'(m-1).

[0085] The transition control circuitry 1600 also includes a second bit stream transformer block 1616, which is configured to receive S(m+1) of the input bit stream 402 and to provide S'(m+1) of the output bit stream 408 based on a bit transformation index(m+1) 1624. In the example of FIG. 16, providing the bit transformation index(m+1) 1624 to the second bit stream transformer block 1616 involves a transition optimizer block 1618 and an index remapper block 1622. The transition optimizer block 1618 is configured to generate an intermediate index 1620 based on S(m) and S(m+1). The index remapper block 1622 is configured to generate the bit transformation index(m+1) 1624 from the intermediate index 1620 and the bit transformation index(m) 1610. As an alternative, the upper and lower portions of the transition control circuitry 1600 may be identical to each other.

[0086] In some example embodiments, the transition control circuitry 1600 determines the set of L (2^(n-k)) transformation patterns satisfying the constraint of being a closed set under the XOR operation. The set of L (2^(n-k)) transformation patterns can be constructed as follows: 1) select n-k candidate generator patterns {M(n-k-1), ..., M0} of length k that form a 'minimal set' under the XOR operation; 2) define a minimal set to be such that the XOR operation performed on any selected subset of j (1 < j < n-k) unique elements is not an element of the set; 3) generate the L transformation patterns as binary weighted XORs of the generator patterns; and 4) the binary weights form the n-k bit pattern index. In one example, the transition control circuitry 1600 generates patterns for n = 20 and k = 16 as: M3 = '1111111100000000', M2 = '1111000011110000', M1 = '1100110011001100', M0 = '1010101010101010'.

[0087] In contrast to a pipelined/parallel transition control circuitry, non-pipelined transition control circuitry performs selection of the pattern index by: 1) comparing the previously transmitted symbol against all candidate output symbols corresponding to the symbol at time instance T; 2) sending the candidate output symbol closest to the target number of transitions; 3) continuing in this manner for the subsequent input word; and 4) starting processing only once the previous output symbol is determined.

[0088] FIG. 17 is a flowchart of a transformation pattern generation method 1700 in accordance with an example embodiment. The method 1700 is performed, for example, by transition control circuitry (e.g., the transition control circuitry 110 in FIG. 1, or the transition control circuitry 110A in FIG. 3) of a first or second electronic circuit (e.g., the first or second electronic circuits 102 and 122 in FIG. 1, or the first or second electronic circuits 102A and 122A in FIG. 3). As shown, the method 1700 includes starting transformation pattern generation at block 1702. At block 1704, (n-k) generator patterns are selected. If the generator patterns do not enable compliance with the figure of merit or target criteria (by not forming a minimal set under the XOR operation) (determination block 1706), the method 1700 returns to block 1704. If the generator patterns enable compliance with the figure of merit or target criteria (by forming a minimal set under the XOR operation) (determination block 1706), the method 1700 continues to block 1708, where the 2^(n-k) binary weighted XOR combinations are generated. At block 1710, the binary weights are inserted as the n-k bit index. At block 1712, transformation pattern generation ends.
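The flow of method 1700 can be sketched directly. This is a minimal sketch under stated assumptions: the minimal-set check interprets the subset condition as applying to subsets of two or more generators, and the helper names are hypothetical.

```python
from itertools import combinations

def is_minimal_set(generators) -> bool:
    """Check that the XOR of any subset of 2..len(generators) unique
    generators is not itself an element of the set (block 1706)."""
    gen_set = set(generators)
    for j in range(2, len(generators) + 1):
        for subset in combinations(generators, j):
            x = 0
            for g in subset:
                x ^= g
            if x in gen_set:
                return False
    return True

def generate_patterns(generators, k: int):
    """Blocks 1708/1710: emit the 2^(n-k) binary-weighted XOR combinations,
    with the binary weight serving as the (n-k)-bit index."""
    patterns = []
    for w in range(1 << len(generators)):
        m = 0
        for b, g in enumerate(generators):
            if (w >> b) & 1:
                m ^= g
        patterns.append((w, m))
    return patterns

GENS = [0xAAAA, 0xCCCC, 0xF0F0, 0xFF00]   # n = 20, k = 16 example ([0086])
assert is_minimal_set(GENS)
for idx, m in generate_patterns(GENS, 16)[:4]:
    print(f"index {idx:04b} -> mask {m:016b}")
```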
[0089] FIG. 18 is a table 1800 of transformation patterns in accordance with an example embodiment. The transformation patterns of table 1800 may be suitable for scenarios where k = 16 and n = 20 as described herein. Note: the transformation patterns of table 1800 are independent of the particular transition control circuit topology used.

[0090] FIG. 19 is a table 1900 of transformation patterns in accordance with yet another example embodiment. The transformation patterns of table 1900 may be suitable for scenarios where k = 16 and n = 20 as described herein. The transformation patterns of table 1900 are independent of the particular transition control circuit topology used. One advantage of the transformation patterns shown in table 1900 is that, since the patterns are generated with generator patterns that do not have 1's at a common bit position, the transition weight can be computed hierarchically. In this case the generator patterns are: M3 = '1111000000000000', M2 = '0000111100000000', M1 = '0000000011110000', M0 = '0000000000001111'. The number of transitions can be counted as 4 sub-accumulations corresponding to the bits which are 1 in each pattern. The 16 sums can then be obtained from these partial accumulations.
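A brief sketch of the hierarchical counting enabled by the disjoint, nibble-aligned generators of table 1900 follows (not from the source; the index-to-nibble assignment and the comparison against the raw previous word are assumptions for illustration). Inverting a whole 4-bit nibble turns t transitions within it into 4 - t, so four per-nibble partial counts suffice to evaluate all 16 masks.

```python
NIBBLES = [0xF000, 0x0F00, 0x00F0, 0x000F]  # supports of M3..M0 (disjoint)

def partial_counts(word: int, prev: int):
    """Per-nibble transition counts for the untransformed word."""
    diff = word ^ prev
    return [bin(diff & n).count("1") for n in NIBBLES]

def weight_for_index(parts, idx: int) -> int:
    """Total transitions after masking: a set index bit inverts its nibble,
    turning t transitions into 4 - t (assumed bit-3 -> top nibble)."""
    return sum((4 - t) if (idx >> (3 - i)) & 1 else t
               for i, t in enumerate(parts))

parts = partial_counts(0xA5F0, prev=0x0000)          # [2, 2, 4, 0]
print([weight_for_index(parts, i) for i in range(16)])
```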
[0091] While the patterns of table 1900 are suitable for conditions where the 16-bit input words are independent and identically distributed, further gains in performance can be obtained in scenarios where the bits are not independent and identically distributed.

[0092] In some example embodiments, transition control circuitry (an example of the transition control circuitry 110 or 130 in FIG. 1, or the transition control circuitry 110A or 130A in FIG. 3) performs bit stream transformation from a k-bit input word to an n-bit output symbol, to optimize the total number of bit transitions between successive output symbols, based on a figure of merit or target criteria. In some examples, the transition control circuitry uses a transition optimizer to determine the transformation pattern using the current input word and the previous output symbol. In one example, a figure of merit is used where the total number of bit transitions is matched as closely as possible to a target value. In another example, a figure of merit is used where the total number of bit transitions is minimized. In some example embodiments, the transition control circuitry employs an XOR operation to perform the bit stream transformation by selecting from among L pre-stored transformation patterns. In some example embodiments, the transition control circuitry employs parallelized bit stream transformation, in which the computation of the optimal transformation pattern is decoupled by selecting the L pre-stored transformation patterns to be from a 'closed set' under the XOR operation. In some example embodiments, the transition control circuitry employs an enhanced transition optimizer configured to determine an intermediate transformation pattern index by using the current and previous input words. In some example embodiments, the transition control circuitry employs an index re-mapper that computes the current transformation pattern index from the current intermediate index and the previous transformation pattern index. In some example embodiments, the transition control circuitry constructs a set of L pre-stored transformation patterns to form a closed set under the XOR operation by using weighted binary combinations (through XOR) of n-k generator patterns. In such cases, the n-k 'generator' patterns are selected to satisfy the minimal set property.

[0093] In some example embodiments, the spectral properties of the digital switching activity, due to bit transitions in the parallel data interface, for an output symbol and a raw input word can be compared. To perform the comparison, a power supply ripple is modeled at digital clock edges (1 GHz), proportional to the number of bit transitions. The bit stream transformation reduces the wideband components by ~22 dB. For a single tone input, bit stream transformation achieves greater than 30 dB suppression of worst case harmonic spurs. Significant reduction in multiplicative in-band noise and signal harmonic spurs due to digital interface data switching is achieved for two different signal inputs, as shown in graphs 2000 and 2100 of FIGS. 20 and 21.

[0094] With transition control circuitry (e.g., the transition control circuitry 110 or 130 in FIG. 1, the transition control circuitry 110A or 130A in FIG. 3, the transition control circuitry 1300 of FIG. 13, the transition control circuitry 1400 of FIG. 14, or the transition control circuitry 1600 of FIG. 16), the variation in the transitions on the parallel data interface is reduced at the expense of extra lanes. Also, a tradeoff of reduced variation in transitions for a lower total number of transitions is possible, as is a tradeoff of noise/interference for power. The receiver implementation is simple and requires only an unmasking operation. Thus, the receiver does not need to change based on different transition selection algorithms. In some example embodiments, an additional parity bit on the interface can reduce the chances of undetected data corruption. Also, different transition control options can be developed by using different patterns or selection criteria for bit transitions at the transmitter.

[0095] In some example embodiments, a transition control interface modifies a k-bit input word to an n-bit output symbol such that the output symbol has a target number of bit transitions. In one example, modification of the input word is performed by constructing 2^(n-k) n-bit candidate output symbols for each input word (a one-time operation). At the transmitter, for the input word at the current instance, the transition control circuitry selects the best output symbol from the 2^(n-k) candidate output symbols. In some example embodiments, the selection criterion for the best output symbol is a target number of bit transitions. The selected output symbol is then transmitted to the receiver. At the receiver, the k-bit input word is recovered by decoding the received n-bit output symbol.

[0096] In some example embodiments, the output symbol is given by [[input word] XOR [Maskindex]][index] (the masked word with the index appended), where the index is (n-k) bits long. In some example embodiments, the 2^(n-k) masks are generated as one of the following: 1) k-bit sequences which are mutually orthogonal / close to orthogonal; or 2) 2^(n-k) linear combinations of basis vectors. The basis vectors are generated as follows: split k into n-k groups of bits, each of width gi, such that the widths sum to k; each Mi then has a binary representation with gi 1's at the positions of the i-th group and 0's elsewhere. As an example, if {gi} = {5, 5, 6}, then M0 = '0000000000011111', M1 = '0000001111100000', and M2 = '1111110000000000'. In some examples, basis masks are generated as periodic block patterns. As an example, if k = 16 and L = 4, then M0 = '0000000011111111', M1 = '0000111100001111', M2 = '1100110011001100', and M3 = '0101010101010101'. In some examples, L masks Mi are selected from a list of k-length orthogonal / nearly orthogonal bit patterns. As an example, if k = 17 and L = 3, then M0 = '10101010101010101', M1 = '10100101010110100', and M2 = '11000011001111001'.

[0097] In some example embodiments, the transition control circuitry implements sequential processing or pipelined/parallel processing. For sequential processing, the selection of an output symbol uses a pattern/mask index based on a transition weight computation that: compares the previously transmitted output symbol against all candidate output symbols corresponding to the symbol at time instance T; and selects the candidate symbol closest to the target number of bit transitions.
For pipelined/parallel processing, selection of an output symbol uses the mask index based on a transition weight computation that: compares the current input word and the previous input word using an XOR operation and computes the number of bit transitions; selects a temporary mask index (Index), where the temporary mask index satisfies a target bit transition criteria; and obtains the mask index for the current time instance by applying a transformation function to the temporary mask index and the mask index selected in the previous time instance. One such transformation function is XOR(Index, Maskindex^(T-1)).

[0098] In some example embodiments, transition control circuitry maps a k-bit input word to an n-bit output symbol by: dividing the k bits into n-k groups of bits, each of width gi, such that the widths sum to k; and forming a new group set G' by associating each of the n-k groups with an invert bit, where the initial value of the invert bit is 0 (i.e., no inversion). In this example, a per-group threshold (e.g., half the group width) may be used. At the transmitter, each of the bits in the (n-k) groups G' is compared to the corresponding bits transmitted in the previous time instance. For each of the n-k groups, if the number of bit transitions is greater than the threshold, the particular group is inverted. This makes the invert bit in that particular group 1 (as it was initialized to 0). At the receiver, the invert bit for each of the (n-k) groups is observed. The corresponding group of bits is inverted if the invert bit is 1. Otherwise, the bits are passed without inversion if the invert bit is 0.
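The group-inversion mapping of paragraph [0098] is sketched below. This is a minimal sketch under stated assumptions: the per-group threshold of half the group width is an assumption (the source does not state the exact value), and the {6, 6, 5} split follows the 17-bit example given earlier.

```python
GROUPS = [6, 6, 5]  # group widths g_i summing to k = 17

def split(word: int, widths):
    """Split a word into groups, most significant group first."""
    out, shift = [], 0
    for w in reversed(widths):
        out.append((word >> shift) & ((1 << w) - 1))
        shift += w
    return list(reversed(out))

def encode_groups(groups, prev_sent):
    """Invert a group when its transitions versus the previously transmitted
    bits exceed the (assumed) threshold of half the group width."""
    sent, flags = [], []
    for g, p, w in zip(groups, prev_sent, GROUPS):
        inv = bin(g ^ p).count("1") > w / 2
        sent.append(g ^ ((1 << w) - 1) if inv else g)
        flags.append(int(inv))
    return sent, flags

def decode_groups(sent, flags):
    """Receiver side: re-invert exactly the groups whose invert bit is 1."""
    return [s ^ ((1 << w) - 1) if f else s
            for s, f, w in zip(sent, flags, GROUPS)]

prev_sent = split(0, GROUPS)
word = split(0b10110011100011101, GROUPS)
tx, inv_bits = encode_groups(word, prev_sent)
assert decode_groups(tx, inv_bits) == word
print("groups recovered; invert bits:", inv_bits)
```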
[0099] In this description, the term "couple" may cover connections, communications, or signal paths that enable a functional relationship consistent with this description. For example, if device A generates a signal to control device B to perform an action: (a) in a first example, device A is coupled to device B by direct connection; or (b) in a second example, device A is coupled to device B through intervening component C if intervening component C does not alter the functional relationship between device A and device B, such that device B is controlled by device A via the control signal generated by device A.

[00100] A device that is "configured to" perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof. A circuit or device that is described herein as including certain components may instead be adapted to be coupled to those components to form the described circuitry or device.

[00101] As used herein, the terms "terminal", "node", "interconnection", "pin" and "lead" are used interchangeably. Unless specifically stated to the contrary, these terms are generally used to mean an interconnection between or a terminus of a device element, a circuit element, an integrated circuit, a device or other electronics or semiconductor component.

[00102] A circuit or function that is described herein as including certain components or functions may instead be adapted to be coupled to those components or functional blocks to form the described circuitry or functionality. While certain components or functional blocks may be described herein as being implemented in an integrated circuit or on a single semiconductor substrate (or, conversely, in multiple integrated circuits or on multiple semiconductor substrates), such implementation may be accomplished using more or fewer integrated circuits or more or fewer semiconductor substrates. The circuits/functional blocks of the example embodiments may be packaged in one or more device packages.

[00103] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
The present invention provides a method for improving data throughput for a wireless USB system that includes wire adapters that wirelessly transmit data between a host system and a wired USB device. The invention also provides a wireless USB hub that acts as a proxy for the wired USB devices and presents them to a host either as unique WUSB devices with their own addresses or as separate functions on an already existing WUSB device.
2. The method of claim 1, wherein said packet includes a transfer result packet; and wherein said polling said downstream wire adapter for said transfer result packet is based on previously transmitting a transfer request packet to said downstream wire adapter.

3. The method of claim 1, wherein said packet includes an incoming data packet; and wherein said polling said downstream wire adapter for said incoming data packet is based on receiving a transfer request packet from said downstream wire adapter.

4. A method for increasing throughput for a wireless USB system having a first wire adapter and a second wire adapter, comprising: generating a transfer request to include a descriptor indicating a data transfer type; sending said transfer request to said first wire adapter; and forwarding said transfer request from said first wire adapter to said second wire adapter based on said descriptor, said second wire adapter being adapted to present a wired USB enabled device as a native wireless USB enabled device to said first wire adapter.

5. The method of claim 4, including forwarding transfer data along with said transfer request when said data transfer type is an OUT transfer.

6. A method for increasing throughput for a wireless USB system having a forwarding wire adapter and a target wire adapter, comprising: transferring a transfer request packet from said forwarding wire adapter to said target wire adapter; and polling said target wire adapter for a transfer result packet based on said transfer request packet, said target wire adapter being adapted to present a wired USB enabled device as a native wireless USB enabled device to said forwarding wire adapter.

7. A method for communicating between a wired USB enabled device and a first wireless USB enabled device in a wireless USB system, comprising: detecting said wired USB enabled device; and presenting said wired USB enabled device as a second native wireless USB enabled device to said first wireless USB enabled device.

8. The method of claim 7, further comprising: reading a device descriptor from said wired USB enabled device; modifying said device descriptor so that it is consistent with a device descriptor for any wireless USB enabled device as specified by a predetermined wireless USB standard; determining an expected amount of data to be transferred from said wired USB enabled device to said first wireless USB enabled device; modifying a predetermined wireless USB protocol to include said expected amount of data; and providing said expected amount of data to said first wireless USB enabled device.

9. The method of claim 8, wherein said device descriptor includes a standard endpoint descriptor, and said modifying said device descriptor includes setting a maximum packet size field in said standard endpoint descriptor to be consistent with wireless USB packet sizes.

10. The method of claim 8, wherein said predetermined wireless USB protocol includes a predetermined wireless USB wire adapter protocol, and said modifying said predetermined wireless USB protocol includes adding a field in a channel time allocation portion of said predetermined wireless USB wire adapter protocol, said field specifying said expected amount of data.
11. The method of claim 8, wherein said predetermined wireless USB protocol includes a predetermined wireless USB wire adapter protocol, and said modifying said predetermined wireless USB protocol includes setting a maximum packet size field in a channel time allocation portion of said predetermined wireless USB wire adapter protocol, said field specifying said expected amount of data.
12. The method of claim 7, further comprising: negotiating and maintaining a wireless USB security connection context for communication with said wireless USB enabled device; and applying said wireless USB security connection context after detecting said wired USB enabled device.
13. The method of claim 7, further comprising maintaining a unique wireless USB device address for said wired USB enabled device, and wherein presenting said wired USB enabled device includes presenting said wired USB enabled device having said unique wireless USB device address.
14. The method of claim 7, further comprising: mapping a first endpoint associated with said wired USB enabled device to a second endpoint associated with said first wireless USB enabled device; and informing said first wireless USB enabled device that a new function needs to be enumerated, wherein said new function is associated with said first wireless USB enabled device.
15. The method of claim 8, further comprising: intercepting a read descriptor request from said first wireless USB enabled device; and providing said first wireless USB enabled device with a response to said read descriptor request after said modifying said device descriptor.
16. A wireless USB enabled hub that facilitates communication between a wired USB enabled device and a first wireless USB enabled device, comprising: a first port configured to communicate with said wired USB enabled device; a second port configured to communicate with said first wireless USB enabled device; and a controller configured to: detect said wired USB enabled device; present said wired USB enabled device as a native wireless USB enabled device to said first wireless USB enabled device.
17. The wireless USB enabled hub of claim 16, wherein said controller is configured to present said hub as a device wire adapter to said first wireless USB enabled device, and said first wireless USB enabled device includes a wireless USB enabled host.
18. The wireless USB enabled hub of claim 16, wherein the controller is configured to present said wired USB enabled device to said first wireless USB enabled device as a unique wireless USB enabled device having its own address.
19. The wireless USB enabled hub of claim 16, wherein the controller is configured to present said wired USB enabled device as a separate function on the wireless USB enabled hub by mapping a wired USB enabled device endpoint to a wireless USB enabled hub endpoint.
20. The wireless USB enabled hub of claim 16, wherein said controller is configured to: maintain a wireless USB address for each of a plurality of downstream wired USB enabled devices; and respond to a wireless USB packet directed to one of said plurality of downstream USB enabled devices.
21. The wireless USB enabled hub of claim 16, wherein said controller is configured to: intercept a device descriptor request from said first wireless USB enabled device; read a device descriptor from said wired USB enabled device; modify said device descriptor so that it is consistent with a device descriptor for any wireless USB enabled device as specified by a predetermined wireless USB standard; and present said wired USB enabled device as said native wireless USB enabled device by providing said modified device descriptor to said first wireless USB enabled device.
22. The wireless USB enabled hub of claim 21, wherein said device descriptor includes a standard endpoint descriptor, and said controller is configured to modify said device descriptor by setting a maximum packet size field in said standard endpoint descriptor to be consistent with wireless USB packet sizes.
23. The wireless USB enabled hub of claim 16, wherein said controller is configured to: determine an expected amount of data to be transferred from said wired USB enabled device to said first wireless USB enabled device; and modify a predetermined wireless USB protocol to include said expected amount of data.
24. The wireless USB enabled hub of claim 23, wherein said predetermined wireless USB protocol includes a predetermined wireless USB wire adapter protocol, and said controller is configured to modify said predetermined wireless USB wire adapter protocol by setting a maximum packet size field in a channel time allocation portion, said maximum packet size field specifying said expected amount of data.
25. The wireless USB enabled hub of claim 23, wherein said predetermined wireless USB protocol includes a predetermined wireless USB wire adapter protocol, and said controller is configured to modify said predetermined wireless USB wire adapter protocol by adding a field in a channel time allocation portion of said predetermined wireless USB protocol, said field specifying said expected amount of data.
26. The wireless USB enabled hub of claim 16, wherein said controller is configured to: negotiate and maintain a wireless USB security connection context for communication with said wireless USB enabled device; and apply said wireless USB security connection context after said controller detects said wired USB enabled device.
ENHANCED WIRELESS USB PROTOCOL AND HUB

TECHNICAL FIELD

The present invention relates generally to Certified Wireless Universal Serial Bus (WUSB) interfaces. More specifically, the present invention is related to improving the throughput of Certified Wireless USB Wire Adapter systems.

BACKGROUND

Universal Serial Bus (USB) is a serial bus standard for attaching electronic peripheral devices to a host computing device. It was designed for personal computers, but its popularity has prompted it to also become commonplace on video game consoles, PDAs, portable DVD players, mobile phones, and other popular electronic devices. The goal of USB is to replace older serial and parallel ports on computers, since these were not standardized and called for a multitude of device drivers to be developed and maintained. USB was designed to allow peripherals to be connected without the need to plug expansion cards into the computer's expansion bus and to improve plug-and-play capabilities by allowing devices to be hot-swapped, wherein devices are connected or disconnected without powering down or rebooting the computer. When a device is first connected, the host enumerates and recognizes it, and loads the device driver needed for that device. USB can connect peripherals such as mouse devices, keyboards, scanners, digital cameras, printers, external storage devices, etc., and has become the standard connection method for many of these devices. The Wireless Universal Serial Bus Specification, revision 1.0 (published May 12, 2005; available from the USB Implementers Forum, Inc.) describes and specifies extensions to wired USB which enable the use of wireless links in extended USB/WUSB systems. These wireless extensions to the USB specification are referred to as Certified Wireless Universal Serial Bus or simply Wireless USB (WUSB). The extensions build on existing wired USB specifications and WiMedia Alliance MAC and PHY ultra-wideband (UWB) wireless technology. The WUSB Specification includes descriptions and specifications of devices known as Wire Adapters (WA). These devices are wired-USB-to-Wireless-USB adapters which allow "legacy" wired USB hosts and devices to be interconnected with WUSB devices in extended USB systems containing both wired and wireless links. There are two types of Wire Adapters: the Host Wire Adapter (HWA) and the Device Wire Adapter (DWA), which work in conjunction with each other. HWAs have a wired "upstream" USB port and a wireless "downstream" WUSB port, allowing a wired USB host to communicate with WUSB devices. DWAs have a wireless "upstream" WUSB port and one or more wired "downstream" USB ports, allowing wired USB devices to communicate with a Wireless USB host. The WUSB Specification Wire Adapter Protocol is used to transfer data through WAs and to control and manage WAs. Unfortunately, the Wire Adapter Protocol as specified in the WUSB Specification is, in typical situations, very inefficient, resulting in unacceptably low throughput. The inefficiency of the protocol is primarily attributable to two factors: the protocol is "chatty" in that a number of non-data messages conveying control and transfer complete status information are exchanged for each block of data transferred. In addition, the protocol does not lend itself well to "pipelining" of data flow through the system, resulting in high latency during transfer of data and therefore low throughput.
Therefore, it would be desirable to have a method for improving throughput for devices in USB systems containing both wired and wireless USB devices.

BRIEF DESCRIPTION OF THE DRAWINGS

Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures, in which:
Figure 1 shows the standard configuration of a wired USB system in accordance with the prior art;
Figure 2 shows a configuration for a Wireless USB system with a "native" WUSB device directly attached to a WUSB host;
Figure 3 shows a Device Wire Adapter connected to two wired USB devices;
Figure 4 shows a system incorporating Device Wire Adapters and a Host Wire Adapter to provide wireless USB functionality to legacy wired USB devices in accordance with the prior art;
Figure 5 shows the sequence of data packets used to communicate over the wireless USB system depicted in Figure 4;
Figures 6A and 6B are sequence diagrams illustrating the process flow for an IN request using the standard wire adapter protocol in accordance with the prior art;
Figures 7A and 7B are sequence diagrams illustrating the process flow for an OUT request using the standard wire adapter protocol in accordance with the prior art;
Figure 8 is a sequence diagram illustrating the process flow for an IN request using the enhanced wire adapter protocol as an embodiment in accordance with the present invention;
Figure 9 is a sequence diagram illustrating the process flow for an OUT request using the enhanced wire adapter protocol as an embodiment in accordance with the present invention;
Figure 10 shows a Wireless USB hub as an embodiment in accordance with the present invention;
Figure 11 shows a diagram illustrating packet flow and processing for OUT Transfer Request forwarding in an embodiment of the present invention; and
Figure 12 shows a diagram illustrating packet flow and processing for IN Transfer Request forwarding in an embodiment of the present invention.

DETAILED DESCRIPTION

An embodiment of the present invention provides an enhanced Wire Adapter Protocol for improving data throughput for a wireless USB system that includes wire adapters that wirelessly transmit data between a host system and a wired USB device. Using this protocol, wire adapters automatically segment incoming data transfers into smaller segments, wherein a wire adapter uses its buffer status to determine how much data to fetch. Data is transferred downstream without waiting to receive a complete data segment when a wire adapter receives a specified minimum amount of data from upstream. The enhanced protocol also dispenses with Transfer Complete messages and instead determines when a data transfer has completed by polling downstream for a transfer result. The wire adapters also employ forward pipe descriptors in conjunction with remote pipe descriptors to forward transfer requests downstream. Another embodiment of the present invention provides a Wireless USB (WUSB) hub that allows wireless communication between wired USB devices and a host system. The WUSB hub acts as a proxy for the wired USB devices and presents them to the host system as if they were native WUSB devices. The WUSB hub presents an attached wired USB device as a unique WUSB device with its own device address or as a separate function on an already existing device (the WUSB hub, for instance, which may enumerate as a Device Wire Adapter).
Embodiments of the present invention include improving the throughput of WUSB Wire Adapter systems. One embodiment includes streamlining the Wire Adapter Protocol to improve the throughput of WUSB Wire Adapter systems. Another embodiment includes presenting wired USB devices plugged into a WUSB hub as if they are "native" WUSB devices. This embodiment is referred to as a USB Device Proxy WUSB Hub, or simply WUSB hub. Figure 1 shows the standard configuration of a wired USB system in accordance with the prior art. In this configuration, the host system 100 includes the USB root hub hardware 101, USB root hub driver 102, and a device driver 103. The external USB device 110 includes adapter hardware of its own 111 and software 112 related to its functions. The host 100 and external device 110 are connected by a wired USB connection 120 which plugs into the respective USB adapters 101, 111. Figure 2 shows a configuration for a Wireless USB system with a "native" WUSB device directly attached to a WUSB host. The system depicted in Figure 2 is similar in layout to that depicted in Figure 1, with the major difference being that both the host 200 and the external USB device 210 have built-in wireless adapters 201, 211, respectively. These adapters 201, 211 communicate over a wireless signal provided by antennae 220, 221 instead of via a wired cable. The native Wireless USB system in Figure 2 represents the goal which much of the computer/electronics industry is attempting to reach. Currently, however, very few devices have native wireless capability. Therefore, it is desirable for the industry to accommodate systems containing both wired and wireless USB devices and hosts. The current solution for connecting wired USB devices into WUSB systems is to plug them into a Device Wire Adapter (DWA), depicted in Figure 3. Wired USB devices 310, 320 are plugged into the DWA 300 using standard USB cables 311, 321. The DWA 300 in turn provides a wireless antenna 301 that provides a wireless link to a USB host. A corresponding Host Wire Adapter (HWA) may be used by the host system to communicate with the DWA, or the DWA may communicate with the host system through a "native" WUSB host adapter. Figure 4 shows a system incorporating Device Wire Adapters and a Host Wire Adapter to provide wireless USB functionality to legacy wired USB devices. This example shows the Host Wire Adapter (HWA) 410 connected to the host 400 as an external device, which is typical of current designs. Eventually, the HWA 410 will be replaced by a native WUSB host adapter embedded inside the host 400 system. Because the HWA 410 and DWAs 420, 430 are recognized as USB devices, the host system 400 incorporates multiple software driver layers to enable communication with wired external USB devices 421, 422, 431, 432 via the HWA 410 and DWAs 420, 430. The host 400 has a wired USB root hub 401 to which the HWA 410 is connected (whether external or internal to the host housing). Next is the root hub driver 402. The host has a separate HWA driver 403 as well as a DWA driver 404. On top of these are device drivers 405-408 that are specific to the external USB devices 421, 422, 431, 432 at the end of the chain. Each of the device drivers 405-408 attaches to and communicates with the DWA driver 404. Data is communicated from the host 400 to the HWA 410 through a wired connection.
The HWA 410 then uses a wireless protocol to transmit the data to one of the DWAs 420, 430, which in turn sends the data to the specified USB device 421, 422, 431, or 432 over a wired connection. Figure 5 shows the sequence of packets used to communicate over the wireless USB system depicted in Figure 4. Due to the presence of the HWA and DWA in the system, the packet sequence 500 includes control packets inserted ahead of the data to tell the DWA which port to route the data through and get an acknowledgement. This occurs for each HWA and DWA in the system between the external device and the host. In the example shown in Figure 5, the data packet 504 is preceded by Transfer Request 503, while Transfer Request 502 is preceded by Transfer Request 501. Transfer Requests 501 and 503 direct the HWA to send Transfer Request 502 and data packet 504 to the DWA. Transfer Request 502 directs the DWA to send data packet 504 to the USB device. A transfer request only appears as such to its intended device. For example, the DWA transfer request 502 looks like data to the HWA but looks like a transfer request to the DWA. As explained above, current wireless USB systems use control packets (Transfer Requests) which are generated by multiple layers of drivers between the external device and its specific driver in the host in order to direct the flow of data to or from the target USB device. Unfortunately, this design dramatically hampers throughput. Figures 6A and 6B are sequence diagrams illustrating the process flow for an IN request using the standard wire adapter protocol. Figures 7A and 7B are sequence diagrams illustrating the process flow for an OUT request using the standard wire adapter protocol. These sequence diagrams graphically illustrate the large number of Transfer Requests necessary under the standard protocol in order to transfer data between the external USB device and its driver. Much of this complexity comes from the fact that transfer requests intended for one layer of the system are seen as data by other layers, thereby invoking acknowledgements for data receipt at each layer of the system before finally delivering the data itself to the destination. An embodiment of the present invention includes an Enhanced Wire Adapter Protocol that improves throughput by reducing the number of messages that are exchanged as part of a data transfer, thereby reducing processing time and transfer time of the messages over the wired USB interfaces and wireless medium. Throughput is also increased by improving "pipelining" of data flow through the system, which reduces transfer latency. The enhanced Wire Adapter Protocol eliminates the Transmit Complete message and instead uses polling for a Transfer Result to determine when a transfer has completed. IN data transfers may be automatically segmented ("auto-segmentation") into smaller transfers by Wire Adapters. This pushes the intelligence functions onto the Wire Adapters and away from the host software (i.e., the DWA manages the buffer). During auto-segmentation the size of each segment may vary, whereby the Wire Adapter dynamically and adaptively adjusts the segment size in order to maximize throughput for a given situation. The Wire Adapter automatically manages its available buffers by issuing IN tokens for pending transfers based on buffers being available to accept the IN data.
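By way of illustration only, the buffer-driven segment sizing described above might be modeled as in the following C sketch. None of this is part of the specification: the wa_state_t structure, the 512-byte floor, and the sizing policy itself are assumptions made for the example.

```c
/* Illustrative sketch of buffer-driven auto-segmentation: the wire
 * adapter picks each IN segment size from its currently free buffer
 * space rather than using a fixed size chosen by host software.
 * All names and the WA_MIN_SEGMENT floor are assumptions. */
#include <stddef.h>

#define WA_MIN_SEGMENT 512u    /* assumed floor so segments stay efficient */

typedef struct {
    size_t free_bytes;         /* buffer space currently available */
    size_t remaining;          /* bytes left in the overall IN transfer */
} wa_state_t;

/* Returns how many bytes to request with the next IN tokens; 0 means
 * "no buffer available yet", so no IN tokens are issued. */
static size_t wa_next_in_segment(const wa_state_t *wa)
{
    size_t seg = wa->free_bytes < wa->remaining ? wa->free_bytes
                                                : wa->remaining;
    if (seg < WA_MIN_SEGMENT && seg < wa->remaining)
        return 0;              /* wait until enough buffer frees up */
    return seg;                /* segment size adapts per iteration */
}
```

Because the adapter would issue IN tokens only when such a function returns a non-zero size, buffer exhaustion naturally back-pressures the device, mirroring the NAK-based backpressure described below for OUT data.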
In an embodiment of the present invention, a Wire Adapter driver segments a transfer request and submits all of the segments at once. The DWA automatically manages memory to complete each segment. For IN data, the Wire Adapter checks for memory before starting the IN transfers, and for OUT data, negative acknowledgements are used to backpressure a segment for which the Wire Adapter does not have enough memory. In an embodiment of the present invention, multiple transfers between a USB host and an HWA over the upstream USB interface may be aggregated into a single USB transfer in order to reduce transfer latency. In particular, a USB host may aggregate multiple OUT transfers targeted for an HWA, and an HWA may aggregate multiple IN transfers targeted for a USB host. The receiver of aggregated transfers (HWAs in the case of OUT transfers and USB hosts in the case of IN transfers) de-aggregates aggregated transfers before further processing of the data. Receivers determine data boundaries in aggregated frames by parsing the content of aggregated transfers. For example, a USB host may aggregate an OUT Transfer Request with the following OUT transfer data. The HWA receiving the aggregated transfer expects the next transfer to be a Transfer Request. It examines the first byte of the aggregated transfer in order to determine the length of the Transfer Request contained in the aggregated transfer. The wRPipe field in the Transfer Request is used to locate the associated wRPipe Descriptor, which is then used to determine that the Transfer Request is an OUT Transfer Request. Because the Transfer Request is an OUT request, the HWA treats the data in the aggregation transfer following the Transfer Request as OUT Transfer Data. Hosts and HWAs may aggregate transfers up to a maximum length of wMaxPacketSize as expressed in the Standard Endpoint Descriptor for the endpoint over which the transfer occurs. Hosts and HWAs using aggregation must be prepared for every transfer to receive up to wMaxPacketSize bytes. For IN transfers, hosts must issue an input request for wMaxPacketSize bytes. HWAs may de-aggregate "on-the-fly" as complete Control Transfers and data transfers are received. "On-the-fly" de-aggregation can help with buffer management and dataflow and may reduce end-to-end latency. The decision of when to aggregate and how many transfers to aggregate is implementation dependent. Typically "opportunistic" algorithms are used to make aggregation decisions. A host or HWA aggregates available transfers up to wMaxPacketSize.
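The receiver-side de-aggregation just described might be sketched as below. This is an illustration only: the field offsets (request length in byte 0, wRPipe at bytes 2-3, a 16-bit data length at bytes 4-5) and the helper functions are assumptions made for the sketch, not fields taken from the WUSB Specification.

```c
/* Illustrative de-aggregation loop: walk an aggregated buffer, use the
 * first byte of each Transfer Request as its length, and consult the
 * RPipe direction to decide whether OUT data follows the request. */
#include <stddef.h>
#include <stdint.h>

enum dir { DIR_OUT, DIR_IN };

/* Assumed helper: look up the direction of the RPipe descriptor. */
extern enum dir rpipe_direction(uint16_t wrpipe);

/* Assumed helper: hand one request (and any OUT data) to the engine. */
extern void dispatch(const uint8_t *req, size_t req_len,
                     const uint8_t *data, size_t data_len);

static int deaggregate(const uint8_t *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        size_t req_len = buf[off];            /* byte 0: request length */
        if (req_len < 6 || off + req_len > len)
            return -1;                        /* malformed aggregate */
        uint16_t wrpipe = (uint16_t)(buf[off + 2] | (buf[off + 3] << 8));
        size_t data_len = 0;
        if (rpipe_direction(wrpipe) == DIR_OUT)   /* OUT: data follows */
            data_len = (size_t)(buf[off + 4] | (buf[off + 5] << 8));
        if (off + req_len + data_len > len)
            return -1;
        dispatch(buf + off, req_len, buf + off + req_len, data_len);
        off += req_len + data_len;            /* advance to next boundary */
    }
    return 0;
}
```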
For OUT transfer data packets the enhanced Wire Adapter Protocol uses packets that "cut through" rather than being passed using "store and forward" transfer. Using this new approach, whenever some minimum amount of OUT data is received from the upstream port, the Wire Adapter may transfer the data on the downstream port, rather than wait until a complete segment of data is received. Conversely, the Wire Adapter automatically manages its available data buffers by putting "backpressure" on the upstream port by issuing negative acknowledgements (NAKs) when data buffers are not available to hold incoming data. The enhanced Wire Adapter Protocol allows forwarding of Transfer Requests by a Wire Adapter, thereby reducing the number of messages used to complete a data transfer. Referring back to the example in Figure 5, under the enhanced protocol the DWA Transfer Request 502 looks like a Transfer Request to the HWA and not data. Thus the HWA realizes that the incoming Transfer Request is really intended for the DWA and forwards it to the DWA. Forwarding Pipe (FPipe) descriptors are used in conjunction with Remote Pipe (RPipe) descriptors to control forwarding of Transfer Request packets. Referring to Figure 11, diagram 1100 illustrates packet flow and processing for OUT Transfer Request forwarding in an embodiment of the present invention. Diagram 1100 focuses on DWA behavior for completing a transfer when Transfer Request forwarding is implemented and the Transfer Complete message has been eliminated, as discussed above. A USB application on the host presents to the DWA driver a request to transfer data that is targeted for a USB device 1102 attached to the DWA 1104. The data 1106 to be transferred is provided with the transfer request 1108. The DWA and HWA host drivers generate a Transfer Request OUT packet and en-queue the Transfer Request and Transfer Data for transfer to the HWA 1110 over the wired USB bus. The Transfer Request 1108 contains in the wRPipe field 1112 an FPipe Descriptor number 1114 (0x8001) which references the FPipe Descriptor 1116 in the HWA 1110 to be used to forward the Transfer Request 1108. The HWA 1110 receives the Transfer Request OUT packet 1108, followed by the Transfer Data 1106, from the upstream wired USB bus. The HWA parses the Transfer Request 1108 and locates the wRPipe field 1112 in the Transfer Request 1108. In this particular example, the wRPipe field 1112 contains 0x8001. The HWA 1110 determines that the wRPipe number 1114 refers to an FPipe descriptor 1116 because the most significant bit of the wRPipe number 1114 is a one. This indicates to the HWA 1110 that the corresponding pipe descriptor is found in the FPipe Descriptor table 1118 (rather than the RPipe Descriptor table), and that the Transfer Request 1108 should be forwarded. In this particular example, since the wRPipe number 1114 is 0x8001, the index of the FPipe Descriptor 1116 in the FPipe Descriptor Table 1118 is 0x0001. The HWA 1110 locates FPipe Descriptor 0x0001 (1116) in the FPipe descriptor table 1118. The wRPipeIndex field 1120 in FPipe Descriptor 0x0001 (1116) is used to locate the Transfer Request RPipe Descriptor 1122. In this particular example the RPipe Descriptor index 1124 is 0x0001. The HWA 1110 determines the transfer request target device address, device endpoint and direction using the bDeviceAddress and bEndpointAddress 1126 in the RPipe Descriptor 1122. In this example, the Transfer Request 1108 is an OUT, which indicates to the HWA 1110 that it should expect Transfer Data 1106 following the Transfer Request 1108 on the upstream USB bulk OUT endpoint 1128, and that the HWA should use OUT RPipe Descriptor 0x0001 (1122) for delivering both the forwarding Transfer Request 1130 and the Transfer Data 1132. The HWA 1110 uses the received Transfer Request 1108 to generate a forwarding Transfer Request 1130 by replacing the wRPipe field 1112 in the received Transfer Request 1108 with the wForwardRPipe value 1134 in the FPipe Descriptor 1116. The HWA 1110 en-queues on the downstream wireless interface 1136, for transfer to the DWA 1104, the forwarding Transfer Request 1130 and the Transfer Data 1132 (once the data has been received on the upstream USB bulk OUT endpoint 1128). The HWA adds the two-byte WUSB header to the beginning of the Transfer Request packet and to the beginning of each data packet in the transfer before en-queuing them on the downstream wireless interface 1136.
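The FPipe lookup and wRPipe substitution walked through above can be sketched in C as follows. The structure layout and table names are assumptions made for illustration; only the decision rule (most significant bit selects the FPipe table, the remaining bits index it, and wForwardRPipe replaces the wRPipe field) is taken from the description.

```c
/* Sketch of the FPipe-based forwarding decision described above. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t wRPipeIndex;     /* RPipe used to deliver the forwarded request */
    uint16_t wForwardRPipe;   /* value substituted into the request */
} fpipe_desc_t;

extern fpipe_desc_t fpipe_table[];    /* assumed descriptor table */

static bool is_fpipe(uint16_t wrpipe)
{
    return (wrpipe & 0x8000u) != 0;   /* MSB set -> FPipe descriptor */
}

/* Rewrite the request's wRPipe field in place for forwarding; returns
 * the RPipe index to deliver it on, or -1 if no forwarding is needed. */
static int prepare_forward(uint16_t *wrpipe_field)
{
    if (!is_fpipe(*wrpipe_field))
        return -1;                    /* local request, handle normally */
    const fpipe_desc_t *fp = &fpipe_table[*wrpipe_field & 0x7FFFu];
    *wrpipe_field = fp->wForwardRPipe;    /* e.g. rewrite 0x8001's request */
    return fp->wRPipeIndex;               /* deliver via this RPipe */
}
```

With the example values above, a received wRPipe of 0x8001 selects FPipe Descriptor 0x0001, whose wForwardRPipe value is written into the request before it is en-queued on the RPipe named by wRPipeIndex.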
Once the forwarding Transfer Request 1130 and Transfer Data 1132 have been transferred to the DWA 1104, the HWA 1110 examines the bControl field 1138 in the OUT RPipe descriptor 1122 (0x0001, which was used to deliver the Transfer Request 1108 and Data 1106) and determines that the Automatic Request and Forwarding of Transfer Results and Transfer Data option is enabled (bit zero in the bControl field). The HWA 1110 then uses the wTransferRPipe field 1140 in the OUT RPipe Descriptor 1122 in order to locate the RPipe Descriptor 1142 associated with the bulk IN pipe 1144 of the downstream DWA 1104. Associated with the IN RPipe Descriptor 1142 is a Pending Transfer list (not shown in the diagram 1100). The HWA 1110 adds an entry in the Pending Transfer list that indicates that a Transfer Result for an OUT transfer is expected from the downstream DWA bulk IN pipe 1144. Because of the entry in the IN RPipe Pending list indicating that a Transfer Result 1146 is expected, the HWA 1110 begins issuing IN tokens to the bulk IN endpoint 1144 on the downstream DWA 1104 in order to receive the expected Transfer Result 1146. When the Transfer Result 1146 is received from the DWA 1104, the HWA 1110 uses the SrcAddr field in the packet MAC header and the Endpoint Number field in the WUSB header in order to locate the RPipe descriptor associated with the device and endpoint from which the Transfer Result was received. In this particular example, the device is the DWA 1104, the endpoint is the DWA bulk IN endpoint, and the corresponding RPipe in the HWA for the DWA endpoint is RPipe Descriptor 0x0002 (1148). The HWA 1110 locates in the RPipe Pending Transfer list the entry corresponding to the received Transfer Result 1146, based on matching Transfer IDs in the Transfer Result 1146 and the Pending Transfer list. The HWA 1110 determines from the Pending Transfer list entry that the Transfer Result 1146 is for an OUT transfer and therefore no data follows the Transfer Result 1146. The HWA 1110 examines the bControl field 1150 in IN RPipe descriptor 0x0002 (1142) and determines that the Automatic Request and Forwarding of Transfer Results and Transfer Data option is enabled (bit zero in the bControl field). Based on this option being enabled, the HWA 1110 automatically en-queues the Transfer Result packet 1146 on the upstream USB interface bulk IN endpoint 1152 for transfer to the host. The HWA 1110 then deletes the entry in the RPipe Descriptor 0x0002 (1142) Pending Transfer list corresponding to the expected Transfer Result 1146. The HWA host driver maintains pending transfer records similar to the Pending Transfer list in the HWA 1110. Based on the pending transfer records the HWA driver expects a Transfer Result 1146 for the previously transmitted Transfer Request 1108, and therefore the HWA driver requests an IN transfer which causes IN tokens to be sent to the HWA wired USB interface bulk IN endpoint 1152. The HWA sends the Transfer Result to the host as soon as the Transfer Result 1146 comes to the top of the bulk IN queue and an IN token is received. When the Transfer Result 1146 is passed to the HWA driver, the HWA driver updates its records based on the information in the Transfer Result 1146, and the WUSB transfer is complete.
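The Pending Transfer list bookkeeping described above might be modeled as in the following sketch. The fixed-size table, its fields, and the helper names are assumptions; the behavior modeled (record that a result is expected, match the arriving Transfer Result by Transfer ID, learn whether data follows, then delete the entry) follows the description.

```c
/* Illustrative Pending Transfer list: one entry per expected Transfer
 * Result on a downstream bulk IN pipe. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_PENDING 16            /* assumed capacity */

typedef struct {
    uint32_t transfer_id;
    bool     is_in;               /* IN transfers have data after result */
    bool     used;
} pending_t;

static pending_t pending[MAX_PENDING];

static void pending_add(uint32_t id, bool is_in)
{
    for (int i = 0; i < MAX_PENDING; i++)
        if (!pending[i].used) {
            pending[i] = (pending_t){ id, is_in, true };
            return;               /* IN tokens may now be issued */
        }
}

/* Returns true when a received result matches an entry; *is_in tells
 * the caller whether Transfer Data follows the result. */
static bool pending_match(uint32_t id, bool *is_in)
{
    for (int i = 0; i < MAX_PENDING; i++)
        if (pending[i].used && pending[i].transfer_id == id) {
            *is_in = pending[i].is_in;
            pending[i].used = false;   /* delete entry once matched */
            return true;
        }
    return false;
}
```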
Referring to Figure 12, diagram 1200 illustrates packet flow and processing for IN Transfer Request forwarding in an embodiment of the present invention. The packet flow and processing for IN Transfer Request forwarding is similar to the packet flow and processing for OUT Transfer Request forwarding. The differences are as follows: The wRPipe field 1202 (0x8002) in the IN forwarding Transfer Request 1204 references an FPipe Descriptor 1206 (0x0002) containing a wRPipeIndex field 1208 (0x0002) that references an IN RPipe 1210. Because the RPipe in this case is for an IN, the HWA 1212 does not expect Transfer Data to follow the Transfer Request 1204, and therefore forwards the Transfer Request 1204 with no following data. The HWA 1212 uses the wTransferRPipe field 1211 in the IN RPipe 1210 (0x0002) to locate the OUT RPipe 1214 that is used to deliver the Transfer Request 1204. After the Transfer Request 1204 is transferred to the DWA 1216, the HWA 1212 adds an entry in the Pending Transfer list that indicates that a Transfer Result 1218 for an IN transfer is expected from the downstream DWA bulk IN pipe 1219. When the HWA 1212 receives the Transfer Result 1218, it then attempts to read from the downstream DWA bulk IN endpoint 1219 the number of bytes indicated in the dwTransferLength field of the received Transfer Result 1218. The HWA 1212 expects the data because the corresponding entry in the Pending Transfer list indicates an IN transfer. After the HWA 1212 receives the expected data, it en-queues the Transfer Result 1218 and Transfer Data 1220 on the upstream wired USB bulk IN endpoint 1222 for transfer to the host. If automatic segmentation is enabled, the HWA 1212 may segment the data and en-queue a Transfer Result with each data segment. The bTransferSegment field in each Transfer Result is set to the segment number.
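The IN-side handling of a Transfer Result (reading dwTransferLength bytes of Transfer Data before queuing both upstream) can be sketched as follows. The helper functions and buffer size are assumptions, and a real implementation would segment the data as described above rather than reject oversized transfers.

```c
/* Illustrative IN-result handling: the pending entry marked this as an
 * IN transfer, so dwTransferLength bytes of data are read from the
 * downstream bulk IN endpoint and forwarded upstream after the result. */
#include <stddef.h>
#include <stdint.h>

extern size_t bulk_in_read(uint8_t *dst, size_t n);     /* assumed HAL */
extern void queue_upstream(const void *p, size_t n);    /* assumed HAL */

static uint8_t data_buf[4096];     /* assumed staging buffer */

static void handle_in_result(const uint8_t *result, size_t result_len,
                             uint32_t dwTransferLength)
{
    if (dwTransferLength > sizeof data_buf)
        return;                    /* sketch only; real code would segment */
    size_t got = 0;
    while (got < dwTransferLength)                       /* read all data */
        got += bulk_in_read(data_buf + got, dwTransferLength - got);
    queue_upstream(result, result_len);                  /* result first */
    queue_upstream(data_buf, dwTransferLength);          /* then the data */
}
```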
For Transfer Request forwarding, the Wire Adapter host driver maintains an association between Transfer IDs in a Transfer Request accepted from another Wire Adapter host driver and the resulting transfer generated by the Wire Adapter. It then uses this association to modify the Transfer ID in Transfer Request and Transfer Result packets. Transfer IDs are 32-bit values used by Wire Adapter drivers and Wire Adapters to uniquely identify transfers and to associate packets with specific transfers. For each transfer initiated by an HWA or DWA host driver, a unique Transfer ID is generated and placed in the corresponding Transfer Request. When generating Transfer Result packets, HWAs and DWAs place in the Transfer Result the Transfer ID from the Transfer Request for the transfer. With the standard Wire Adapter Protocol, Transfer IDs are unique in the context of a DWA or HWA driver instance and corresponding Wire Adapter. Take for example the case of a DWA attached to an HWA which in turn is attached to a host. When the DWA driver generates a Transfer Request it selects a unique Transfer ID to place in the Transfer Request. The HWA driver and HWA deliver the DWA driver's Transfer Request to the DWA without examining the content of the Transfer Request (as far as the HWA sub-system is concerned, the Transfer Request is data to be transferred). When the Transfer Request is delivered to the DWA, the DWA parses the Transfer Request in order to determine what to do, and uses the Transfer ID in the resulting Transfer Result. However, in order for the HWA driver to deliver the DWA Transfer Request, the HWA driver generates its own Transfer Request for the HWA. The Transfer ID placed in the HWA Transfer Request that is used to deliver the DWA Transfer Request is unrelated to the Transfer ID in the DWA Transfer Request. Stated differently, each Wire Adapter driver "layer" maintains its own set of unique Transfer IDs and is unaware of the Transfer IDs generated and used by other Wire Adapter drivers. However, when Enhanced Wire Adapter Protocol Transfer Request forwarding is being used, handling of Transfer IDs is necessarily different than with the standard protocol. When a Transfer Request is forwarded from one Wire Adapter to another, the Transfer Request is processed by both the forwarding and target Wire Adapters, and the Transfer ID is used by both Wire Adapters. In this case both Wire Adapters are working with the same set of Transfer IDs. The same is true of forwarded Transfer Result packets; the same Transfer ID is used by one or more Wire Adapters. Forwarding does not require special handling of Transfer IDs by the Wire Adapters, but does require specific processing by the host drivers. If forwarding is being used by a host driver and its corresponding Wire Adapter, then when the host driver accepts a Transfer Request from another ("upstream") host driver it parses the Transfer Request to locate the Transfer ID. The host driver then generates a new Transfer ID that is unique within the scope of the driver and Wire Adapter and places the new Transfer ID in the Transfer Request before forwarding it to the next driver. The host driver also creates a record that provides the association between the Transfer ID provided by the upstream Wire Adapter and the Transfer ID generated by the host driver. Then, when the host driver receives a Transfer Result, it looks up the Transfer Result Transfer ID in its Transfer ID association table and determines that the corresponding Transfer Request was forwarded. It then replaces the Transfer ID with the value from the Transfer ID association table (i.e., with the original Transfer ID) before passing the Transfer Result to the original requesting driver. In this way the effect of forwarding on Transfer IDs by a "downstream" driver is made transparent to an "upstream" host driver. Alternately, implementations are possible where a single host driver manages two or more serially connected Wire Adapters (for example a DWA connected to an HWA). In this case the single host driver is fully aware of the effect of forwarding and can account for it by generating Transfer IDs that are used by both the DWA and HWA. Under the enhanced protocol a Wire Adapter automatically polls a downstream Wire Adapter for Transfer Result packets based on a previous Transfer Request to that downstream Wire Adapter. Similarly, the Wire Adapter forwards any received Transfer Result to the upstream interface rather than generating and forwarding a Transmit Complete and new Transfer Result. Rather than the poll for downstream data being initiated by receiving a Transfer Request from the host driver, the Wire Adapter automatically polls a downstream device for IN data based on previously receiving a Transfer Result for an IN data transfer. Similarly, rather than generating and forwarding a Transmit Complete and new Transfer Result, the Wire Adapter simply forwards IN Transfer Data to the upstream interface after reception.
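The Transfer ID association table that the forwarding host driver maintains, as described above, might look like the following sketch. Table size and names are assumptions; the two operations mirror the description: substitute a locally unique Transfer ID on the way down, and restore the upstream driver's original ID when the matching Transfer Result comes back.

```c
/* Illustrative Transfer ID association table for forwarding drivers. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_ASSOC 32              /* assumed capacity */

typedef struct { uint32_t upstream_id, local_id; bool used; } assoc_t;

static assoc_t assoc[MAX_ASSOC];
static uint32_t next_local_id = 1;

/* Called when accepting a Transfer Request from an upstream driver:
 * returns the new, locally unique Transfer ID to forward (0 = table full). */
static uint32_t remap_for_forward(uint32_t upstream_id)
{
    for (int i = 0; i < MAX_ASSOC; i++)
        if (!assoc[i].used) {
            assoc[i] = (assoc_t){ upstream_id, next_local_id++, true };
            return assoc[i].local_id;
        }
    return 0;
}

/* Called on a received Transfer Result: restores the original ID so the
 * forwarding remains transparent to the upstream driver. */
static bool restore_on_result(uint32_t local_id, uint32_t *upstream_id)
{
    for (int i = 0; i < MAX_ASSOC; i++)
        if (assoc[i].used && assoc[i].local_id == local_id) {
            *upstream_id = assoc[i].upstream_id;
            assoc[i].used = false;
            return true;
        }
    return false;   /* result for a transfer this driver originated */
}
```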
In one embodiment of the present invention, multiple Wire Adapter drivers (HWA and DWA) may be combined into a single Wire Adapter driver in order to reduce the number of Application Programming Interfaces (APIs) and therefore the latency incurred by passing messages across the APIs. In addition, combining multiple Wire Adapter drivers allows consolidation in the assignment of Transfer IDs, such that when Transfer Request forwarding is being used the need to maintain the association between Transfer IDs in Transfer Requests presented from an upstream driver and the Transfer IDs in issued Transfer Requests is eliminated. Figure 8 is a sequence diagram illustrating the process flow for an IN request using the enhanced wire adapter protocol as an embodiment in accordance with the present invention. Figure 9 is a sequence diagram illustrating the process flow for an OUT request using the enhanced wire adapter protocol as an embodiment in accordance with the present invention. Figures 8 and 9 illustrate the reduction in control overhead provided by the enhanced Wire Adapter Protocol in comparison to the sequence diagrams shown in Figures 6A, 6B, 7A, and 7B depicting the existing protocol. Figure 10 shows a Wireless USB hub as an embodiment in accordance with the present invention. As explained above, the current Wire Adapter protocol is relatively inefficient, whereas the WUSB protocol for "native" WUSB devices is relatively efficient. The Proxy WUSB Hub 1000 takes advantage of this efficiency by presenting wired USB devices 1010, 1020 as if they are "native" WUSB devices. The Proxy WUSB Hub 1000 is similar to a DWA in that it has a wireless upstream port 1001 and one or more wired USB downstream ports 1002, 1003, wherein wired USB devices 1010, 1020 may be plugged into the downstream wired USB ports. The WUSB Hub differs from a DWA in that the wired USB devices 1010, 1020 appear to the host system as if they are native WUSB devices. This is accomplished by having the WUSB Hub 1000 "proxy" the attached downstream wired devices on the wireless interface. Therefore, the WUSB hub 1000 appears to the host as one or more WUSB devices, not a DWA, which eliminates much of the control packet overhead used by the standard Wire Adapter Protocol. An attached wired USB device either 1) is presented as a unique WUSB device with its own device address, or 2) is presented as a separate function on an already existing device (the WUSB hub for instance, which may enumerate as a DWA). With this latter approach the wired device endpoints are mapped into WUSB hub endpoints. The WUSB Hub uses various mechanisms in order to properly proxy a USB device to the host as if it is a WUSB device. The following description applies to the case when the WUSB hub presents downstream USB devices as WUSB devices, rather than as functions on the DWA. Each WUSB device maintains a security connection context. The connection context is negotiated during first-time connection. The WUSB Hub negotiates the security connection context for the USB devices without the knowledge or involvement of the downstream devices and stores the security connection context. A sufficient number of unique security connection contexts are negotiated and maintained in order to support the maximum number of downstream USB devices that the WUSB hub is capable of simultaneously proxying. A specific connection context is not tied to a particular downstream USB device; rather, the connection contexts are applied as needed when USB devices are attached. The WUSB hub maintains a unique WUSB device address for each attached proxy USB device, and it participates in the WUSB protocol as if it were the WUSB devices it is proxying, rather than appearing as an intervening device (like a DWA). The WUSB Hub detects USB attachment either directly or by intercepting interrupt packets from attached downstream hubs. Upon detecting attachment of downstream devices the WUSB hub does not forward interrupt packets to the host or directly inform the host of USB attachment. Instead, the WUSB hub performs a WUSB device connection procedure on behalf of the USB device. The WUSB hub reads the descriptors of the USB devices and modifies them so that they are consistent with descriptors for WUSB devices. For example, the maximum packet size field in the Standard Endpoint Descriptor is modified so that it is consistent with WUSB packet sizes. The bmAttributes field in the Standard Configuration Descriptor is set to indicate that the device is self-powered.
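The endpoint-descriptor rewriting described above can be illustrated as follows. The byte offsets follow the standard USB endpoint descriptor layout; the particular packet-size value chosen is an assumption for the sketch, not a value taken from the WUSB Specification.

```c
/* Illustrative sketch of the hub's descriptor proxying: patch the wired
 * device's endpoint descriptor so wMaxPacketSize is consistent with
 * wireless USB packet sizes before presenting it to the host. */
#include <stdint.h>

#define WUSB_MAX_PACKET  3584u   /* assumed WUSB-consistent size */
#define USB_DT_ENDPOINT  0x05

/* Standard endpoint descriptor layout: bLength (0), bDescriptorType (1),
 * bEndpointAddress (2), bmAttributes (3), wMaxPacketSize (4-5, LE),
 * bInterval (6). */
static void proxy_patch_endpoint(uint8_t *desc)
{
    if (desc[1] != USB_DT_ENDPOINT)
        return;                              /* not an endpoint descriptor */
    desc[4] = (uint8_t)(WUSB_MAX_PACKET & 0xFF);
    desc[5] = (uint8_t)(WUSB_MAX_PACKET >> 8);
}
```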
For some USB devices which do not use zero-packet-length semantics to indicate the end of transfers, the lengths of IN transfers are used so that the correct number of bytes are read. However, in the case of a WUSB hub, the length of IN transfers is not available with the WUSB protocol. Without transfer requests, there is no previously declared limit on the amount of data to read. Fortunately, the upstream wireless device is provided with the expected length of transfers. The WUSB protocol is modified slightly in order to support the WUSB hub. Two options are available. In the first option, the maximum packet size field in IN Channel Time Allocation (CTA) Information Elements (IEs) sent in the WUSB host's Micro-scheduled Management Control (MMC) can be used to indicate the expected transfer length. Alternatively, a field may be added to CTAs to indicate the expected transfer length. In an embodiment of the present invention, the WUSB hub includes a controller 1004 to perform the functions and operations of the WUSB hub discussed above. For the case in which a WUSB hub proxies one or more downstream USB devices as functions on the WUSB hub/DWA, the above description generally applies, except that the security connection context and WUSB device address are shared with the WUSB hub. In addition, when a USB device attaches, the WUSB hub maps WUSB hub wireless endpoints one-for-one to USB device endpoints and treats the collection of endpoints associated with a particular USB device as a function on the WUSB hub. The WUSB hub informs the host that a new function needs to be enumerated in order to activate support for a newly attached device. Although embodiments of the present disclosure have been described in detail, those skilled in the art should understand that they may make various changes, substitutions and alterations herein without departing from the spirit and scope of the present disclosure.
A device (102) is provided that includes a chip (110) having a processor (112) and wake-up logic (116). The device also includes power management circuitry (108) coupled to the chip. The power management circuitry selectively provides a core power supply and an input/output (I/O) power supply to the chip. Even if the power management circuitry cuts off the core power supply to the chip, the wake-up logic detects and responds to wake-up events based on power provided by the I/O power supply.
CLAIMS
What is claimed is:
1. A device, comprising: a chip having a processor and wake-up logic; and power management circuitry coupled to the chip, the power management circuitry selectively provides a core power supply and an input/output (I/O) power supply to the chip; wherein, even if the power management circuitry cuts off the core power supply to the chip, the wake-up logic detects and responds to wake-up events based on power provided by the I/O power supply.
2. The device of Claim 1, wherein the chip comprises a voltage converter that provides a reduced I/O power supply voltage for use by the wake-up logic.
3. The device of Claim 2, wherein the chip comprises an input/output (I/O) ring that supplies the reduced I/O power supply voltage to circuitry of the wake-up logic positioned around the chip.
4. The device of Claim 1, wherein the wake-up logic captures and stores a plurality of pad states before the core power supply is cut off.
5. The device of Claim 4, wherein, while the core power supply is cut off, one or more of the following occurs: a) the wake-up logic compares stored pad states with new pad states to detect pad state changes; b) the wake-up logic propagates a pad state change using daisy chained logic; c) circuitry of the wake-up logic is polled to determine an origin of a pad state change propagated using the daisy chained logic; d) the wake-up logic correlates detected pad state changes with different wake-up events; e) the wake-up logic selectively executes wake-up routines.
6. The device of any of Claims 1-5, wherein at least one of the wake-up routines is performed without providing the core power supply to the chip.
7. The device of any of Claims 1-5, wherein at least one of the wake-up routines causes the core power supply to be provided to some, but not all, core components of the chip.
8. A chip, comprising: a core with a processor; and wake-up logic coupled to the core, wherein the core operates based on a core power supply; wherein, even if the core power supply is cut off externally to the chip, the wake-up logic operates based on an I/O power supply.
9. The chip of Claim 8, further comprising a voltage converter that reduces the I/O power supply voltage for use by the wake-up logic.
10. The chip of Claim 8 or 9, further comprising a plurality of pads and circuitry associated with each pad, wherein, if the core power supply is on, the circuitry propagates a signal state from the core and wherein, if the core power supply is cut off, the circuitry propagates a signal state captured prior to the core power supply being cut off.
11. The chip of Claim 8 or 9, further comprising a plurality of pads and circuitry associated with each pad, wherein the circuitry is daisy chained to propagate a pad state change for use by the wake-up logic.
12. The chip of Claim 8 or 9, further comprising a plurality of pads and circuitry associated with each pad, wherein the circuitry detects and stores pad state changes for use by the wake-up logic.
13. A method, the method comprising: cutting off a core power supply to a chip; and monitoring wake-up events for the chip using circuitry powered based on an input/output (I/O) power supply.
14. The method of Claim 13, further comprising selectively propagating, for at least one pad of the chip, a current core signal state and a stored core signal state.
15. The method of Claim 13, wherein said monitoring wake-up events comprises detecting and storing, for at least one pad of the chip, a pad state change.
16. The method of Claim 13, wherein said monitoring wake-up events comprises propagating a pad state change using daisy chained logic.
17. The method of Claim 16, wherein said monitoring wake-up events further comprises polling components of the circuitry after receiving the propagated pad state change.
DETECTING WAKE-UP EVENTS FOR A CHIP BASED ON AN I/O POWER SUPPLY

The disclosure is directed to mobile devices or other battery operated devices, and more particularly, but not by way of limitation, to devices which selectively shut off power to internal circuitry to minimize power consumption.

BACKGROUND

Mobile devices and other battery operated devices are dependent on a limited power supply. In order to increase the operational duration of such devices without recharging or replacing batteries, efforts are made to minimize unnecessary power consumption. For example, if an integrated circuit of a device is not continuously needed, the integrated circuit can be selectively powered on and off. Many integrated circuits are selectively powered on and off locally (e.g., by switches within the integrated circuit). This method suffers from undesirable leakage current (power consumption) due, among other things, to imperfect switches.

SUMMARY

In at least some embodiments, a device comprises a chip having a processor and wake-up logic. The device further comprises power management circuitry coupled to the chip. The power management circuitry selectively provides a core power supply and an input/output (I/O) power supply to the chip. Even if the power management circuitry cuts off the core power supply to the chip, the wake-up logic detects and responds to wake-up events based on power provided by the I/O power supply.
In at least some embodiments, a chip comprises a core with a processor. The chip further comprises wake-up logic coupled to the core. The core operates based on a core power supply. If the core power supply is cut off externally to the chip, the wake-up logic operates based on an I/O power supply.
In at least some embodiments, a method comprises cutting off a core power supply to a chip. The method further comprises monitoring wake-up events for the chip using circuitry powered based on an input/output (I/O) power supply.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a device in accordance with embodiments of the disclosure;
FIG. 2 illustrates an integrated circuit in accordance with embodiments of the disclosure;
FIG. 3 illustrates circuitry to selectively store a pad state in accordance with embodiments of the disclosure;
FIG. 4 illustrates circuitry to propagate a wake-up event signal in accordance with embodiments of the disclosure;
FIG. 5 illustrates circuitry to detect changes to a pad state in accordance with embodiments of the disclosure;
FIG. 6 illustrates a method in accordance with embodiments of the disclosure;
FIG. 7 illustrates another method in accordance with embodiments of the disclosure; and
FIG. 8 illustrates another method in accordance with embodiments of the disclosure.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Example embodiments of the disclosure involve semiconductor chips having core logic and wake-up logic. To conserve power, a core power supply to the core logic is selectively cut off from a location external to the chip. Unlike the core logic, the wake-up logic is powered by an input/output (I/O) power supply that always remains on. In at least some embodiments, the I/O power supply is modified prior to powering the wake-up logic. The wake-up logic detects and responds to wake-up events even if the core power supply has been cut off. FIG. 1 shows a device 102 in accordance with embodiments of the disclosure.
The device 102 may be a cell phone, a smart phone, a personal digital assistant (PDA), an MP3 player, or another battery operated device now known or later developed. As shown in FIG. 1, the device 102 comprises an LCD panel 104 coupled to a multi-function chip 110. The multi-function chip 110 comprises a processor 112 having various embedded functions 114 and wake-up logic 116 having at least one wake-up routine 118. In at least some embodiments, the multi-function chip 110 represents a "system on a chip" (SoC) and may comprise other components now known or later developed. As shown in FIG. 1, the device 102 further comprises power management circuitry 108 coupled to the multi-function chip 110. The power management circuitry 108 selectively provides a core power supply and an input/output (I/O) power supply to the multi-function chip 110. In at least some embodiments, the I/O power supply has a higher voltage level than the core power supply. As an example, the core power supply may be 0.9 Volts and the I/O power supply may be 1.8 Volts. As shown, the multi-function chip 110 also comprises a DC/DC converter 120 such as a low dropout converter (LDO) that reduces the voltage level of the I/O power supply for use by the wake-up logic 116. The device 102 also comprises a user input device 106 coupled to the multi-function chip 110. The user input device 106 enables a user to interface with the device 102 and may correspond to a keyboard, a keypad, a touchpad, buttons, switches or other input devices now known or later developed. In at least some embodiments, the power management circuitry 108 receives a request to cut off the core power supply. For example, this request may be in response to user input provided via the user input device 106. Alternatively, the request may be based on inactivity of the device 102 or of the multi-function chip 110. Alternatively, an embedded function 114 or other routine may cause the processor 112 to issue the request to the power management circuitry 108. In response to the request, the power management circuitry 108 selectively cuts off the core power supply to the multi-function chip 110, but continues to provide the I/O power supply. The wake-up logic 116 also receives notification of the request to cut off the core power supply. In response to receiving the request, the wake-up logic 116 stores the current pad state (low or high) of some or all pads of the multi-function chip 110. The timing or order in which the power management circuitry 108 and the wake-up logic 116 receive and process the request can vary as long as the wake-up logic 116 is able to store the current pad states before the core power supply is cut off. As an example, if the wake-up logic 116 receives the request to cut off the core power supply, the wake-up logic 116 responds by storing the current pad states. Once storage of the current pad states is complete, the request to cut off the core power supply could be forwarded or confirmed to the power management circuitry 108, which then cuts off the core power supply. While the core power supply is cut off, the wake-up logic 116 continues to function based on the reduced I/O power supply and monitors the occurrence of wake-up events. For example, the wake-up logic 116 may comprise circuitry that detects and stores changes to the stored pad states. Such changes could be caused, for example, by external signals being received by the multi-function chip 110.
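The capture-before-cut ordering described in this paragraph can be summarized in the following sketch. The function names are hypothetical, introduced only for illustration.

```c
/* Illustrative sketch (hypothetical function names, not from the
 * disclosure) of the hand-off: pad states are captured by the wake-up
 * logic before the power management circuitry removes the core supply,
 * after which only the I/O-powered monitor keeps running. */
extern void wakeup_capture_pad_states(void);  /* store current pad states */
extern void pmic_cut_core_supply(void);       /* external power management */

static void core_power_down(void)
{
    wakeup_capture_pad_states();   /* must finish before the cut */
    pmic_cut_core_supply();        /* core off; I/O supply stays on */
    /* From here on, only the wake-up logic powered from the I/O supply
     * observes the pads for changes against the stored states. */
}
```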
In at least some embodiments, the external signals are activated by a user interacting with the user input device 106 (e.g., by touching a keyboard, a keypad, a touchpad, a button, a switch or other input devices). If the wake-up logic 116 detects changes to a stored pad state, the wake-up logic 116 responds by polling its circuitry to determine which pad was affected. The wake-up logic 116 then correlates the affected pad with one of various wake-up events and responds accordingly. For example, the wake-up logic 116 may respond to a particular wake-up event by executing one or more wake-up routines 118. The wake-up routines 118 enable various tasks to be performed such as refreshing an image on the LCD panel 104, restoring the core power supply to parts of the multi-function chip 110 or restoring the core power supply to all of the multi-function chip 110. In at least some embodiments, one or more of the wake-up routines 118 cause the multi-function chip 110 to perform a temporary task, after which the multi-function chip 110 returns to an "off" state. The temporary task may or may not require restoring the core power supply to the multi-function chip 110. FIG. 2 illustrates an integrated circuit 200 (e.g., the multi-function chip 110 of FIG. 1) in accordance with embodiments of the disclosure. As shown in FIG. 2, the integrated circuit 200 comprises a core 202 powered by a core power supply. The core 202 represents, for example, the processor 112, the embedded functions 114, communication logic, or other core logic. The integrated circuit 200 further comprises an I/O ring 208 which runs around the perimeter of the core 202. The I/O ring 208 receives power from an LDO 206 which supplies a reduced I/O power supply ("VDDU") for use by circuitry located along the I/O ring 208. Embodiments of the circuitry are described in FIGS. 3-5. As will later be described, the circuitry is able to monitor wake-up events by detecting external signals received by the pads 210 of the integrated circuit 200. The LDO 206 also provides VDDU to the wake-up logic 204. The wake-up logic 204, together with the circuitry of FIGS. 3-5, can perform, for example, the functions described for the wake-up logic 116 discussed in FIG. 1. In at least some embodiments, VDDU is provided from the LDO 206 to the I/O ring 208 and the wake-up logic 204 even if the core power supply (VDD) to the integrated circuit 200 has been cut off. FIG. 3 illustrates circuitry 300 to selectively store a pad state in accordance with embodiments of the disclosure. The circuitry 300 can be implemented for some or all pads 210 of the integrated circuit 200. As shown in FIG. 3, the circuitry 300 comprises logic 302 (e.g., a level shifter) that receives the signal A from the core 202. If VDD and VDDS are high and the logic 302 is enabled, the signal A is propagated to a pad 210 via buffers 308 and 310. In at least some embodiments, the logic 302 is enabled/disabled based on an isolation control signal ("ISO") which is propagated by logic 304 when VDD and VDDU are high. In the embodiment of circuitry 300, the logic 302 is only disabled when ISO is propagated by the logic 304 and is high. As an example, ISO may be high when VDD is cut off or is going to be cut off and may be low otherwise. The circuitry 300 also comprises logic 306 (e.g., a latch) which selectively captures and propagates the signal A. As an example, if ISO is low, the logic 306 may capture but not propagate the signal A (i.e., ISO is used to enable/disable propagation of the signal captured by the logic 306). If ISO is high and VDD is off, the logic 306 maintains and propagates the last captured state of the signal A (i.e., while VDD is off, new states for the signal A are not captured). In this manner, a valid state for the signal A can be provided to the given pad 210 of the integrated circuit 200. The stored state of the signal A can be propagated to other components of a device (e.g., the device 102) or can be compared with external signals being received at the given pad 210 to detect wake-up events. While the circuitry 300 shows one embodiment that selectively stores a pad state, other embodiments are possible. In general, embodiments such as the circuitry 300 propagate a signal ("A") from the core 202 to a given pad 210 if the core power supply ("VDD") is on. If VDD is cut off or is going to be cut off, the circuitry 300 stores and propagates the last state of the signal A (before VDD is cut off) to the given pad 210.
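The behavior of circuitry such as the circuitry 300 can be summarized in the following behavioral C model. This is an illustration, not RTL, and the names are assumptions; it only captures the selection between the live core signal and the state latched before power-down.

```c
/* Behavioral model of the pad-state isolation latch: while the core
 * supply is on and ISO is low, the pad follows the core signal A and
 * the latch keeps capturing it; once ISO goes high ahead of a core
 * power cut, the last captured value of A is driven instead. */
#include <stdbool.h>

typedef struct {
    bool stored_a;     /* last core state captured before power-down */
} pad_latch_t;

static bool pad_drive(pad_latch_t *p, bool core_a, bool iso, bool vdd_on)
{
    if (!iso && vdd_on)
        p->stored_a = core_a;     /* transparent: keep capturing A */
    return iso ? p->stored_a      /* isolated: hold last good state */
               : core_a;          /* normal: follow the core */
}
```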
FIG. 3 illustrates circuitry 300 to selectively store a pad state in accordance with embodiments of the disclosure. The circuitry 300 can be implemented for some or all pads 210 of the integrated circuit 200. As shown in FIG. 3, the circuitry 300 comprises logic 302 (e.g., a level shifter) that receives the signal A from the core 202. If VDD and VDDS are high and the logic 302 is enabled, the signal A is propagated to a pad 210 via buffers 308 and 310. In at least some embodiments, the logic 302 is enabled/disabled based on an isolation control signal ("ISO") which is propagated by logic 304 when VDD and VDDU are high. In the embodiment of circuitry 300, the logic 302 is only disabled when ISO is propagated by the logic 304 and is high. As an example, ISO may be high when VDD is cut off or is going to be cut off and may be low otherwise.

The circuitry 300 also comprises logic 306 (e.g., a latch) which selectively captures and propagates the signal A. As an example, if ISO is low, the logic 306 may capture but does not propagate the signal A (i.e., ISO is used to enable/disable propagation of the signal captured by the logic 306). If ISO is high and VDD is off, the logic 306 maintains and propagates the last captured state of the signal A (i.e., while VDD is off, new states for the signal A are not captured). In this manner, a valid state for the signal A can be provided to the given pad 210 of the integrated circuit 200. The stored state of the signal A can be propagated to other components of a device (e.g., the device 102) or can be compared with external signals being received at the given pad 210 to detect wake-up events.

While the circuitry 300 shows one embodiment that selectively stores a pad state, other embodiments are possible. In general, embodiments such as the circuitry 300 propagate a signal ("A") from the core 202 to a given pad 210 if the core power supply ("VDD") is on. If VDD is cut off or is going to be cut off, the circuitry 300 stores and propagates the last state of the signal A (before VDD is cut off) to the given pad 210.
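The hold behavior of the circuitry 300 can be captured in a small behavioral model. The C sketch below is purely illustrative; the struct and function names are invented and do not correspond to elements of the disclosure.

```c
/* Behavioral model of the pad-state hold described for circuitry 300:
 * while ISO is low the latch tracks signal A; once ISO goes high (VDD cut),
 * the pad is driven from the last captured state. */
#include <stdio.h>

struct pad_cell {
    int latched_a; /* state captured by logic 306 */
};

/* One evaluation step: returns the level driven onto the pad. */
static int pad_cell_eval(struct pad_cell *c, int a, int iso)
{
    if (!iso)
        c->latched_a = a;     /* capture new states while ISO is low */
    return iso ? c->latched_a /* hold last captured state while VDD is off */
               : a;           /* normal path: core signal A drives the pad */
}

int main(void)
{
    struct pad_cell c = {0};
    printf("%d\n", pad_cell_eval(&c, 1, 0)); /* ISO low: pad follows A=1 */
    printf("%d\n", pad_cell_eval(&c, 0, 1)); /* ISO high: pad holds 1 */
    return 0;
}
```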
FIG. 4 illustrates circuitry 400 to propagate a wake-up event signal in accordance with embodiments of the disclosure. The circuitry 400 can be implemented for some or all pads 210 of the integrated circuit 200. In at least some embodiments, the circuitry 400 comprises logic powered by the VDDU domain 404A, 404B. For example, a daisy chain of OR gates 406A, 406B can be used to propagate a local wake-up event from any pad of interest to a power controller (PC) 402 associated with the wake-up logic 204. In at least some embodiments, the VDDU domain 404A, 404B corresponds to the I/O ring 208 discussed in FIG. 2. In other words, the circuitry 400 can be positioned around the perimeter of the core 202 and can be powered by the I/O ring 208. For example, the VDDU domain 404A and the OR gate 406A can be positioned near the pad 210A and a filter 408A which handles spikes in power. Similarly, the VDDU domain 404B and the OR gate 406B can be positioned near the pad 210B and a filter 408B. While the circuitry 400 shows one embodiment that propagates local wake-up event signals to the wake-up logic 204, other embodiments are possible.

FIG. 5 illustrates circuitry 500 to detect changes to a pad state in accordance with embodiments of the disclosure. The circuitry 500 can be implemented for some or all pads 210 of the integrated circuit 200. In at least some embodiments, the circuitry 500 is separate from the circuitry 300. As shown in FIG. 5, the circuitry 500 may include a daisy chained OR gate 406 such as the OR gates 406A and 406B discussed for FIG. 4. In FIG. 5, the logic 502 propagates a pad state. The pad state is received by an XOR gate 506 and a latch 504 which is selectively enabled by a control signal ("WUCLKIN"). Changes to the pad state cause the XOR gate 506 to output a "1". The output of the XOR gate 506 is input to an AND gate 508 which also receives a control signal ("WUEN") as input. WUEN selectively permits the output of the XOR gate 506 to be propagated to the RS latch 514. As shown, WUEN is provided to the AND gate 508 via logic 510 and 512 based on an enable signal ("WUCLKIN"). In at least some embodiments, WUEN is high if VDD has been cut off or is going to be cut off. In other words, changes to a pad state are monitored while VDD is cut off.

If the AND gate 508 outputs a "1" (indicating a pad state change), the RS latch 514 captures this output and asserts a local wake-up event signal ("WUEVNT_U") to the daisy chained OR gate 406 (e.g., 406A or 406B) and to logic 516. The OR gate 406 outputs a wake-up signal ("WUOUT") to the next component of the daisy chained logic (e.g., 406A to 406B) and so on until the wake-up logic 204 receives notification that a pad state change has occurred. In response, the wake-up logic 204 polls the circuitry 500 (e.g., the output "WUEVNT" of the logic 516) of each pad of interest to determine which pad experienced the state change. The wake-up logic 204 is able to interpret pad state changes and react accordingly.

As shown in FIG. 5, the daisy chained OR gate 406 may also receive an external wake-up signal ("WUIN") as input. WUIN is received from a previous component of the daisy chained logic of which the OR gate 406 is a part. If WUIN is high, the OR gate 406 propagates a high signal to the next component of the daisy chained logic (i.e., WUOUT may be asserted due to the local wake-up event signal WUEVNT_U or the external wake-up signal WUIN). In either case, an asserted WUOUT signal is propagated through the daisy chained logic to the wake-up logic 204. In FIG. 5, all logic is powered by VDDU, with logic 510 and 516 also capable of being powered by VDD.

While the circuitry 500 shows one embodiment that detects changes to a pad state, other embodiments are possible. In general, embodiments such as the circuitry 500 are powered by VDDU (the reduced I/O power supply) and can detect and store pad state changes that occur after VDD (the core power supply) is cut off. The circuitry 500 stores information regarding which pad experienced the state change and enables the wake-up logic 204 to correlate a pad state change with a wake-up event and respond accordingly.

For example, the wake-up logic 204 may respond to a particular wake-up event by executing one or more wake-up routines (e.g., the wake-up routines 118 of FIG. 1). The wake-up routines enable various tasks to be performed such as refreshing an image on an LCD panel, restoring the core power supply to parts of the integrated circuit 200 or restoring the core power supply to all of the integrated circuit 200. In at least some embodiments, one or more of the wake-up routines cause the integrated circuit 200 to perform a temporary task, after which the integrated circuit 200 returns to an "off" state. The temporary task may or may not require restoring the core power supply to the integrated circuit 200.
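The detect-ripple-poll sequence of the circuitry 400 and 500 can also be modeled compactly. The following C sketch is illustrative only; the arrays, function names and pad count are assumptions, not elements of the disclosure.

```c
/* Behavioral model of the change-detect path of circuitry 500 and the
 * daisy-chained OR of FIG. 4: each pad XORs its current level against the
 * stored level; any set event latch ripples a wake signal to the wake-up
 * logic, which then polls per-pad latches to find the origin. */
#include <stdio.h>

#define NUM_PADS 4

static int stored_level[NUM_PADS] = {1, 0, 0, 1}; /* latched at power-off */
static int event_latch[NUM_PADS];                 /* RS latch 514 per pad */

/* Evaluate one monitoring step; wuen mirrors the WUEN enable. */
static int scan_pads(const int *current_level, int wuen)
{
    int wuout = 0; /* daisy-chained OR result seen by the wake-up logic */
    for (int pad = 0; pad < NUM_PADS; pad++) {
        int changed = current_level[pad] ^ stored_level[pad]; /* XOR 506 */
        if (changed && wuen)
            event_latch[pad] = 1;  /* RS latch captures the event */
        wuout |= event_latch[pad]; /* OR gate 406 chain */
    }
    return wuout;
}

int main(void)
{
    int now[NUM_PADS] = {1, 1, 0, 1}; /* pad 1 toggled, e.g., a key press */
    if (scan_pads(now, 1)) {
        /* Wake-up logic polls each pad's WUEVNT output to find the origin. */
        for (int pad = 0; pad < NUM_PADS; pad++)
            if (event_latch[pad])
                printf("wake-up event originated at pad %d\n", pad);
    }
    return 0;
}
```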
FIG. 6 illustrates a method 600 in accordance with embodiments of the disclosure. As shown in FIG. 6, the method 600 comprises providing a core power supply and an I/O power supply to a chip (block 602). At block 604, the core power supply is cut off from a location external to the chip (e.g., from the power management circuitry 108). At block 606, wake-up events are monitored based on a reduced I/O power supply while the core power supply is off. If a wake-up event is not detected (determination block 608), the method 600 returns to block 606. If a wake-up event is detected (determination block 608), a determination is made as to the wake-up event type (block 610). An action is then performed based on the wake-up event type (block 612).

FIG. 7 illustrates another method 700 in accordance with embodiments of the disclosure. As shown in FIG. 7, the method 700 comprises receiving a power off request for a chip (block 702). In response, pad states are stored and the core power supply to the chip is shut off (block 704). Changes to the pad states are then monitored based on an I/O power supply provided to the chip (block 706). If a pad state does not change (determination block 708), the method 700 returns to block 706. If a pad state changes (determination block 708), a wake-up event related to the pad state change is determined (block 710). An action is then performed based on the wake-up event (block 712).

FIG. 8 illustrates another method 800 in accordance with embodiments of the disclosure. As shown in FIG. 8, the method 800 comprises providing logic to detect and store a pad state change for each of multiple chip pads (block 802). At block 804, the method 800 daisy chains the logic for each pad to propagate a pad state change. At block 806, logic of each pad is polled to determine the origin of the pad state change. The origin of the pad state change is then correlated with a wake-up event (block 808). Finally, an action is performed based on the wake-up event (block 810).

Those skilled in the art to which the invention relates will appreciate that the foregoing described embodiments are merely illustrative embodiments that are representative of the many embodiments and variations of embodiments within the scope of the claimed invention.
A method and apparatus for improving the performance of mass storage class devices accessible via a Universal Serial Bus (USB) is presented. Performance is improved by providing support in a USB host for command queuing and First-Party DMA (FPDMA) in the mass storage class devices.
CLAIMS

1. A method comprising: allocating a plurality of lists of buffers in a host to store data to be moved over a stream bulk pipe between the host and a device, each list of buffers associated with a stream identifier; and enabling transfer of data over the stream bulk pipe between the device and the list of buffers associated with the stream identifier.

2. The method of claim 1, further comprising: allowing the stream identifier for the data transfer to be selected by either the host or the device.

3. The method of claim 2, further comprising: transferring a command from the host to the device over a bulk logical pipe; and transferring status from the device to the host over a status stream bulk pipe.

4. The method of claim 3, wherein the stream bulk pipe is a data-in stream bulk pipe to transfer data from the device to the host.

5. The method of claim 3, wherein the stream bulk pipe is a data-out stream bulk pipe to transfer data from the host to the device.

6. The method of claim 1, wherein the device is a mass storage class Universal Serial Bus device.

7. An apparatus comprising: a plurality of lists of buffers in a host to store data to be moved over a stream bulk pipe between the host and a device, each list of buffers associated with a stream identifier; and control logic to enable transfer of data over the stream bulk pipe between the device and the list of buffers associated with the stream identifier.

8. The apparatus of claim 7, wherein the control logic is to allow the stream identifier for the data transfer to be selected by either the host or the device.

9. The apparatus of claim 8, wherein the control logic is to transfer the command from the host to the device over a standard bulk logical pipe and to receive status from the device over a status stream bulk pipe.

10. The apparatus of claim 7, wherein the stream bulk pipe is a data-in stream bulk pipe to transfer data from the device to the host.

11. The apparatus of claim 7, wherein the stream bulk pipe is a data-out stream bulk pipe to transfer data from the buffer to the device.

12. The apparatus of claim 7, wherein the device is a mass storage class Universal Serial Bus (USB) device.

13. A system comprising: a dynamic random access memory; a plurality of lists of buffers in a host to store data to be moved over a stream bulk pipe between the host and a device, each list of buffers associated with a stream identifier; and control logic to enable transfer of data over the stream bulk pipe between the device and the list of buffers associated with the stream identifier, the control logic to allow the stream identifier for the data transfer to be selected by either the host or the device.

14. The system of claim 13, wherein the host is a Universal Serial Bus (USB) host and the device is a mass storage class USB device.
15. A method comprising: receiving an Advanced Technology Attachment (ATA) command from a host over a command pipe, the ATA command encapsulated in a universal serial bus protocol packet, the universal serial bus protocol packet including an identifier identifying a list of buffers allocated in the host to store data associated with the ATA command; storing the ATA command and associated identifier; processing the ATA command by transferring data over a data stream bulk pipe between a storage medium in the device and the list of buffers selected by the device or the host, the list of buffers associated with the stream identifier; and forwarding ATA command status and the associated identifier to the host encapsulated in a universal serial bus protocol packet over a status stream bulk pipe.

16. The method of claim 15, wherein another received ATA command is processed prior to processing the stored ATA command.

17. The method of claim 15, wherein first-party direct memory access is used to transfer the data over the data stream bulk pipe.

18. The method of claim 15, wherein the host is a Universal Serial Bus (USB) host and the device is a mass storage class USB device.

19. A method comprising: receiving a Small Computer Systems Interface (SCSI) command from a host over a command pipe, the SCSI command encapsulated in a universal serial bus protocol packet, the universal serial bus protocol packet including an identifier identifying a list of buffers allocated in the host to store data associated with the SCSI command; storing the SCSI command and associated identifier; processing the SCSI command by transferring data over a data stream bulk pipe between a storage medium in the device and the list of buffers selected by the device or the host, the list of buffers associated with the stream identifier; and forwarding SCSI command status and the associated identifier to the host encapsulated in a universal serial bus protocol packet over a status stream bulk pipe.

20. The method of claim 19, wherein another received SCSI command is processed prior to processing the stored SCSI command.
METHOD AND APPARATUS FOR UNIVERSAL SERIAL BUS (USB) COMMAND QUEUING

FIELD

This disclosure relates to Universal Serial Bus (USB) and in particular to mass storage class USB devices.

BACKGROUND

The Universal Serial Bus (USB) is a serial bus standard that supports data exchange between a host computer and a plurality of simultaneously accessible devices such as peripherals which may be external to the host computer. USB devices include human interface devices, for example, mouse, keyboard, tablet and game controller; imaging devices, for example, scanner, printer and camera; and storage devices, for example, Compact-Disk Read Only Memory (CD-ROM), floppy drive and Digital Video Disk (DVD).

A USB host initiates all data transfers to/from the USB devices accessible via the physical USB. A data transfer (transaction) is initiated when the host controller sends a USB packet that identifies the type and direction of the data transfer, the address of the USB device and an endpoint number in the device. An endpoint is a uniquely identifiable portion of a USB device that is the terminus of a communication flow between the USB host and the USB device. The endpoint direction may be IN (to host) or OUT (from host).

Data and control exchange between the USB host and the USB device is supported as a set of either unidirectional or bidirectional logical pipes. A logical pipe is a logical abstraction between the USB host and an endpoint in a USB device to exchange data and control packets between the USB host and the USB device. The USB device may transfer data over a plurality of logical pipes (pipe bundle) to a host; for example, there may be a separate unidirectional logical pipe for transporting data to an OUT endpoint in the USB device and another unidirectional logical pipe for transporting data to the USB host from an IN endpoint in the USB device.

Command sets from existing industry-standard storage protocols may be used to communicate between a USB host and a mass storage class USB device, for example, the Small Computer System Interface (SCSI) protocol. The SCSI protocol is a set of standards for transferring data between host systems and devices, such as, storage devices. SCSI defines communication between an initiator (for example, a host) and a target (for example, a device), with the initiator sending a command to the target. SCSI commands are sent from the initiator to the target encoded in a Command Descriptor Block (CDB). The CDB includes an operation code and command-specific parameters. The SCSI commands include read commands and write commands. After completion of a data transfer (for example, a transfer of write data to the target or a transfer of read data to the initiator), the target returns a status code indicating whether the command was successfully completed.

USB communicates with mass storage class USB devices by encapsulating SCSI commands in a USB wrapper (header) of a USB packet. For example, the command sets used by the USB host may be those defined by SCSI Primary Commands - 2 (SPC-2). A pair of unidirectional logical pipes is configured for transferring the SCSI CDB, SCSI status code and the data exchanged between the host and the mass storage class USB device. An OUT pipe (between the USB host and an endpoint in the USB device) is configured to transfer commands and data to the device. An IN pipe (between an endpoint in the USB device and the USB host) is configured to transfer data and status from the device to the USB host. Each of the logical pipes is associated with a logical buffer in the host (initiator) for storing data to be transferred over the USB.
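To make the encapsulation idea concrete, the following C sketch shows what wrapping a SCSI CDB in a host-side command structure might look like. The field names and layout are invented for exposition; they are not the wire format defined by the USB mass storage class specification or by this disclosure.

```c
/* Illustrative sketch of encapsulating a SCSI CDB in a USB wrapper. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct usb_cmd_wrapper {
    uint32_t tag;         /* matches a command with its later status */
    uint32_t data_length; /* bytes expected on the data pipe */
    uint8_t  direction;   /* 0 = OUT (to device), 1 = IN (to host) */
    uint8_t  cdb_length;  /* valid bytes in cdb[] */
    uint8_t  cdb[16];     /* SCSI Command Descriptor Block */
};

/* Build a READ(10) CDB for one 512-byte block at logical block address lba. */
static struct usb_cmd_wrapper make_read10(uint32_t tag, uint32_t lba)
{
    struct usb_cmd_wrapper w;
    memset(&w, 0, sizeof w);
    w.tag = tag;
    w.data_length = 512;
    w.direction = 1;  /* read data flows IN, device to host */
    w.cdb_length = 10;
    w.cdb[0] = 0x28;  /* READ(10) operation code */
    w.cdb[2] = (uint8_t)(lba >> 24);
    w.cdb[3] = (uint8_t)(lba >> 16);
    w.cdb[4] = (uint8_t)(lba >> 8);
    w.cdb[5] = (uint8_t)lba;
    w.cdb[8] = 1;     /* transfer length: one block */
    return w;
}

int main(void)
{
    struct usb_cmd_wrapper w = make_read10(1, 0x1000);
    printf("opcode=0x%02x length=%u\n", w.cdb[0], (unsigned)w.data_length);
    return 0;
}
```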
BRIEF DESCRIPTION OF THE DRAWINGS

Features of embodiments of the claimed subject matter will become apparent as the following detailed description proceeds, and upon reference to the drawings, in which like numerals depict like parts, and in which:

Fig. 1 is a block diagram of an embodiment of a system that includes a USB host and a mass storage class USB device that provides support for command queuing and/or out-of-order command processing in the mass storage class USB device according to the principles of the present invention;

Fig. 2 is a block diagram of the client side of a standard bulk pipe in the pipe bundle shown in Fig. 1;

Fig. 3 is a block diagram of an embodiment of a stream bulk pipe according to the principles of the present invention;

Fig. 4 is a state diagram that illustrates an embodiment of a stream protocol state machine for IN or OUT stream endpoint stream servicing for a stream bulk pipe;

Fig. 5 is a block diagram of an embodiment of a system that includes a USB host and a mass storage class USB device communicating via a pipe bundle including stream pipes to provide support for command queuing and/or out-of-order command processing in the mass storage class USB device;

Fig. 6 is a flow chart illustrating an embodiment of a method implemented in the device for transferring data on a logical pipe between the device and the host; and

Fig. 7 is a block diagram of a system that includes a USB host and a USB device that provides support for command queuing and/or out-of-order command processing in the USB device.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments of the claimed subject matter, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art. Accordingly, it is intended that the claimed subject matter be viewed broadly, and be defined only as set forth in the accompanying claims.

DETAILED DESCRIPTION

An example of a mass storage class USB device is a hard disk drive (hard drive, hard disk or fixed disk drive). A hard disk drive includes a spindle which holds one or more flat circular disks (platters) with magnetic surfaces that store digitally encoded data. As the platters rotate at a rotational speed, for example, 4200 revolutions per minute (RPM), 3600 RPM, 7200 RPM or 5400 RPM, a read/write head is moved along the platters to read/write data from/to the platter. A hard disk drive is a block storage device with data transferred to/from the platters (storage media) in blocks of one or more sectors. A sector stores a fixed number of bytes, for example, 512 bytes or 1024 bytes.

There are many serial storage protocol suites such as Serial Attached SCSI (SAS) and Serial Advanced Technology Attachment (SATA). A version of the SATA protocol is described in "Serial ATA: High Speed Serialized AT Attachment," Revision 1.0a, published on January 7, 2003 by the Serial ATA Working Group. A version of the SAS protocol is described in "Information Technology—Serial Attached SCSI-1.1," Working Draft American National Standard of International Committee For Information Technology Standards (INCITS) T10 Technical Committee, Project T10/1562-D, Revision 1, published Sep. 18, 2003, by American National Standards Institute (ANSI).

Typically, storage devices such as hard disk drives support queuing of multiple commands in the hard disk drive. As multiple commands received from the host may be queued in the hard disk drive, the hard disk drive may reorder the execution of the queued commands. For example, read commands and write commands stored in the command queue may be serviced in order of proximity to the current read/write head position instead of in the order that the commands are received. The ability to re-order commands stored in the command queue in the hard disk drive may result in a reduction of average access time.
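As a rough illustration of proximity-based reordering, the C sketch below picks whichever queued command targets the logical block address closest to the current head position. The structure, helper name and distance metric are invented for exposition; real drive firmware uses far more elaborate scheduling.

```c
/* Minimal sketch of proximity-based command reordering: from a queue of
 * pending commands, service whichever target LBA is closest to the
 * current head position. */
#include <stdint.h>
#include <stdio.h>

struct queued_cmd {
    uint32_t lba;   /* target logical block address */
    int      valid; /* slot in use */
};

/* Returns the index of the closest pending command, or -1 if none. */
static int pick_next_cmd(const struct queued_cmd *q, int n, uint32_t head_pos)
{
    int best = -1;
    uint32_t best_dist = UINT32_MAX;
    for (int i = 0; i < n; i++) {
        if (!q[i].valid)
            continue;
        uint32_t d = q[i].lba > head_pos ? q[i].lba - head_pos
                                         : head_pos - q[i].lba;
        if (d < best_dist) {
            best_dist = d;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    struct queued_cmd q[] = { {100, 1}, {4000, 1}, {90, 1} };
    printf("next command index: %d\n", pick_next_cmd(q, 3, 95));
    return 0;
}
```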
A pair of unidirectional logical pipes is associated with a mass storage class USB device, an OUT logical pipe and an IN logical pipe. The SCSI Commands and Write Data are sent from the host to the device on the OUT logical pipe, and SCSI Completion Status and Read Data are sent from the device to the host on the IN logical pipe. Each logical pipe is associated with a single logical buffer in memory in the host controller. Thus, a USB host may only issue a single command at a time to a hard disk drive. As only one command can be outstanding at a time, the command queuing and/or out-of-order command functions supported by the storage protocol in the hard disk drive cannot be used.

In an embodiment of the present invention, a USB host controller in a host computer supports management of a plurality of logical buffers per logical pipe to allow mass storage device command queuing and/or out-of-order command processing in a mass storage class USB device that is accessible over the USB physical bus.

Fig. 1 is a block diagram of an embodiment of a system 100 that includes a USB host 102 and a mass storage class USB device 104 that provides support for command queuing and/or out-of-order command processing in the mass storage class USB device 104 according to the principles of the present invention. The USB host 102 includes a USB host controller 106, USB system 108 and a client 110. The USB device 104 includes a USB bus interface 112, a USB logical device 114 and a function 116. The USB logical device 114 includes one or more USB endpoints. In an embodiment the USB device 104 is a mass storage class USB device, for example, a disk drive, Digital Video Disk (DVD) drive, compact disk (CD) drive, Redundant Array of Independent Disks (RAID), tape drive or other mass storage class storage device.

The USB host 102 communicates with the USB logical device 114 in the USB device 104 over the USB physical bus 128 via one of a set of communication flows (logical pipes) 126, also referred to as a pipe bundle 124. As discussed earlier, a logical pipe is a logical abstraction between the USB host and an endpoint in a USB device to exchange data and control packets between the USB host and the USB device.

The fundamental element of communication on the USB physical bus 128 is a USB packet that includes a start, information and an end. The packet information may include 1 to 1025 bytes with the first byte storing a packet identifier. There are four categories of packet identified by the packet identifier: token, data, handshake and special. Token packets are used to set up data packets which are acknowledged by handshake packets. A handshake packet indicates the status of a data transaction. Handshake packet types include ACK, which indicates successful reception of a token or data packet, and NACK, which indicates that the receiver cannot accept the packet because, for example, the receiver may be busy or may not have resources to handle the packet.

A USB logical device 114 has a plurality of independent endpoints.
An endpoint in the USB logical device 114 is the terminus of a communication flow between the USB host 102 and the USB device 104. Each endpoint may have one direction of data flow (that is, IN from device to host or OUT to device from host). A USB pipe (logical pipe) 126 is an association between an endpoint on a USB device and the client 110 in the USB host 102. A logical pipe 126 represents the ability to move data between a buffer in the client 110 in the USB host 102 and the endpoint in the USB device 104. The client 110 in the USB host 102 includes memory, a portion of which is allocated for each endpoint to provide a logical buffer to store data to be transferred (moved to or from) the USB device 104. There is a 1:1 mapping of a logical buffer to a USB endpoint associated with a pipe 126.

As shown in Fig. 1, the functions performed by the system 100 may be divided into a plurality of layers: a USB bus interface layer 122, a USB device layer 120 and a USB function layer 118. The USB bus interface layer 122 provides physical connectivity between the USB host 102 and the USB device 104. The USB device layer 120 performs generic USB operations. The USB function layer 118 provides additional capabilities to the USB host 102 and the USB device 104. At the function layer 118, the USB provides a communication service between the client 110 in the USB host 102 and the function 116 in the USB device 104. The client 110 requests that data be moved across the USB over one of the set of logical pipes ("pipe bundle") 124. The data is physically transferred over the physical USB bus 128 by the interface layer 122. At the interface layer 122, dependent on the direction of transfer, either the host controller 106 or the USB bus interface 112 packetizes the data prior to sending it across the USB physical bus 128.

In an embodiment of the invention, at the function layer 118, a pipe bundle 124 having four logical pipes 126 is defined by the USB device 104 to move SCSI commands, data and status between the client 110 and the function 116 in the USB device 104. Each of the logical pipes in the pipe bundle 124 is configured as a "bulk transfer" type pipe (bulk pipe), that is, a pipe that supports the transfer of large amounts of data at variable times that can use any available bandwidth. A bulk pipe is a unidirectional stream pipe (IN or OUT). A standard bulk pipe is described in the USB 2.0 standard. A stream bulk pipe will be described later in conjunction with Fig. 3. A stream bulk pipe provides the ability to move a stream of data between the host and a device using a stream protocol that allows out-of-order data transfers for mass storage device command queuing.

The four logical pipes 126 in the pipe bundle 124 are defined as follows: (1) a logical pipe between the client and a standard bulk OUT endpoint for sending commands to the mass storage class USB device 104; (2) a logical pipe between the client and a stream bulk IN endpoint for data transfers associated with read commands from the mass storage class USB device 104 to the USB host 102; (3) a logical pipe between the client and a stream bulk OUT endpoint for data transfers associated with write commands from the USB host to the mass storage class USB device; and (4) a logical pipe between the client 110 and a stream bulk IN endpoint for transfers associated with command completions. In one embodiment, the logical pipe associated with command completions is a standard Bulk IN pipe.
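The four-pipe bundle above can be written down as a simple configuration table. The enum and struct in the following C sketch are invented for exposition; they are not defined by the USB specification or by this disclosure.

```c
/* Illustrative encoding of the four-pipe bundle 124. */
#include <stdio.h>

enum pipe_kind { STANDARD_BULK, STREAM_BULK };
enum pipe_dir  { DIR_OUT, DIR_IN };

struct pipe_def {
    const char    *role;
    enum pipe_kind kind;
    enum pipe_dir  dir;
};

/* Pipe bundle 124: command OUT, data IN, data OUT, status IN. */
static const struct pipe_def pipe_bundle[4] = {
    { "Command OUT", STANDARD_BULK, DIR_OUT }, /* send commands to device */
    { "Data IN",     STREAM_BULK,   DIR_IN  }, /* read-command data       */
    { "Data OUT",    STREAM_BULK,   DIR_OUT }, /* write-command data      */
    { "Status IN",   STREAM_BULK,   DIR_IN  }, /* command completions     */
};

int main(void)
{
    for (int i = 0; i < 4; i++)
        printf("%s: %s %s\n", pipe_bundle[i].role,
               pipe_bundle[i].kind == STREAM_BULK ? "stream bulk"
                                                  : "standard bulk",
               pipe_bundle[i].dir == DIR_IN ? "IN" : "OUT");
    return 0;
}
```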
Fig. 2 is a block diagram of the client 110 side of a standard bulk pipe 200 in the pipe bundle 124 shown in Fig. 1. The device 104 sees a single logical buffer in the host 102. A buffer list 204 allows the logical buffer to be defined as a plurality of physical buffers 206. The standard bulk pipe 200 includes an endpoint 202 that includes a buffer list 204, which defines a set of buffers 206 in memory in the host 102 that data transferred over the USB physical bus 128 is moved to or from. The standard bulk pipe 200 is used to transfer data to/from (between) a standard endpoint 202 in the USB device 104 from/to (between) one or more buffers 206 in the buffer list 204 in the client 110 in the USB host 102. The buffer list 204 provides pointers to each of the buffers 206 allocated for the standard bulk pipe 200. The data stored in buffers 206 referenced by the buffer list 204 is transferred sequentially. The direction of transfer of the standard bulk pipe 200 is initialized as either IN or OUT and the standard bulk pipe 200 uses the buffer list 204 to transfer data sequentially in the respective direction, that is, IN or OUT to/from the buffers 206 in memory in the host 102. In an embodiment, the standard bulk pipe 200 is used to transfer SCSI commands to the mass storage class USB device 104. In another embodiment, the standard bulk pipe 200 is used to transfer Advanced Technology Attachment (ATA) commands to the mass storage class USB device 104.

In an embodiment of the invention, in addition to a standard bulk pipe that defines a single buffer list 204 per USB endpoint, a stream bulk pipe may be defined between a USB device 104 and a USB host 102 to define multiple buffer lists per USB endpoint, which enables out-of-order command processing in the USB device 104.

Fig. 3 is a block diagram of an embodiment of a stream bulk pipe 300 according to the principles of the present invention. The stream bulk pipe 300 includes a stream array 304 and a stream endpoint 302. The stream array 304 includes a set of buffer lists 204 associated with a stream endpoint 302. A stream defines an association between a USB data transfer and a buffer list 204 in a stream array 304 and has an associated stream identifier.

Referring to Fig. 2, the standard bulk pipe 200 is associated with a single buffer list 204 in memory in the host 102, that is, all data transferred over the standard bulk pipe 200 is moved to or from a single set of memory buffers. Returning to Fig. 3, in contrast to the standard bulk pipe 200 shown in Fig. 2, a stream array 304 associated with the stream bulk pipe 300 allows a plurality of memory buffer lists 204 to be associated with a single stream endpoint 302, and thus allows the USB device to route data to/from a particular memory buffer list 204. Each buffer list 204 in the stream array 304 is identified by a unique stream identifier. The stream identifier allows multiple USB host processes to access the same USB endpoint, and also allows the host controller 106 in the host 102 and the stream endpoint in the device 104 to transfer data directly between their respective address spaces (endpoint to/from host buffer(s)). The stream identifier may be stored in a stream identifier (SID) field in a USB packet header to distinguish the stream that is associated with the USB packet. The stream identifier may also be used as an index into the stream array 304 to select one of the buffer lists 204 from the stream array 304. One entry in the stream array 304 is provided for each stream supported by a stream endpoint 302. Through the use of a stream pipe and associated stream identifiers, a USB device 104 can effectively re-order commands because the USB device can select one of the plurality of buffer lists associated with the stream endpoint based on a particular stream identifier to transfer to/from.
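A minimal sketch of the stream array lookup follows: the stream identifier carried in a packet header indexes into an array of buffer lists, selecting the host memory a transfer targets. The types and the stream count are invented for illustration.

```c
/* Sketch of the stream array 304: one buffer list per stream identifier. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_STREAMS 16

struct buffer_list {
    void   **buffers; /* pointers to physical buffers 206 */
    size_t   count;
    int      valid;   /* host has posted this list */
};

struct stream_array {
    struct buffer_list lists[MAX_STREAMS]; /* one entry per stream */
};

/* Resolve a SID to its buffer list; NULL means the host has not yet
 * allocated buffers for that stream (the transfer would be rejected). */
static struct buffer_list *lookup_stream(struct stream_array *sa, uint16_t sid)
{
    if (sid >= MAX_STREAMS || !sa->lists[sid].valid)
        return NULL;
    return &sa->lists[sid];
}

int main(void)
{
    static struct stream_array sa;
    sa.lists[3].valid = 1;
    printf("%s\n", lookup_stream(&sa, 3) ? "stream 3 ready" : "no buffers");
    return 0;
}
```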
In an embodiment, both the host 102 and the device 104 include a stream protocol state machine (SPSM) for managing stream endpoints in stream pipes. The stream protocol state machine will be discussed in conjunction with Fig. 4.

Fig. 4 is a state diagram that illustrates an embodiment of a stream protocol state machine 400 for IN or OUT stream endpoint stream servicing for a stream bulk pipe 300. As discussed earlier, the data transfer direction for an IN data transfer is to the host, that is, physical buffers in memory in the host 102 receive function data from the device 104, and the data transfer direction for an OUT data transfer is from the host 102. As discussed earlier, data packets transferred between the host 102 and a stream endpoint in the device 104 include a stream identifier field in a header of the packet to identify a stream associated with the data transfer. In addition, transaction packets are defined to pass stream identifiers between the host 102 and the device 104. In an embodiment, four packet types are provided: DP, ACK, NRDY and ERDY. For an IN pipe, the host generates ACK packets and the device generates DP packets. For an OUT pipe, the host generates DP packets and the device generates ACK packets. A "host transaction packet" refers to an ACK or a DP packet depending on whether the pipe is an IN pipe or an OUT pipe, respectively. A "device transaction packet" refers to a DP or an ACK packet depending on whether the pipe is an IN pipe or an OUT pipe, respectively. A "USB packet" is a generic reference to any packet type.

As shown in Fig. 4, the state machine has five states labeled as follows: disabled, prime pipe, idle, start stream and move data. The disabled state is the initial state of the stream protocol state machine after the stream endpoint in the device 104 has been configured by the host 102. A transition to another state may be initiated by either the host 102 or the device 104 through the use of a USB packet.

The prime pipe state is initiated by the host 102 to inform the device 104 that a buffer list 204 has been added or modified by the host 102. After posting a buffer list 204 to the stream endpoint 302, the host 102 sends a host transaction packet that results in a transition of the stream protocol state machine 400 to the prime pipe state. The device 104 responds with an NRDY packet that results in a transition to the idle state.

While the stream protocol state machine 400 is in the idle state, there is no stream identifier selected. In the idle state, the stream protocol state machine 400 waits for a host initiated transition to the prime pipe state or the move data state, or for a device initiated transition to the start stream state. As discussed earlier, the host initiated and device initiated transitions are performed through the use of transaction packets that originate in either the host 102 or the device 104. The host and device initiated transitions provide the current stream identifier so that the stream pipe can begin moving data to/from physical buffers 206 in memory in the host 102 that are associated with the stream identifier.
The current stream identifier is used to identify the buffer list that is used by the host to move data. Thus, the current stream identifier may be selected by either the host or the device, dependent on whether the data transfer is initiated by the host or the device.

While in the idle state, the device 104 may issue an ERDY packet to transition the stream protocol state machine to the start stream state. The stream identifier that is passed to the host in the ERDY packet is used by the host to initiate a data transfer (IN or OUT) to/from the buffer(s) 206 associated with the stream identifier. If a valid buffer list 204 exists for the stream identifier, the host 102 initiates the data transfer, and the stream protocol state machine 400 transitions to the move data state. If a valid buffer list does not exist for the stream identifier, the host 102 sends a transaction packet to the device 104 indicating that the data transfer has been rejected and the stream protocol state machine returns to the idle state. If the device 104 is ready to transfer data for the stream identifier, the device 104 acknowledges the transfer with a transaction packet, and the stream protocol state machine 400 transitions to the move data state. In the move data state, the transfer continues until the host 102 exhausts the buffer list 204 associated with the stream identifier, or the device 104 completes the data transfer associated with the stream identifier. In either case, the stream protocol state machine 400 transitions to the idle state.

In contrast to the host initiated data transfer that results in a transition from the idle state to the move data state, the start stream state is initiated by the device 104 to select a stream (list of buffers associated with the stream endpoint) and start a data transfer. If the device selected stream is accepted by the host, the stream protocol state machine transitions to the move data state to transfer data for the current stream identifier. If the device selected stream is rejected by the host 102, the stream protocol state machine 400 returns to the idle state.

In the move data state, data for a particular stream associated with a stream endpoint 302 in a device 104 is transferred between the host 102 and the device 104. A current stream identifier that is used to select the stream for which data is being moved is maintained by both the host 102 and the device 104. Valid stream identifier values are encoded by the host 102 and passed to the device 104. These values may be passed to the device through an out-of-band, device class defined method. For example, valid stream identifier values are passed to the device 104 with the SCSI command that is sent over the logical bulk pipe in the pipe bundle 124 allocated for sending SCSI commands to the mass storage class device 104. These values are out-of-band because they are not passed to the device on the stream bulk pipe. In other embodiments, the stream identifiers may be passed to the device on an OUT stream bulk pipe. The data is transferred to the buffer 206 associated with the stream identifier. The use of the stream identifier allows data to be transferred for streams in any order because there is a separate buffer list 204 assigned per stream identifier. Thus, two processes A and B may each be assigned a unique stream identifier, allowing the commands for each of the respective processes to be completed out-of-order.

In the move data state, data associated with the currently selected stream is transferred IN or OUT.
If the move data state is entered by a host initiated stream selection, the stream is selected by the host 102. If the move data state is entered from the start stream state, the stream is selected by the device 104. The stream protocol state machine 400 transitions back to the idle state after the data transfer for the stream is complete, or if the host 102 or the device 104 terminates the data transfer for the stream. The transition to the idle state invalidates the selected stream for the data transfer over the stream bulk pipe.

A stream is selected by a stream identifier value that is stored in a stream identifier field. The stream identifier field may be included in a header of a data packet or an in-band control packet. For example, the stream identifier may be stored in a field (one or more bits) in the packet header that is not currently used by the USB 3.0 protocol, or an Extended transaction format USB 2.0 packet. In an embodiment, the stream identifier value is associated with the SCSI command. Each SCSI command sent to the device includes a stream identifier, and the device 104 uses that stream identifier to identify the data packets and status packets moved to/from the host 102 that are associated with the SCSI command. In an embodiment for the ATA storage protocol, the stream identifier is associated with an ATA command. Reserved values of the stream identifier may indicate that the stream identifier is not valid; for example, the Prime identifier is reserved for transitioning in and out of the prime pipe state, and the No Stream identifier indicates the host rejection of a device initiated stream selection.

The stream protocol state machine 400 may transition to the disabled state upon detection of an error in any other state. This error condition may be handled in the disabled state prior to transitioning to another state.
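The five states and the principal transitions described above lend themselves to a compact transition function. The C sketch below is illustrative, not normative: the enum names and event set are invented, and error handling is reduced to a single event.

```c
/* Sketch of the five-state stream protocol state machine of Fig. 4. */
#include <stdio.h>

enum spsm_state { DISABLED, PRIME_PIPE, IDLE, START_STREAM, MOVE_DATA };

enum spsm_event {
    EV_HOST_PRIME,  /* host transaction packet with SID = Prime     */
    EV_DEV_NRDY,    /* device NRDY response to a prime              */
    EV_HOST_SELECT, /* host transaction packet selecting "stream n" */
    EV_DEV_ERDY,    /* device ERDY proposing "stream n"             */
    EV_HOST_ACCEPT, /* host accepts a device-proposed stream        */
    EV_HOST_REJECT, /* host transaction packet with SID = No Stream */
    EV_XFER_DONE,   /* buffer list exhausted or device ends transfer*/
    EV_ERROR        /* any detected error                           */
};

static enum spsm_state spsm_step(enum spsm_state s, enum spsm_event e)
{
    if (e == EV_ERROR)
        return DISABLED;  /* errors are handled in the disabled state */
    switch (s) {
    case DISABLED:   return e == EV_HOST_PRIME ? PRIME_PIPE : s;
    case PRIME_PIPE: return e == EV_DEV_NRDY   ? IDLE       : s;
    case IDLE:
        if (e == EV_HOST_PRIME)  return PRIME_PIPE;
        if (e == EV_HOST_SELECT) return MOVE_DATA;    /* host-initiated   */
        if (e == EV_DEV_ERDY)    return START_STREAM;
        return s;
    case START_STREAM:
        if (e == EV_HOST_ACCEPT) return MOVE_DATA;    /* device-initiated */
        if (e == EV_HOST_REJECT) return IDLE;
        return s;
    case MOVE_DATA:  return e == EV_XFER_DONE  ? IDLE       : s;
    }
    return s;
}

int main(void)
{
    enum spsm_state s = DISABLED;
    enum spsm_event trace[] = { EV_HOST_PRIME, EV_DEV_NRDY,
                                EV_DEV_ERDY, EV_HOST_ACCEPT, EV_XFER_DONE };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        s = spsm_step(s, trace[i]);
    printf("final state: %d (2 = IDLE)\n", s);
    return 0;
}
```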
Fig. 5 is a block diagram of an embodiment of a system 500 that includes a USB host 102 and a mass storage class USB device 104 communicating via a pipe bundle 124 including stream pipes to provide support for command queuing and/or out-of-order command processing in the mass storage class USB device 104. The pipe bundle 124 includes a standard bulk pipe ("Command OUT") 508 between the host 102 and a standard bulk OUT endpoint 510 for sending commands to the mass storage class USB device 104. The pipe bundle 124 also includes a first stream bulk pipe ("Data IN") 506-1 between the host 102 and a Stream Bulk IN endpoint 512 for transferring data associated with read commands from the mass storage class USB device to the host 102, and a second stream bulk pipe ("Data OUT") 506-2 between the host 102 and a Stream Bulk OUT endpoint 514 for transferring data associated with write commands from the USB host 102 to the mass storage class USB device 104. The pipe bundle 124 also includes a third stream bulk pipe ("Status IN") 506-3 between the host 102 and a Stream Bulk IN endpoint 516 for transferring response data associated with command completions. Each of the respective stream bulk pipes 506-1, 506-2, 506-3 has a corresponding stream array 520-1, 520-2, and 520-3 in memory in the host 102.

Upon receiving a command (for example, a SCSI command or an ATA command) from a host process to access data stored in the mass storage class USB device 104, the host 102 assigns a stream identifier to the command. The stream identifier is used to identify the command, and the response (for example, SCSI status) and data buffers in memory in the host 102 allocated for the command. The command is stored in a buffer in the buffer list 518 associated with the standard bulk pipe 508 assigned to logically transport commands from the host 102 to the mass storage class USB device 104. Dependent on the command type (that is, whether the command results in a data transfer to or from the mass storage class USB device 104), a list of buffers associated with an entry corresponding to the stream identifier is allocated for the command in one of the stream arrays 520-1, 520-2 associated with a stream bulk pipe for data transfer. An entry in the stream array 520-3 associated with the stream bulk pipe for responses 506-3 is also allocated for the command.

The mass storage class USB device 104 may queue commands received from the host 102 and may process the queued commands in any order. When the mass storage class USB device 104 is ready for data transfer (IN or OUT) over one of the stream bulk pipes ("Data IN", "Data OUT") 506-1, 506-2, the data is transferred to/from the data buffer(s) in the data buffer list in the stream array allocated for the particular command based on the stream identifier provided by the command.

The use of stream identifiers and stream arrays 520-1, 520-2, 520-3 allows a plurality of host software processes to queue commands in the same stream bulk endpoint, and also allows the host controller 106 in the host 102 and an endpoint in the device 104 to transfer data using direct memory accesses (DMA) between their respective address spaces. In addition, a device may initiate a host DMA operation for a data transfer without any host intervention. For example, the device may select the DMA context (that is, the buffer list) by sending a transaction packet with a stream identifier to the host to change state to the start stream state. The stream identifier specifies the location of the data in the host to/from which the data is to be transferred. Based on the stream identifier value, the host controller 106 loads the appropriate buffer list pointer (address) in the stream array into a DMA controller in the host and the DMA transfer proceeds without any processor intervention. This mechanism provides a means by which a USB device can effectively re-order commands because it can select a specific buffer in the host to transfer data to, based on a stream identifier. For example, in an embodiment for a mass storage class device that supports the ATA or SATA storage protocol, the stream identifier provides support for first-party direct memory access (FPDMA) to allow the device to transfer data directly between the device and buffers in the host.

In addition to allowing support for command queuing in the mass storage class USB device 104, the use of stream identifiers also allows a specific core, in a system having a plurality of cores, to be selected for processing the completion of the command. A variety of host and device stream service algorithms known to those skilled in the art, for example, stream prioritization schemes, may be used to select the next stream to be serviced. These schemes are beyond the scope of the present invention.
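The host-side issue path just described might look roughly like the following C sketch: assign a stream identifier, allocate buffer-list entries on the data and status stream pipes, then post the command on the Command OUT pipe. All names and the allocation scheme are assumptions made for illustration.

```c
/* Sketch of host-side command issue per the Fig. 5 description. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_STREAMS 16

struct host_pipes {
    bool data_in_alloc[MAX_STREAMS];  /* stream array 520-1 entries in use */
    bool data_out_alloc[MAX_STREAMS]; /* stream array 520-2 entries in use */
    bool status_alloc[MAX_STREAMS];   /* stream array 520-3 entries in use */
    uint16_t next_sid;
};

/* Issue one command; is_read selects the data pipe. Returns the SID the
 * device will later present to select host buffers, or -1 on conflict. */
static int issue_command(struct host_pipes *hp, bool is_read)
{
    uint16_t sid = hp->next_sid++ % MAX_STREAMS;
    bool *data = is_read ? hp->data_in_alloc : hp->data_out_alloc;
    if (data[sid] || hp->status_alloc[sid])
        return -1;                /* stream already in flight */
    data[sid] = true;             /* buffer list for the data transfer */
    hp->status_alloc[sid] = true; /* entry for the command completion  */
    /* Here the command, tagged with sid, would be written into buffer
     * list 518 and sent on the standard bulk Command OUT pipe. */
    return sid;
}

int main(void)
{
    struct host_pipes hp = {0};
    printf("assigned sid=%d\n", issue_command(&hp, true));
    return 0;
}
```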
Fig. 6 is a flow chart illustrating an embodiment of a method implemented in the device for transferring data on a logical pipe between the device 104 and the host 102. The data transfer may be over an IN pipe or an OUT pipe. In an IN pipe, endpoint buffers 206 in a host receive data from the device. In an OUT pipe, endpoint buffers 206 in a host store data to be transferred to the device.

At block 600, the device 104 waits for the initial endpoint buffer to be configured (assigned) for the logical pipe by the host 102. After an endpoint buffer is configured, the host issues a transaction packet to the device 104 with the stream identifier in the host transaction packet set to "prime" to inform the target that the endpoint buffer is assigned. The "prime" host transaction packet is issued the first time that endpoint buffers are assigned to the pipe by the host 102. Upon receiving the "prime" host transaction packet, processing continues with block 602.

At block 602, in response to receiving the "prime" host transaction packet, indicating that the endpoint buffer has been assigned by the host 102, the device 104 sends an NRDY packet to the host with the stream identifier set to "prime" indicating that it has recorded that the host has a stream ready to transfer data. Processing continues with block 604.

At block 604, if the device 104 has a stream ready to transfer data, the device 104 may propose a stream selection for which a data transfer may be initiated by the host 102. If the stream selection is initiated by the device 104, processing continues with block 606. If not, processing continues with block 605.

At block 605, a stream selection for which to transfer data may be initiated by the host 102. To initiate a stream selection, the host 102 issues a host transaction packet to the device 104 with the stream identifier set to "stream n". The host 102 may initiate a stream selection if the endpoint buffer that was assigned is for the last proposed stream identifier, for example, to resume a data transfer for a stream identifier that needs an additional endpoint buffer to be allocated by the host 102. If the stream selection is initiated by the host 102, processing continues with block 607. If not, processing continues with block 602.

At block 606, the device 104 either requests that it start a stream transfer to the host or that the host 102 start a stream transfer to the device. To request a stream transfer, the device 104 issues an ERDY packet to the host with the stream identifier set to "stream n" (valid stream) and a stream number greater than 0. Processing continues with block 608.

At block 607, the device 104 determines if the stream proposed by the host is ready to transfer data. If the stream is ready, processing continues with block 610. If not, processing continues with block 602.

At block 608, the device 104 waits for the host 102 to accept or reject the stream selection proposed by the device. A host transaction packet with the stream identifier set to "stream n" indicates that the host 102 has accepted the proposed stream. If the stream is accepted, processing continues with block 610. If the stream is not accepted, the device 104 receives a host transaction packet with the stream identifier field set to "no stream". In an embodiment, the host 102 rejects a stream selection by the device 104 if there are no endpoint buffers available for the device selected stream identifier. The host 102 may initiate a previously proposed stream selection after assigning endpoint buffers for the stream identifier. If the stream selection is rejected by the host, processing continues with block 602.

At block 610, the stream identifier is set at both the host end and the device end, and data is moved from/to the device to the endpoint buffers allocated for the stream on the host for the pipe. Data is moved while endpoint buffers are available for the stream in the host and the device has data to be moved to the host for the stream or the host has data to be moved to the device for the stream. The "idle" loop is comprised of blocks 602, 604, and 605. In this loop the device checks for a prime pipe transition (block 602), a device initiated transfer (block 604), or a host initiated transfer (block 605).
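A loose C rendering of the device-side servicing loop of Fig. 6 follows: wait for the prime, acknowledge it, then alternate between device-proposed and host-proposed stream selections before moving data. The transport primitives are stubbed so the control flow compiles standalone; the structure is one possible interpretation, not the flow chart itself.

```c
/* Sketch of the Fig. 6 device-side servicing loop. */
#include <stdbool.h>
#include <stdio.h>

/* Stubbed transport primitives for a self-contained sketch. */
static bool host_primed(void)              { return true;  }
static bool device_has_ready_stream(void)  { static int n; return n++ == 1; }
static bool host_proposed_stream(void)     { return false; }
static bool host_accepts(int sid)          { (void)sid; return true; }
static void move_data(int sid)             { printf("moving data, sid=%d\n", sid); }

void service_stream_pipe(void)
{
    while (!host_primed())          /* block 600: wait for endpoint buffer */
        ;
    for (int iter = 0; iter < 4; iter++) {  /* bounded loop for the sketch */
        /* block 602: acknowledge the prime with NRDY (stubbed out) */
        if (device_has_ready_stream()) {    /* block 604 */
            int sid = 1;                    /* block 606: ERDY "stream n"  */
            if (host_accepts(sid))          /* block 608 */
                move_data(sid);             /* block 610 */
        } else if (host_proposed_stream()) {/* block 605 */
            int sid = 1;
            move_data(sid);                 /* block 607 -> 610 if ready   */
        }
    }
}

int main(void)
{
    service_stream_pipe();
    return 0;
}
```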
Fig. 7 is a block diagram of a system 700 that includes a USB host controller 710 and a USB device 712 that provides support for command queuing and/or out-of-order command processing in the USB device 712. The USB device 712 includes a USB interface 730, a storage protocol function 732 and a storage medium 734. In an embodiment, the storage protocol function 732 performs SATA protocol related storage functions. In another embodiment, the storage protocol function 732 performs SCSI protocol related functions. The storage medium 734 includes one or more platters (storage medium) to store data.

The system 700 includes a processor 701, a Memory Controller Hub (MCH) 702 and an Input/Output (I/O) Controller Hub (ICH) 704. The MCH 702 includes a memory controller 706 that controls communication between the processor 701 and memory 708. The processor 701 and MCH 702 communicate over a system bus 716. In an alternate embodiment, the functions in the MCH 702 may be integrated in the processor 701 and the processor 701 coupled directly to the ICH 704.

The processor 701 may be any one of a plurality of processors such as a single core Intel® Pentium IV® processor, a single core Intel Celeron processor, an Intel® XScale processor or a multi-core processor such as Intel® Pentium D, Intel® Xeon® processor, Intel® Core® Duo processor, or any other type of processor. The memory 708 may be Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Synchronized Dynamic Random Access Memory (SDRAM), Double Data Rate 2 (DDR2) RAM, Rambus Dynamic Random Access Memory (RDRAM), or any other type of memory.

The ICH 704 may be coupled to the MCH 702 using a high speed chip-to-chip interconnect 714 such as Direct Media Interface (DMI), or any other type of chip-to-chip interface. DMI supports 2 Gigabit/second concurrent transfer rates via two unidirectional lanes. The ICH 704 may include a Universal Serial Bus (USB) host controller 710 for controlling communication with at least one mass storage class USB device 712 coupled to the ICH 704. The ICH 704 may communicate with the mass storage class USB device 712 over a USB physical bus 718 using a storage protocol such as Small Computer System Interface (SCSI) or ATA by encapsulating SCSI/ATA commands, data and SCSI/ATA status in USB packets.

An embodiment of the invention has been described for the Universal Serial Bus. However, the invention is not limited to the Universal Serial Bus; an embodiment of the invention may be used by any bus protocol that supports command queuing and out-of-order completions, or any master/slave bus protocol that supports slave initiated selection of host buffer lists/buffers.

An embodiment of the invention may also be used for core targeting of completion interrupts. For example, in an embodiment for a system that uses the Peripheral Component Interconnect (PCI), an interrupt vector may be allocated for each buffer list 204, and the buffer list selected using a stream identifier. A PCI MSI-X interrupt vector may specify a core and a vector on that core.
In a multi-core system, a SCSI command is constructed on a particular core, which means that the core's cache stores information related to the specific command. Core targeting allows the host controller to interrupt the core that initiated the command with the completion for the command, so the information stored in the core's cache can be reused. If the completion is sent to another core, additional system memory activity is incurred while the "other" core loads the information related to the command. Core targeting reduces memory and power utilization.

Alternative embodiments of the invention also include machine-accessible media containing instructions for performing the operations of the invention. Such embodiments may also be referred to as program products. Such machine-accessible media may include, without limitation, storage media such as floppy disks, hard disks, Compact Disk-Read Only Memories (CD-ROMs), Read Only Memory (ROM), and Random Access Memory (RAM), and other tangible arrangements of particles manufactured or formed by a machine or device. Instructions may also be used in a distributed environment, and may be stored locally and/or remotely for access by single or multi-processor machines.

While embodiments of the invention have been particularly shown and described with references to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of embodiments of the invention encompassed by the appended claims.
A total estimated occupancy value of a first data block of a plurality of data blocks is determined. In order to determine the total estimated occupancy value of the first data block, a total block power-on-time (POT) value of the first data block is determined. Then, a scaling factor is applied to the total block POT value to determine the total estimated occupancy value of the first data block. Whether the total estimated occupancy value of the first data block satisfies a threshold criterion is determined. Responsive to determining that the total estimated occupancy value of the first data block satisfies the threshold criterion, data stored at the first data block is relocated to a second data block of the plurality of data blocks.
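The occupancy estimate described above reduces to a small computation: the block's power-on time is scaled by the ratio of total wall-clock time to expected powered-on time, approximating how long data has actually sat on the block. The C sketch below illustrates this; the constants, names and threshold value are assumptions for exposition only.

```c
/* Sketch of the POT-based occupancy estimate and threshold check. */
#include <stdint.h>
#include <stdio.h>

#define HOURS_PER_DAY     24u
#define EXPECTED_ON_HOURS  8u /* assumed daily powered-on time */

/* total block POT = current system POT - POT when the block's initial
 * page was first written (both counters advance only while powered on). */
static uint64_t total_block_pot(uint64_t system_pot, uint64_t initial_block_pot)
{
    return system_pot - initial_block_pot;
}

/* Scaling factor = total time in a day / expected powered-on time. */
static uint64_t total_estimated_occupancy(uint64_t block_pot)
{
    return block_pot * HOURS_PER_DAY / EXPECTED_ON_HOURS;
}

int main(void)
{
    uint64_t pot = total_block_pot(/*system_pot=*/5000, /*initial=*/2000);
    uint64_t occ = total_estimated_occupancy(pot);
    printf("estimated occupancy: %llu -> %s\n", (unsigned long long)occ,
           occ > 10000 ? "relocate data" : "keep in place");
    return 0;
}
```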
1. A system comprising:
a memory component, the memory component including a plurality of data blocks; and
a processing device operatively coupled with the memory component, the processing device to:
determine a total block power-on-time (POT) value of a first data block of the plurality of data blocks;
apply a scaling factor to the total block POT value to determine a total estimated occupancy value of the first data block;
determine whether the total estimated occupancy value of the first data block satisfies a threshold criterion; and
in response to determining that the total estimated occupancy value of the first data block satisfies the threshold criterion, relocate data stored at the first data block to a second data block of the plurality of data blocks.

2. The system according to claim 1, wherein to determine the total block POT value of the first data block, the processing device is to:
in response to receiving a request to write data on an initial page of the first data block, determine an initial block POT value of the first data block, the first data block including a plurality of pages including the initial page.

3. The system according to claim 2, wherein to determine the total block POT value of the first data block, the processing device is further to:
determine a difference between a current system POT value of the system and the initial block POT value of the first data block.

4. The system according to claim 3, wherein the current system POT value of the system is incremented when the system is powered on and not incremented when the system is powered off.

5. The system according to claim 1, wherein:
the scaling factor comprises a ratio of a total amount of time in a day to an expected amount of time that the system is powered on during the day; and
to apply the scaling factor to the total block POT value, the processing device is to:
multiply the scaling factor for the plurality of data blocks by the total block POT value of the first data block.

6. The system according to claim 1, wherein:
the scaling factor comprises a ratio of an expected amount of time that the system is powered on during the day to a total amount of time in the day; and
to apply the scaling factor to the total block POT value, the processing device is to:
divide the total block POT value of the first data block by the scaling factor for the plurality of data blocks.

7. The system according to claim 1, wherein to relocate the data stored at the first data block to the second data block of the plurality of data blocks, the processing device is to:
identify a subset of the plurality of data blocks, each block in the subset having a total estimated occupancy value that satisfies the threshold criterion, the subset of the plurality of data blocks including the first data block;
relocate a first part of the subset of the plurality of data blocks in a first relocation operation; and
relocate a second part of the subset of the plurality of data blocks in a second relocation operation.
initial page of a first data block of the plurality of data blocks;In response to receiving the request to write the first data on the initial page of a plurality of pages of the first data block, determining an initial block POT value of the first data block from the POT timer;Determining a total block POT value of the first data block by determining the difference between the initial block POT value of the first data block and a current system POT value of the system from the POT timer; andDetermining a total estimated occupancy value of the first data block by applying a scaling factor to the total block POT value of the first data block.9.The system according to claim 8, wherein:The scaling factor includes the ratio of the total amount of time in a day to the expected amount of time that the system is powered on; andTo apply the scaling factor to the total block POT value to determine the total estimated occupancy value of the first data block, the processing device is configured to:Multiply the scaling factor for the plurality of data blocks by the total block POT value of the first data block.10.The system of claim 8, wherein the current system POT value of the system is incremented when the system is powered on and not incremented when the system is powered off.11.The system according to claim 8, wherein:The scaling factor includes the ratio of the expected amount of time that the system is powered on to the total amount of time in a day; andTo apply the scaling factor to the total block POT value to determine the total estimated occupancy value of the first data block, the processing device is configured to:Divide the total block POT value of the first data block by the scaling factor for the plurality of data blocks.12.The system according to claim 8, wherein the processing device is further configured to:In response to determining that the total estimated occupancy value of the first data block satisfies a threshold criterion, relocate the data stored at the first data block to a second data block of the plurality of data blocks.13.The system according to claim 12, wherein in order to relocate the data stored on the first data block to the second data block of the plurality of data blocks, the processing device is configured to:Identify a subset of the plurality of data blocks, each block in the subset having a total estimated occupancy value that meets the threshold criterion, the subset of the plurality of data blocks including the first data block;Relocate a first part of the subset of the plurality of data blocks in a first relocating operation; andRelocate a second part of the subset of the plurality of data blocks in a second relocating operation.14.A method including:Determining a total estimated occupancy value of a first data block among a plurality of data blocks in a system, the determining of the total estimated occupancy value of the first data block including:Determining a total block POT value of the first data block, andApplying a scaling factor to the total block POT value to determine the total estimated occupancy value of the first data block;Determining whether the total estimated occupancy value of the first data block satisfies a threshold criterion; andIn response to determining that the total estimated occupancy value of the first data block satisfies the threshold criterion, relocating the data stored at the first data block to a second data block of the plurality of data blocks.15.The method according to claim 14,
wherein the determination of the total block POT value of the first data block comprises:In response to receiving a request to write data on an initial page of the first data block, determining an initial block POT value of the first data block, the first data block including a plurality of pages including the initial page.16.The method according to claim 15, wherein the determination of the total block POT value of the first data block further comprises:Determining the difference between a current system POT value of the system and the initial block POT value of the first data block.17.The method of claim 16, wherein the current system POT value of the system is incremented when the system is powered on and not incremented when the system is powered off.18.The method of claim 14, wherein:The scaling factor includes the ratio of the total amount of time in a day to the expected amount of time that the system is powered on; andApplying the scaling factor to the total block POT value includes:Multiplying the scaling factor for the plurality of data blocks by the total block POT value of the first data block.19.The method of claim 14, wherein:The scaling factor includes the ratio of the expected amount of time that the system is powered on to the total amount of time in a day; andApplying the scaling factor to the total block POT value includes:Dividing the total block POT value of the first data block by the scaling factor for the plurality of data blocks.20.The method of claim 14, wherein relocating the data stored at the first data block to the second data block of the plurality of data blocks comprises:Identifying a subset of the plurality of data blocks, each block in the subset having a total estimated occupancy value that meets the threshold criterion, the subset of the plurality of data blocks including the first data block;Relocating a first part of the subset of the plurality of data blocks in a first relocating operation; andRelocating a second part of the subset of the plurality of data blocks in a second relocating operation.
Data relocation based on power-on time

Technical field

Embodiments of the present disclosure generally relate to memory subsystems and, more specifically, to relocating data stored at data blocks of a memory subsystem based on power-on time.

Background

The memory subsystem may be a storage system, such as a solid-state drive (SSD) or a hard disk drive (HDD). The memory subsystem may be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). The memory subsystem may include one or more memory components that store data. For example, the memory components may be non-volatile memory components and volatile memory components. Generally speaking, the host system can utilize the memory subsystem to store data at, and retrieve data from, the memory components.

Description of the drawings

The present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure.

FIG. 1 illustrates an example computing environment including a memory subsystem according to some embodiments of the present disclosure.

FIG. 2 is a diagram illustrating an example of power-on and power-off states of a memory subsystem and corresponding system power-on time values according to some embodiments of the present disclosure.

FIG. 3 is a flowchart of an example method for relocating data stored at a data block based on a power-on time value of a memory subsystem according to some embodiments of the present disclosure.

FIG. 4 is a flowchart of an example method for determining the total estimated occupancy value of a data block according to some embodiments of the present disclosure.

FIG. 5 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

Detailed description

Aspects of the present disclosure relate to relocating data stored at data blocks based on the power-on time of the memory subsystem. A memory subsystem is also referred to hereinafter as a "memory device". An example of a memory subsystem is a storage device coupled to a central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of storage devices include solid-state drives (SSDs), flash drives, universal serial bus (USB) flash drives, and hard disk drives (HDDs). Another example of a memory subsystem is a memory module coupled to the CPU via a memory bus. Examples of memory modules include dual in-line memory modules (DIMMs), small outline DIMMs (SO-DIMMs), non-volatile dual in-line memory modules (NVDIMMs), and the like. In some embodiments, the memory subsystem may be a hybrid memory/storage subsystem. Generally speaking, a host system can utilize a memory subsystem that includes one or more memory components. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

The memory components in the memory subsystem may include memory cells, which may include one or more memory pages (also referred to herein as "pages") for storing one or more bits of binary data corresponding to data received from the host system. One or more memory cells may be grouped together to form a data block to store data in the memory component.
Since a memory cell stores data in the form of electric charge, it may lose charge over time due to changes in the external environment (for example, temperature) and/or the number of program and erase cycles (hereinafter referred to as "P/E cycles") performed on the corresponding memory cell. This loss of charge may cause a shift in the threshold voltage and may therefore cause incorrect reads of the data stored in the memory cells, and thus of the data blocks, in the memory component.

Conventionally, the memory subsystem periodically (for example, whenever the memory subsystem is powered on, or every day) scans the data stored at each data block to determine whether the data needs to be relocated to a new data block of the memory subsystem (for example, whether enough space is left at the corresponding data block). The higher the frequency of the scanning operations, the higher the probability that the data will be relocated. Therefore, such a data relocation scheme can unnecessarily involve a large number of P/E cycles. In addition, such a data relocation scheme does not accurately consider how long the data has been stored. This matters because a large amount of data can be written to a data block at once (in which case the charge loss experienced by the memory cells may only become a problem after a longer period of time). Therefore, a conventional memory subsystem that scans the data every time the memory subsystem is powered on, or scans the data every day, and relocates the data whenever the condition is met, may waste a large number of P/E cycles every year. Such conventional approaches to data relocation are thus undesirable, because each memory component has a limited number of available P/E cycles. Each P/E cycle consumes a certain amount of the memory component's endurance, and after a certain number of cycles the memory component may wear out and become unusable. Accordingly, the higher the frequency of P/E cycles used for data relocation, the shorter the life of the memory subsystem becomes.

Aspects of the present disclosure address the above and other deficiencies with a memory subsystem that relocates data based on an estimate, derived from the power-on time of the memory subsystem, of the time period for which data has been stored at a page (for example, the first or last page) of a data block, so as to minimize the number of P/E cycles used for data relocation. In one embodiment, the memory subsystem may determine the total estimated occupancy value of a data block among one or more data blocks in the memory subsystem. The total estimated occupancy value may also be referred to as the total estimated occupancy time, the total estimated lifetime value, or any other suitable name referring to the estimated time period for which data has been stored at a page of the data block. To determine the total estimated occupancy value of the data block, the memory subsystem may determine the total block power-on time value of the data block and apply a scaling factor to the total block power-on time value. Then, the memory subsystem may determine whether the total estimated occupancy value of the data block meets a threshold criterion (for example, whether the total estimated occupancy value of the data block exceeds a predetermined threshold, such as 2 months), and if so, relocate the data stored at the data block (i.e., the entire data) to another data block.
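As a rough sketch of this scheme in Python (illustrative only; the function and parameter names are hypothetical, and the 1440-hour threshold is simply the 2-month example value):

```python
def should_relocate(total_block_pot_hours: float,
                    scaling_factor: float,
                    threshold_hours: float = 1440) -> bool:
    """Decision rule sketched above: scale the block's accumulated power-on
    time into a total estimated occupancy value, then compare it against the
    threshold criterion. True means the block's entire data should move."""
    total_estimated_occupancy = total_block_pot_hours * scaling_factor
    return total_estimated_occupancy >= threshold_hours

# A block whose data has accumulated 2 POT hours on a system powered on about
# 8 of 24 hours a day (scaling factor 3) is estimated at 6 hours of occupancy,
# far below the 2-month threshold, so its data stays in place.
assert should_relocate(2, 3) is False
assert should_relocate(500, 3) is True   # 1500 estimated hours >= 1440
```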
By being able to estimate how long the data has been stored and by setting a limit on the frequency of performing data relocation (for example, once every 2 months), such a memory subsystem can significantly reduce the number of P/E cycles used to relocate data, from approximately 300 or more P/E cycles per year (when relocating data every day) to, for example, about 6 P/E cycles per year (i.e., once every 2 months over 12 months). Therefore, when compared with conventional memory subsystems, the memory subsystem according to the present disclosure can extend the life of the memory subsystem and reduce and/or save the amount of processing resources used for data relocation.

FIG. 1 illustrates an example computing environment 100 including a memory subsystem 110 according to some embodiments of the present disclosure. The memory subsystem 110 may include media, such as memory components 112A to 112N. The memory components 112A to 112N may be volatile memory components, non-volatile memory components, or a combination thereof. In some embodiments, the memory subsystem is a storage system. An example of a storage system is an SSD. In some embodiments, the memory subsystem 110 is a hybrid memory/storage subsystem. Generally speaking, the computing environment 100 may include a host system 120 that uses the memory subsystem 110. For example, the host system 120 may write data to the memory subsystem 110 and read data from the memory subsystem 110.

The host system 120 may be a computing device such as a desktop computer, a laptop computer, a web server, a mobile device, or such a computing device that includes a memory and a processing device. The host system 120 may include or be coupled to the memory subsystem 110 such that the host system 120 can read data from the memory subsystem 110 or write data to the memory subsystem 110. The host system 120 may be coupled to the memory subsystem 110 via a physical host interface. As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communication connection or a direct communication connection (for example, without intermediate components), whether wired or wireless, including connections such as electrical, optical, magnetic, and other connections. Examples of a physical host interface include (but are not limited to) a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), and so forth. The physical host interface can be used to transfer data between the host system 120 and the memory subsystem 110. The host system 120 may further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N when the memory subsystem 110 is coupled with the host system 120 through the PCIe interface. The physical host interface may provide an interface for transferring control, address, data, and other signals between the memory subsystem 110 and the host system 120.

The memory components 112A to 112N may include any combination of different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes NAND-type flash memory. Each of the memory components 112A to 112N may include one or more arrays of memory cells, such as single-level cells (SLCs) or multi-level cells (MLCs) (e.g., triple-level cells (TLCs) or quad-level cells (QLCs)).
In some embodiments, a particular memory component may include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory components such as NAND-type flash memory are described, the memory components 112A to 112N may be based on any other type of memory, such as volatile memory. In some embodiments, the memory components 112A to 112N may be (but are not limited to) random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magnetoresistive random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. In addition, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, in which non-volatile memory cells can be programmed without the non-volatile memory cells being previously erased. Furthermore, the memory cells of the memory components 112A to 112N may be grouped into memory pages or data blocks, which may refer to units of the memory component used to store data.

The memory system controller 115 (hereinafter referred to as the "controller") may communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N, and other such operations. The controller 115 may include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 may be a microcontroller, dedicated logic circuitry (for example, a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.), or another suitable processor. The controller 115 may include a processor (processing device) 117 configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control the operation of the memory subsystem 110, including handling the communication between the memory subsystem 110 and the host system 120. For example, the local memory 119 may store any value determined when calculating the total estimated occupancy value of a data block. In some embodiments, the local memory 119 may include memory registers that store memory pointers, fetched data, and the like. The local memory 119 may also include read-only memory (ROM) for storing microcode. Although the example memory subsystem 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure the memory subsystem 110 may not include the controller 115 and may instead rely on external control (e.g., provided by an external host or by a processor or controller separate from the memory subsystem).

In general, the controller 115 can receive commands or operations from the host system 120, and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N.
The controller 115 may be responsible for other operations, such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translation between logical block addresses and physical block addresses that are associated with the memory components 112A to 112N. The controller 115 may further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry may convert commands received from the host system into command instructions to access the memory components 112A to 112N, and convert responses associated with the memory components 112A to 112N into information for the host system 120.

The memory subsystem 110 may also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 may include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder).

The memory subsystem 110 includes a power-on time (POT) tracking component 113 that can be used to relocate the data stored at data blocks across one or more of the memory components 112A to 112N based on a POT value of the memory subsystem 110. In some embodiments, the controller 115 includes at least a part of the POT tracking component 113. For example, the controller 115 may include a processor 117 (processing device) configured to execute instructions stored in the local memory 119 for performing the operations described herein. In some embodiments, the POT tracking component 113 is part of the host system 120, an application program, or an operating system.

The POT tracking component 113 may determine the total estimated occupancy value of a first data block of a plurality of data blocks on one of the memory components 112A to 112N. The POT tracking component 113 may determine whether the total estimated occupancy value of the first data block meets a threshold criterion (for example, exceeds a predetermined threshold). In response to determining that the total estimated occupancy value of the first data block satisfies the threshold criterion, the POT tracking component 113 may relocate all data stored at the first data block to a second data block of the plurality of data blocks. Additional details regarding the operation of the POT tracking component 113 are described below.

FIG. 2 is a diagram illustrating an example of the power-on and power-off states of the memory subsystem 110 and corresponding system POT values according to some embodiments of the present disclosure. Graph 200 represents the power-on and power-off states of the memory subsystem 110, and graph 210 depicts how the system POT value changes with changes in the power-on and power-off states. When a power source (for example, an AC power source, a battery, or the like) starts to supply power to the memory subsystem 110 (i.e., when a power-on event occurs), the memory subsystem 110 may be in a power-on state. On the other hand, when the power source stops supplying power to the memory subsystem 110 (i.e., when a power-off event occurs), the memory subsystem 110 may be in a power-off state. For example, the memory subsystem 110 may be powered on during normal operating hours (e.g., 9 am to 5 pm) and then powered off. As another example, the memory subsystem 110 may be powered on and off repeatedly during normal operating hours.
For example, as illustrated in FIG. 2, the memory subsystem can be powered on for one hour (e.g., hour 0 to hour 1), powered off for another hour (e.g., hour 1 to hour 2) due to a meeting, powered on again for two hours (e.g., hour 2 to hour 4), powered off for one hour (e.g., hour 4 to hour 5), and powered on again (e.g., hour 5 to hour 6).

The system POT value of graph 210 is a value in units of time (for example, hours) and is determined from a POT timer. The POT timer can count the time period (for example, in seconds, minutes, or hours) during which power is supplied from the power source to the memory subsystem 110, as reflected by the system POT value illustrated in graph 210. The POT timer only increments the system POT value while the memory subsystem 110 is in a power-on state (for example, when power is supplied to the memory subsystem 110). The POT timer can be maintained by the POT tracking component 113 and can be connected to the memory subsystem 110 and to the power source (for example, an AC power source or the like) of the memory subsystem 110. When power is supplied from the power source (including when the memory subsystem 110 is operating in a low-power mode), the POT tracking component 113 may continuously operate the POT timer. When the power source no longer supplies power (i.e., in the power-off state or a shutdown mode), the POT tracking component 113 can determine the current system POT value measured or counted by the POT timer and store that system POT value in a data store (for example, the local memory 119). On the other hand, when power is being supplied by the power source (i.e., in the power-on state), the POT tracking component 113 can access the data store to determine the latest stored system POT value and cause the POT timer to resume counting, or incrementing, from that latest system POT value. In another embodiment, a POT timer can be connected to each of the memory components 112A to 112N and to the power source of each of the memory components 112A to 112N. Therefore, the system POT value may represent the POT value for each of the memory components 112A to 112N.

Continuing the above example and as illustrated in FIG. 2, when the memory subsystem 110 is in a power-on state for one hour (e.g., hour 0 to hour 1), the system POT value may increase from POT hour 0 to POT hour 1. While the memory subsystem is powered on, the memory subsystem causes the POT timer to count. When the memory subsystem 110 is powered off due to a meeting (for example, during hour 1 to hour 2), the system POT value remains unchanged. However, when the memory subsystem 110 returns to the power-on state during hours 2 to 4, the system POT value increases proportionally from POT hour 1 to POT hour 3. When the memory subsystem 110 is powered off during the interruption between hour 4 and hour 5, the system POT value remains at POT hour 3. When the memory subsystem 110 is powered on again during hours 5 to 6, the system POT value increases again, from POT hour 3 to POT hour 4.
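The timer behavior traced through FIG. 2 (counting only while powered on, persisting across power-off) can be modeled with a short sketch. This is an illustrative Python model, not firmware from the disclosure; the class name, the dictionary standing in for the data store (e.g., the local memory 119), and the use of a monotonic clock are all assumptions:

```python
import time

class PotTimer:
    """Illustrative model of the POT timer: it accumulates power-on time only
    while power is supplied, and survives power-off via a persistent store."""

    def __init__(self, data_store: dict):
        self.data_store = data_store                      # stands in for local memory 119
        self.accumulated = data_store.get("system_pot", 0.0)
        self.powered_on_at = None                         # None while powered off

    def on_power_on(self):
        # Resume counting from the latest stored system POT value.
        self.powered_on_at = time.monotonic()

    def on_power_off(self):
        # Freeze the count and persist it so it survives the power-off state.
        if self.powered_on_at is not None:
            self.accumulated += time.monotonic() - self.powered_on_at
            self.powered_on_at = None
        self.data_store["system_pot"] = self.accumulated

    def current_system_pot(self) -> float:
        # Incrementing while powered on; flat while powered off.
        if self.powered_on_at is None:
            return self.accumulated
        return self.accumulated + (time.monotonic() - self.powered_on_at)
```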
Therefore, the system POT value of the memory subsystem 110 is incremented (i.e., counted) when the system is powered on and is not incremented (i.e., remains the same) when the system is powered off.

The memory subsystem 110 can use the system POT value to estimate how long data (for example, the data first written to the data block, that is, on the first page of the data block) has been stored on the data block (that is, the total estimated occupancy value) in order to determine whether to relocate the data. In one embodiment, the memory subsystem 110 may determine the total length of POT hours for which the data has been stored (i.e., the total block POT value (TBLOCKTOTAL)) and multiply it by a scaling factor that represents the typical POT of the memory subsystem 110. For example, the memory subsystem 110 may determine that the data on the data block has been stored for 2 POT hours. Assuming that the memory subsystem 110 is likely to be powered on for 8 hours a day, the memory subsystem 110 can use a scaling factor of 3 (e.g., the ratio of the total amount of time in a day (i.e., 24 hours) to the expected amount of time during which the memory subsystem 110 is powered on (e.g., 8 hours); that is, the ratio of 24 to 8, which is 3). Therefore, the memory subsystem 110 can determine that the data has been stored on the data block for approximately 6 hours (i.e., 2 POT hours multiplied by the scaling factor of 3); that is, the total estimated occupancy value of the data block is 6 hours. As another example, the scaling factor may be the ratio of the expected amount of time that the system is powered on during a day to the total amount of time in the day (i.e., the ratio of 8 to 24, which is 1/3). To determine the total estimated occupancy value of the data block, the total block POT value can then be divided by the scaling factor (i.e., 2 POT hours divided by the scaling factor of 1/3 is 6 hours).

Continuing the above example, in one embodiment the memory subsystem 110 may use a threshold criterion that requires the total estimated occupancy value of a data block to be at least 1440 hours (i.e., 2 months), or use 1440 hours as a predetermined threshold (that is, a minimum threshold), to determine whether to relocate the data. In this case, the memory subsystem 110 may determine not to relocate the data because the total estimated occupancy value of the data (i.e., 6 hours) does not meet the threshold criterion (i.e., 1440 hours). On the other hand, if the total estimated occupancy value were 1500 hours, the memory subsystem 110 would relocate the data (i.e., all the data stored at the current data block) to a new data block.

When determining the total block POT value (TBLOCKTOTAL) for the total estimated occupancy value, the memory subsystem 110 may determine how much POT has passed since the data was written to, or stored at, the data block (for example, assuming no data is written to any other page of the data block, the POT that has elapsed since the data was first written on a page of the data block). In an embodiment, the memory subsystem 110 may take the current system POT hour (TCURRSYS) (for example, POT hour 2.5 in FIG. 2) and subtract the POT hour at which the data was stored (i.e., the initial block POT value, TBLOCKINITIAL) (for example, POT hour 0.5 in FIG. 2) to calculate the total block POT value (TBLOCKTOTAL) (for example, POT hour 2).
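The arithmetic of this example can be checked in a few lines (the variable names mirror TCURRSYS, TBLOCKINITIAL, and TBLOCKTOTAL from the text; the 1440-hour threshold is the example value used above):

```python
import math

t_curr_sys = 2.5        # current system POT, in POT hours (POT hour 2.5 in FIG. 2)
t_block_initial = 0.5   # system POT when the block's initial page was written
t_block_total = t_curr_sys - t_block_initial     # 2.0 POT hours

scaling_factor = 24 / 8                          # 3: hours per day / expected POT hours per day
occupancy = t_block_total * scaling_factor       # 6.0 estimated hours

# The reciprocal formulation (scaling factor 8/24 = 1/3, used as a divisor)
# yields the same estimate, up to floating-point rounding.
assert math.isclose(occupancy, t_block_total / (8 / 24))

threshold_hours = 1440                           # example criterion: 2 months
assert occupancy < threshold_hours               # 6 hours: do not relocate yet
```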
FIG. 3 is a flowchart of an example method 300 for relocating data stored at a data block based on a system POT value of a memory subsystem according to some embodiments of the present disclosure. The method 300 may be performed by processing logic, which may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, device hardware, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the POT tracking component 113 of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood only as examples; the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.

At operation 310, the processing device determines the total estimated occupancy value of a first data block among a plurality of data blocks in the memory subsystem 110. The processing device may determine the total estimated occupancy value periodically. The total estimated occupancy value is a value in units of time and indicates an estimated time period for how long data has been stored at the data block. The total estimated occupancy value may indicate an estimated time period for how long data has been stored since it was written to the first page of the data block (i.e., how long ago data was first written to the data block). In another example, the total estimated occupancy value may indicate an estimated time period for how long data has been stored since it was written to the last page of the data block (i.e., how long ago data was last written to the data block). Therefore, although the total estimated occupancy value is described herein in terms of a data block, the value itself indicates how long data has been stored on the first or last page (among the multiple pages) of the data block. The total estimated occupancy value is nevertheless used to decide whether all the data stored at the corresponding data block should be relocated. To determine the total estimated occupancy value of the first data block, the processing device may determine the total block POT value of the first data block and apply a scaling factor to the total block POT value.

The total block POT value is a value in units of time and represents, in terms of POT, the length of time for which the data has been stored on the data block (i.e., by how much the system POT value has increased since the data was stored on the data block).

The processing device may determine the total block POT value of the first data block from the initial block POT value of the first data block and the current system POT value of the memory subsystem. As described above, the POT tracking component 113 counts the system POT value when the system is powered on but not when the system is powered off. Therefore, the total block POT value should be less than the total estimated occupancy value. The processing device may determine the total block POT value of the first data block periodically (for example, every 12 or 24 hours of system POT, or every time there is a power-on event).

The initial block POT value is a value in units of time and represents the system POT value at the time data is written on the corresponding data block.
The processing device may determine the initial block POT value of the first data block in response to receiving a request to write data on the initial page of the first data block. The initial page of the first data block may be the page, among all the pages in the first data block, for which data is first requested to be written. Thus, the processing device can identify the system POT value at the time data is first written on the first data block. In another example, the processing device may determine the initial block POT value of the first data block as the POT value at the time the memory subsystem 110 writes the last data on the first data block (that is, assuming that data has been written to all other pages, when data is being written on the last page of the first data block). In this case, the processing device determines the initial block POT value of the first data block in response to receiving the last request to write data on the first data block (i.e., a request to write the last data, or a request to write data on the last page of the first data block). In addition, the processing device may store the initial block POT value associated with the first data block in a buffer or a data store (for example, the local memory 119), so that the initial block POT value can be looked up later when determining the total block POT value. Furthermore, the processing device can store an initial block POT value for each page in the first data block, so that the processing device can determine, on a per-page basis, how long data has been stored on each page and whether that time period meets the threshold criterion, and can relocate data page by page. In this case, the processing device tracks, as the initial block POT value, the time at which data is first requested to be written on each page.

The processing device may calculate the difference between the current system POT value of the system and the initial block POT value of the first data block to determine the total block POT value of the first data block. The processing device may be configured to determine the total block POT value for each data block periodically.

The processing device may apply the scaling factor to the total block POT value to determine the total estimated occupancy value of the first data block. The scaling factor may correspond to the ratio of the total amount of time in a day to the expected amount of time that the memory subsystem 110 is powered on. For example, it can be assumed that the memory subsystem 110 is powered on for 8 hours a day. The scaling factor can then be calculated as 3 (i.e., the ratio of 24 to 8). Thus, the scaling factor may be determined based on the usage pattern of the memory subsystem 110. In another embodiment, the scaling factor may be determined based on the usage type of the memory subsystem 110. For example, there may be two types of usage: servers at data centers, and client computers in business settings. In the case where the memory subsystem 110 is used as part of a server system, the memory subsystem 110 will be powered on 24 hours a day. Therefore, the memory subsystem 110 may determine the scaling factor to be 1 (i.e., the ratio of 24 to 24). On the other hand, for a client computer, the memory subsystem 110 will be powered on for about 8 hours a day, i.e., average business hours. In this case, the memory subsystem 110 may determine the scaling factor to be 3 (i.e., the ratio of 24 to 8).
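Choosing the scaling factor from the usage type reduces to one line of arithmetic, as in the following sketch (the function name and the fixed 24-hour day constant are illustrative assumptions):

```python
HOURS_PER_DAY = 24

def scaling_factor(expected_pot_hours_per_day: float) -> float:
    """Ratio of the total time in a day to the expected powered-on time per day."""
    return HOURS_PER_DAY / expected_pot_hours_per_day

# The two usage types from the text: a data-center server is powered on
# 24 hours a day; a business client computer is powered on about 8 hours a day.
assert scaling_factor(24) == 1   # server: ratio of 24 to 24
assert scaling_factor(8) == 3    # client: ratio of 24 to 8
```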
The same scaling factor value can be used across multiple data blocks in the memory subsystem 110. As an example, the processing device may multiply the scaling factor by the total block POT value of the first data block to determine the total estimated occupancy value of the first data block, as described above with reference to FIG. 2. As another example, the processing device may divide the total block POT value of the first data block by the scaling factor, as described above with reference to FIG. 2.

At operation 320, the processing device may determine whether the total estimated occupancy value of the first data block meets a threshold criterion (for example, a minimum of 1440 hours (2 months)). The processing device may take the value used in the threshold criterion from a pre-configured default value. In another embodiment, the processing device may choose the value used in the threshold criterion so as to optimize the number of P/E cycles required for relocating data blocks. In response to determining that the total estimated occupancy value of the first data block does not meet the threshold criterion, the processing device returns to operation 310.

On the other hand, in response to determining that the total estimated occupancy value satisfies the threshold criterion, at operation 330 the processing device may relocate the data stored at the first data block (i.e., the entire data) to a second data block among the plurality of data blocks. The processing device may select a new or empty data block (i.e., a data block that does not store any data) as the second data block for the relocation. In addition, the processing device can select the data block that has been written the fewest number of times.

In another example, there may be several data blocks (including the first data block) whose data should be relocated because their corresponding total estimated occupancy values have been determined to meet the threshold criterion. In this case, the processing device can relocate the data (i.e., the entire data stored at the corresponding data blocks) from a portion (for example, 5%) of these data blocks in each relocation operation. For example, the processing device may identify a subset of data blocks from the plurality of data blocks in the memory subsystem 110, where each block in the subset has a total estimated occupancy value that meets the threshold criterion and the subset includes the first data block. Then, the processing device may relocate, in a first relocation operation, all the data stored at a first part of the subset (i.e., the data blocks from a first group (for example, the top 5%) of the subset), and relocate, in a second relocation operation, all the data stored at a second part of the subset (i.e., the data blocks from a second group (for example, the next 5%) of the subset). By relocating the data in this segmented manner, the processing resources required to move data blocks in each relocation operation are much lower than when the data from all the data blocks in the subset is relocated at once, as illustrated in the sketch below.
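The segmented relocation might look like the following sketch. The helper callables occupancy_of and relocate are hypothetical stand-ins for the operations described above, passed in as parameters so the sketch stays self-contained; the 5% portion size follows the example:

```python
from typing import Callable, Iterable, List

def relocate_in_batches(blocks: Iterable,
                        occupancy_of: Callable,   # block -> total estimated occupancy (hours)
                        relocate: Callable,       # moves a block's entire data to a new block
                        threshold_hours: float,
                        portion: float = 0.05) -> None:
    """Relocate only the blocks whose total estimated occupancy value meets the
    threshold criterion, one portion (e.g., 5%) of that subset per relocation
    operation, rather than the whole subset at once."""
    subset: List = [b for b in blocks if occupancy_of(b) >= threshold_hours]
    batch_size = max(1, int(len(subset) * portion))  # e.g., the top 5% per operation
    for start in range(0, len(subset), batch_size):
        # One relocation operation handles one part of the subset.
        for block in subset[start:start + batch_size]:
            relocate(block)
```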
FIG. 4 is a flowchart of an example method 400 for determining the total estimated occupancy value of a data block according to some embodiments of the present disclosure. The method 400 may be performed by processing logic, which may include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, device hardware, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the POT tracking component 113 of FIG. 1. Although shown in a specific sequence or order, unless otherwise specified, the order of the processes can be modified. Therefore, the illustrated embodiments should be understood only as examples; the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.

At operation 410, the processing device may detect a power-on event in the memory subsystem 110. A power-on event is an event that occurs when power is supplied to the memory subsystem 110 via the power source. At operation 420, in response to detecting the power-on event, the processing device may run a POT timer that increments the system POT value of the memory subsystem 110. As an example, the processing device may trigger the POT timer to start counting from 0 in the case of the first power-on event, or to continue counting from a previously interrupted value. In this way, the POT timer can continue counting the system POT value from where it stood when the last power-on event occurred.

At operation 430, the processing device may receive a request to write first data on the initial page (i.e., the first page of a plurality of pages) of a first data block among the plurality of data blocks in the memory subsystem 110. As described above with reference to operation 310, the request may be the first request to write data on the data block. At operation 440, the processing device may determine the initial block POT value of the first data block from the POT timer in response to receiving the request to write the first data on the first data block. The processing device can access the POT timer to obtain its current value as the initial block POT value. As described above, the current system POT value of the system is incremented when the memory subsystem 110 is powered on but is not incremented when the memory subsystem 110 is powered off.

At operation 450, the processing device may determine the total block POT value of the first data block by determining the difference between the initial block POT value of the first data block and the current system POT value of the system from the POT timer. The difference indicates how much POT has passed since the initial block POT value of the first data block was determined. Therefore, the total block POT value indicates, in terms of POT, how long the first data has been stored at the first data block.

At operation 460, the processing device may determine the total estimated occupancy value of the first data block by applying the scaling factor to the total block POT value of the first data block. As described above with reference to operation 310, the scaling factor may correspond to the ratio of the total amount of time in a day to the expected amount of time that the memory subsystem 110 is powered on.
As an example, the processing device may apply the scaling factor by multiplying the scaling factor by the total block POT value of the first data block. As another example, the scaling factor may correspond to the ratio of the expected amount of time that the memory subsystem 110 is powered on to the total amount of time in a day. In this example, the processing device may apply the scaling factor by dividing the total block POT value of the first data block by the scaling factor.

In addition, in response to determining that the total estimated occupancy value of the first data block satisfies the threshold criterion, the processing device may relocate the data stored at the first data block to the second data block of the plurality of data blocks, as described above with reference to operation 330.

FIG. 5 illustrates an example machine of a computer system 500 within which a set of instructions can be executed to cause the machine to perform any one or more of the methods discussed herein. In some embodiments, the computer system 500 may correspond to a host system (for example, the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (for example, the memory subsystem 110 of FIG. 1), or it may be used to perform the operations of a controller (for example, to execute an operating system to perform operations corresponding to the POT tracking component 113 of FIG. 1). In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular phone, a network appliance, a server, a network router, a switch, or a bridge, or any machine capable of executing a set of instructions (sequentially or otherwise) that specify actions to be taken by that machine. Further, although a single machine is described, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.

The example computer system 500 includes a processing device 502, a main memory 504 (for example, read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 518, which communicate with each other via a bus 530.

The processing device 502 represents one or more general-purpose processing devices, such as a microprocessor, a central processing unit, or the like. More specifically, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets.
The processing device 502 may also be one or more special-purpose processing devices, such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute instructions 526 for performing the operations and steps discussed herein. The computer system 500 may further include a network interface device 508 for communicating over a network 520.

The data storage system 518 may include a machine-readable storage medium 524 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 526, or software, embodying any one or more of the methods or functions described herein. The instructions 526 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500; the main memory 504 and the processing device 502 also constitute machine-readable storage media. The machine-readable storage medium 524, the data storage system 518, and/or the main memory 504 may correspond to the memory subsystem 110 of FIG. 1.

In one embodiment, the instructions 526 include instructions to implement functionality corresponding to a POT tracking component (e.g., the POT tracking component 113 of FIG. 1). Although the machine-readable storage medium 524 is shown as a single medium in the example embodiment, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" should also be taken to include any medium capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methods of the present disclosure. The term "machine-readable storage medium" should accordingly be taken to include (but not be limited to) solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
The present disclosure may refer to the actions and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as (but not limited to) any type of disk, including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein.

The present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions that may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer)-readable storage medium, such as read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

In the foregoing specification, embodiments of the present disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of the embodiments of the present disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
A memory bank redistribution based on power consumption of a plurality of memory banks of a memory die may provide for an overall reduced power consumption of the memory device. A respective power consumption for each bank may be determined and memory operations may be distributed to the banks based on the determined power consumption. The memory die may include an interface coupled to each bank. The control circuitry may remap logical-to-physical addresses of each bank based on one or more parameters, such as power consumption of each bank, memory operation count of each bank, and/or relative physical distance of each bank.
1.A method comprising:determining a respective power consumption for each of a plurality of memory banks of a memory device; anddistributing memory operations to the plurality of memory banks based on the respective power consumptions.2.The method of claim 1, wherein determining the respective power consumption comprises determining the respective power consumption of each of the plurality of memory banks based on an application to be executed using data read from or written to the memory device.3.The method of claim 1, further comprising determining a respective relative memory operation load for each of the plurality of memory banks when initializing the memory device.4.The method of claim 1, wherein distributing memory operations comprises, prior to executing an application to be executed using data read from or written to the memory device, remapping the addresses of the plurality of memory banks based on the respective relative memory operation load of each of the plurality of memory banks.5.The method of claim 1, wherein distributing memory operations includes remapping respective addresses of each of the plurality of memory banks such that an application addresses each of the plurality of memory banks in proportion to the relative memory operation load of each of the plurality of memory banks, based on a respective distance of each of the plurality of memory banks from an interface.6.The method of claim 1, further comprising operating the memory device with default addressing of the plurality of memory banks during an initial period;wherein determining the respective power consumption includes retrospectively determining the respective power consumption for each of the plurality of memory banks during the initial period.7.The method of claim 6, wherein distributing memory operations comprises distributing memory operations to the plurality of memory banks based on the determined respective power consumption after the initial period.8.The method of claim 6, wherein retrospectively determining the respective power consumption comprises keeping a count of at least one of a set of operations, including accesses, reads, and writes, of each of the plurality of memory banks; andwherein the determined respective power consumption is proportional to the count.9.The method of claim 8, wherein distributing memory operations includes remapping the addresses of the plurality of memory banks based on a respective operation count of each of the plurality of memory banks proportional to a respective distance of each of the plurality of memory banks from an interface.10.
The method of claim 9, wherein remapping the addresses is performed in response to any of the plurality of memory banks reaching a threshold count.11.A device comprising:A memory die, which includes:an interface; anda plurality of memory banks, each coupled to the interface; andcontrol circuitry coupled to the memory die, wherein the control circuitry is configured to:receive an indication of a memory operation load, from an application program to execute, for at least one of a plurality of logical addresses corresponding to the plurality of memory banks; andremap logical-to-physical addressing of at least one of the plurality of memory banks based on the indication.12.The apparatus of claim 11, wherein the indication indicates a particular logical address that will have a maximum memory operation load from the application; andwherein the control circuitry is configured to remap the physical address of a particular memory bank closest to the interface to the particular logical address.13.The apparatus of claim 12, further comprising a respective fuse or antifuse associated with each of the plurality of memory banks; andwherein the control circuitry is configured to activate the respective fuse or antifuse associated with a different memory bank originally mapped to the particular logical address to remap the physical address of the different memory bank to a different logical address originally mapped to the particular memory bank.14.The apparatus of claim 11, further comprising a mode register coupled to the control circuitry;wherein the indication indicates a particular logical address that will have a maximum memory operation load from the application;wherein the apparatus is configured to write the particular logical address to the mode register; andwherein the control circuitry is configured to remap the physical address of a particular memory bank to the logical address stored in the mode register.15.The apparatus of claim 11, wherein the control circuitry is configured to:receive an indication of a respective memory operation load for each of the plurality of logical addresses; andremap the respective logical-to-physical addressing of each of the plurality of memory banks based on the indication such that the relative memory operation load of each of the plurality of memory banks is proportional to the respective distance of each of the plurality of memory banks from the interface.16.A device comprising:A memory die, which includes:an interface; anda plurality of memory banks, each coupled to the interface; andcontrol circuitry coupled to the memory die, wherein the control circuitry is configured to:operate the memory die with default logical-to-physical address translation during an initial period;count memory operations for each of the plurality of memory banks during the initial period; andafter the initial period, remap a default logical address of at least one of the plurality of memory banks to a physical address of a different one of the plurality of memory banks based on the count.17.
17. The apparatus of claim 16, wherein the at least one of the plurality of memory banks comprises a particular memory bank having a maximum count associated therewith; and
wherein the different memory bank is closest to the interface.

18. The apparatus of claim 16, further comprising a respective fuse or antifuse associated with each of the plurality of memory banks; and
wherein the control circuitry is configured to activate the corresponding fuse or antifuse in response to the count associated with the corresponding memory bank reaching a threshold.

19. The apparatus of claim 18, wherein the control circuitry is configured to remap the default logical address of the corresponding memory bank to the physical address of the memory bank closest to the interface that has not yet been remapped.

20. The apparatus of claim 16, wherein the initial period comprises one of a predefined length of time, a predefined total count for all of the plurality of memory banks, or a predefined threshold count for any of the plurality of memory banks.
BANK REDISTRIBUTION BASED ON POWER CONSUMPTION

TECHNICAL FIELD

The present disclosure relates generally to memory devices, and more particularly, to apparatus and methods related to power consumption-based bank redistribution.

BACKGROUND

Memory devices are typically provided as internal semiconductor integrated circuit devices in computers or other electronic devices. There are many different types of memory, including volatile and non-volatile memory. Volatile memory may require power to hold its data and includes random access memory (RAM), such as dynamic random access memory (DRAM), an example of which is synchronous dynamic random access memory (SDRAM). Non-volatile memory can provide persistent data by retaining stored data when not powered, and can include NAND flash memory, NOR flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), erasable programmable ROM (EPROM), and resistance variable memory, such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), among others.

Memory is also used as volatile and non-volatile data storage for various electronic applications. Non-volatile memory can be used, for example, in personal computers, portable memory sticks, digital cameras, cellular telephones, portable music players such as MP3 players, movie players, and other electronic devices.

SUMMARY OF THE INVENTION

In one aspect, the present application provides a method comprising: determining a respective power consumption for each of a plurality of memory banks of a memory device; and distributing memory operations to the plurality of memory banks based on the respective power consumption.

In another aspect, the present application provides an apparatus comprising: a memory die, which includes an interface and a plurality of memory banks, each coupled to the interface; and control circuitry coupled to the memory die, wherein the control circuitry is configured to: receive an indication of a memory operation load, from an application program to be executed, for at least one of a plurality of logical addresses corresponding to the plurality of memory banks; and remap logical-to-physical addressing of at least one of the plurality of memory banks based on the indication.

In another aspect, the present application provides an apparatus comprising: a memory die, which includes an interface and a plurality of memory banks, each coupled to the interface; and control circuitry coupled to the memory die, wherein the control circuitry is configured to: operate the memory die with default logical-to-physical address translation during an initial period; count memory operations for each of the plurality of memory banks during the initial period; and after the initial period, remap a default logical address of at least one of the plurality of memory banks to a physical address of a different one of the plurality of memory banks based on the count.

DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an apparatus in the form of a memory device according to the present disclosure.

FIG. 2A is a graph of the power consumption of different memory banks operating at different speeds during a write operation.

FIG. 2B is a graph of the power consumption of different memory banks operating at different speeds during a read operation.

FIG. 3A is a block diagram of memory banks with default logical-to-physical addressing.

FIG. 3B is a block diagram of memory banks with remapped logical-to-physical addressing in accordance with the
present disclosure.

FIG. 4A is a block diagram of memory banks with default logical-to-physical addressing.

FIG. 4B is a block diagram of memory banks with remapped logical-to-physical addressing in accordance with the present disclosure.

FIG. 5 is a flowchart illustrating a method for power consumption-based bank redistribution in accordance with the present disclosure.

FIG. 6 is a flowchart illustrating an example of a memory bank remapping operational process in accordance with the present disclosure.

DETAILED DESCRIPTION

The present disclosure includes apparatus and methods related to power consumption-based bank redistribution. Bank redistribution refers to changing the distribution of memory operations to the physical banks of memory within a memory device. Power consumption refers to the power used by each of the memory banks when performing memory operations. Power consumption may be measured and/or modeled for each memory bank, or may be based on test or historical data of memory device operation. Various parameters can be used as surrogates for the relative power consumption of the memory banks (e.g., instead of directly measuring power consumption). Examples of such parameters include the relative number of memory operations per memory bank, the distance of each memory bank from the interface, and the like.

Additional power savings in memory operations are desired in memory solutions that support a wide variety of applications. Such power savings may be emphasized for mobile devices and/or automotive uses of memory devices. For example, with regard to the use of memory devices in the automotive space, software updates may be applied when the car is not connected to an external power source (e.g., when the car is draining its battery). Any power savings that can be achieved (even for the memory components of an automobile) are desirable to improve overall battery life and thus improve system performance.

Some memory devices have a common or similar physical layout of memory banks across different speeds and technologies. For example, subsequent generations of low power DRAM (LPDRAM) memory devices may use the same physical layout of the memory banks, even if the process for fabricating the memory banks changes between generations. As another example, different memory devices operating at different speeds may have a common or similar physical layout of memory banks.

Various approaches have been proposed or used to reduce power consumption. Some approaches designed to achieve power savings in the operation of memory devices sacrifice performance (e.g., reduce speed) to produce less power consumption. For example, the operating voltage or frequency can be reduced, delay locked loops (DLLs) can be removed, and the like. Some approaches seek reduced power consumption by adding functionality to memory devices (e.g., temperature compensated self-refresh, partial array self-refresh, deep power down, etc.). Some approaches reduce drive strength (e.g., reducing alternating current (AC) switching current or providing an interface with lower power consumption). Some approaches attempt to reduce bus loading (e.g., stacked package structures, data bus inversion, lower common input/output (CIO), etc.).

To address these and other problems associated with some previous approaches, at least one embodiment of the present disclosure alters the distribution of memory operations provided to the banks of a memory device such that banks with smaller relative power consumption receive more traffic (memory operations), as roughly illustrated by the sketch below.
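As a rough, non-limiting illustration of this redistribution idea, the following minimal C sketch (illustrative only, not the claimed control circuitry; the bank count and per-bank power figures are hypothetical placeholders) apportions memory operations in inverse proportion to each bank's relative power consumption, so that lower-power banks receive more traffic:

    #include <stdio.h>

    #define NBANKS 8

    int main(void)
    {
        /* Hypothetical relative power consumption per bank (arbitrary
           units), echoing the ordering of FIG. 2A: BANK0/BANK1 lowest,
           BANK6/BANK7 highest. */
        double power[NBANKS] = { 1.0, 1.0, 1.2, 1.2, 1.4, 1.4, 1.6, 1.6 };
        double weight[NBANKS];
        double total = 0.0;

        /* Weight each bank by the inverse of its relative power consumption. */
        for (int b = 0; b < NBANKS; b++) {
            weight[b] = 1.0 / power[b];
            total += weight[b];
        }

        /* Fraction of memory operations each bank would receive. */
        for (int b = 0; b < NBANKS; b++)
            printf("BANK%d: %.1f%% of operations\n",
                   b, 100.0 * weight[b] / total);

        return 0;
    }

Under the placeholder figures above, BANK0 and BANK1 would each receive approximately 16% of the operations, while BANK6 and BANK7 would each receive approximately 10%.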
The memory device may model, track, or learn the relative power consumption of the memory banks. For a particular application using the memory device, the logical address of the bank receiving the most operations can be determined and then remapped to the physical address of the bank with the smallest relative power consumption. A logical address may also be referred to as a host address. The memory device can be remapped without changing the addressing used by the host (logical addressing). In some embodiments, memory operations may be distributed to each bank in proportion to the power consumption of each bank. Testing of implementations of various embodiments of the present disclosure has shown that the memory device operates with power savings of approximately 5% to 15% for different applications executed using the memory as the primary storage device. Additional embodiments and advantages are described in more detail below.

As used herein, the singular forms "a", "an", and "the" include both singular and plural referents unless the context clearly dictates otherwise. Furthermore, the word "may" is used throughout this application in a permissive sense (i.e., having the potential to, being able to), rather than in a mandatory sense (i.e., must). The term "comprising" and its derivatives mean "including but not limited to". The term "coupled" means directly or indirectly connected.

The figures herein follow a numbering convention in which the first one or more digits correspond to the figure number and the remaining digits identify an element or component in the figure. Similar elements or components between different figures may be identified by the use of similar numbers. For example, 330 may refer to element "30" in FIG. 3B, and a similar element may be represented as 430 in FIG. 4B. Similar elements within a figure may be referred to using a hyphen and an additional number or letter. Such similar elements may generally be referred to without the hyphen and additional number or letter. For example, elements 332-0, 332-1, . . . , 332-Y in FIG. 3A may be collectively referred to as 332. As will be appreciated, elements shown in the various embodiments herein may be added, exchanged, and/or removed so as to provide a number of additional embodiments of the present disclosure. In addition, as should be understood, the proportions and relative scales of the elements provided in the figures are intended to illustrate certain embodiments of the present disclosure and should not be taken in a limiting sense.

FIG. 1 is a block diagram of an apparatus in the form of a memory device 104 in accordance with the present disclosure. The memory device 104 is coupled to a host 102 via an interface. As used herein, the host 102, the memory device 104, or the memory array 120, for example, may also be individually considered an "apparatus." The interface may communicate control, address, data, and other signals between the memory device 104 and the host 102. The interface may include a command bus (e.g., coupled to control circuitry 106), an address bus (e.g., coupled to address circuitry 108), and a data bus (e.g., coupled to input/output (I/O) circuitry 110). In some embodiments, the command bus and the address bus may comprise a common command/address bus. In some embodiments, the command bus, the address bus, and the data bus may be part of a common bus. The command bus may communicate signals between the host 102 and the control circuitry 106, such as clock signals for timing, reset signals, chip selects, parity information, alarms, and the like.
The address bus may communicate signals, such as logical addresses of memory banks in the memory array 120 for memory operations, between the host 102 and the address circuitry 108. The interface may be a physical interface employing a suitable protocol. Such a protocol may be custom or proprietary, or the interface may employ a standardized protocol, such as Peripheral Component Interconnect Express (PCIe), Gen-Z Interconnect, Cache Coherent Interconnect for Accelerators (CCIX), or the like. In some cases, the control circuitry 106 is a register clock driver (RCD), such as the RCD employed on an RDIMM or an LRDIMM.

The host 102 and the memory device 104 may be part of a personal laptop computer, a desktop computer, a digital camera, a mobile phone, a memory card reader, an Internet of Things (IoT) enabled device, an automobile, or various other types of systems. For clarity, the system has been simplified to focus on features with particular relevance to the present disclosure. The host 102 may include a number of processing resources (e.g., one or more processors, microprocessors, or some other type of control circuitry) capable of accessing the memory device 104.

The memory device 104 may provide main memory for the host 102 or may be used as additional memory or storage for the host 102. By way of example, the memory device 104 may be a dual in-line memory module (DIMM) including memory that operates as double data rate (DDR) DRAM, such as DDR5, graphics DDR DRAM, such as GDDR6, or another type of memory system. Embodiments are not limited to a particular type of memory device 104. Other examples of the memory device 104 include RAM, ROM, SDRAM, LPDRAM, PCRAM, RRAM, flash memory, three-dimensional cross-point memory, and the like. A cross-point array of non-volatile memory can perform bit storage based on a change in bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.

The control circuitry 106 may decode signals provided by the host 102. The control circuitry 106 may also be referred to as command input and control circuitry, and may represent the functionality of different discrete ASICs or portions of different ASICs, depending on the implementation. The signals may be commands provided by the host 102. These signals may include chip enable signals, write enable signals, address latch signals, etc., used to control operations performed on the memory array 120. Such operations may include data read operations, data write operations, data erase operations, data move operations, and the like. The control circuitry 106 may include a state machine, a sequencer, and/or some other type of control circuitry, which may be implemented in the form of hardware, firmware, or software, or any combination of the three.

Data may be provided to and/or from the memory array 120 via data lines coupling the memory array 120 to the input/output (I/O) circuitry 110 via read/write circuitry 116. The I/O circuitry 110 may be used for bidirectional data communication with the host 102 over the interface. The read/write circuitry 116 is used to write data to, or read data from, the memory array 120. As an example, the read/write circuitry 116 may include various drivers, latch circuitry, and the like. In some embodiments, the data path may bypass the control circuitry 106.

The memory device 104 includes address circuitry 108 to latch address signals provided over the interface.
Address signals may be received and decoded by a row decoder 112 and a column decoder 114 to access the memory array 120. Data can be read from the memory array 120 by sensing voltage and/or current changes on sense lines using sensing circuitry 118. The sensing circuitry 118 may be coupled to the memory array 120. The memory array 120 may represent multiple banks of memory, illustrated in more detail in FIGS. 3A-4B. As an example, the sensing circuitry 118 may include sense amplifiers that can read and latch a page (e.g., a row) of data from the memory array 120. Sensing (e.g., reading) a bit stored in a memory cell may involve sensing a relatively small voltage difference on a pair of sense lines, which may be referred to as digit lines or data lines.

The memory array 120 may include memory cells arranged in rows coupled by access lines (which may be referred to herein as word lines or select lines) and columns coupled by sense lines (which may be referred to herein as digit lines or data lines). Although the memory array 120 is shown as a single memory array, the memory array 120 may represent multiple memory arrays arranged in banks of the memory device 104. The memory array 120 may include a number of memory cells, such as volatile memory cells (e.g., DRAM memory cells, among other types of volatile memory cells) and/or non-volatile memory cells (e.g., RRAM memory cells, among other types of non-volatile memory cells).

The control circuitry 106 may also include a number of registers 122 (e.g., mode registers), fuse options 124, and/or an on-die memory array (not specifically illustrated) that store default settings of the memory array 120, which may be changed through operation thereof. The registers 122 may be read and/or written based on commands from the host 102, a controller, and/or the control circuitry 106. The registers 122 may include individual registers that are "reserved for future use" (RFU) as part of the device specification. The RFU registers may be used to fulfill the roles described herein for the registers 122. For example, the registers 122 may at least initially store values indicating the default logical-to-physical address mapping of the memory banks (generally indicated by the memory array 120). Those values can be changed by rewriting the registers 122.

In some embodiments, the memory device 104 may be configured to write a logical address to a particular register 122. The logical address may be the logical address with the largest memory operation load from an application program to be executed by the host 102 using the memory device 104 as the primary storage device. In some embodiments, the control circuitry 106 may be configured to write a logical address to a particular register 122. The logical address may be the logical address with the maximum count of memory operations during an initial period. In either case, the control circuitry 106 may be configured to remap the physical address of the memory bank closest to the interface to the logical address stored in the particular register 122.

The fuse option 124 block represents fuses or antifuses (e.g., a read-only memory with bits set or changed by operation of the fuses or antifuses). In some embodiments, the fuse options 124 may be coupled to the registers 122. In some embodiments, there is a respective fuse option 124 associated with each memory bank.
The control circuitry 106 may be configured to activate the respective fuse option 124 associated with a particular memory bank having a particular physical address to remap that memory bank from its originally mapped logical address to a different logical address, as described herein. For example, the control circuitry 106 may be configured to activate a respective fuse option in response to a count associated with a particular memory bank reaching a threshold count. In some embodiments, the control circuitry 106 is configured to remap default logical addresses to the physical addresses of the memory banks closest to the interface by activating a sequence of the respective fuse options, as described in more detail herein.

In at least one embodiment, the control circuitry 106 may be configured to receive an indication of a memory operation load from the host 102. The memory operation load is based on an application to be executed. The memory operation load may be derived from a memory usage model applied to the application. The memory operation load may be an amount of memory operations corresponding to each of a plurality of logical addresses of the memory banks. In some embodiments, a bank logical address is a less specific portion of a more specific (e.g., longer) logical address. In some embodiments, the bank logical address comprises the entire logical address. Through the correlation between the logical addresses received from the host 102 and the physical addresses of the memory banks according to the logical-to-physical translation table stored in the memory device 104, the memory operation load can be decomposed into a relative load per memory bank of the memory die comprising the memory array 120.

FIG. 2A is a graph of the power consumption of different memory banks operating at different speeds during a write operation. The vertical axis of the graph is current. For each bank (BANK0, BANK1, BANK2, BANK3, BANK4, BANK5, BANK6, BANK7), a bar graph along the horizontal axis represents the current used at each of a number of different speeds. Each of the memory banks is part of the same memory device and shares the same interface of the memory device with the other memory banks. The different speeds may represent different operating speeds (e.g., frequencies) of the same memory device, or different operating speeds of different memory devices having the same physical arrangement of banks relative to the interface of the memory device. The speed associated with corresponding bars in the bar graph is the same across the different banks: the first (leftmost) bar represents the same speed for each of BANK0, BANK1, . . . , BANK7, as does the second bar, the third bar, and so on.

As shown in FIG. 2A, and as can be expected, banks of memory operating at higher speeds have greater power consumption (as evidenced by the greater current in the bar graph). However, different banks of memory operating at the same speed have different power consumption. In this example, for each speed, the power consumption of BANK6 and BANK7 is greater than that of BANK4 and BANK5, the power consumption of BANK4 and BANK5 is greater than that of BANK2 and BANK3, and the power consumption of BANK2 and BANK3 is greater than that of BANK0 and BANK1. For each speed, the power consumption of BANK6 is roughly equal to that of BANK7. For each speed, the power consumption of BANK4 is roughly equal to that of BANK5.
For each speed, the power consumption of BANK2 is roughly equal to that of BANK3. For each speed, the power consumption of BANK0 is roughly equal to that of BANK1. However, the power consumption of each bank of the memory may differ for a layout of the memory banks different from the one shown.

FIG. 2B is a graph of the power consumption of different memory banks operating at different speeds during a read operation. The graph is similar to the graph shown in FIG. 2A, except that the power consumption data is for read operations instead of write operations. Similar to FIG. 2A, the power consumption differs for different banks of memory operating at the same speed. In this example, for each speed, the power consumption of BANK6 and BANK7 is greater than that of BANK4 and BANK5, the power consumption of BANK4 and BANK5 is greater than that of BANK2 and BANK3, and the power consumption of BANK2 and BANK3 is greater than that of BANK0 and BANK1. For each speed, the power consumption of BANK6 is roughly equal to that of BANK7, the power consumption of BANK4 is roughly equal to that of BANK5, the power consumption of BANK2 is roughly equal to that of BANK3, and the power consumption of BANK0 is roughly equal to that of BANK1. As will be illustrated in FIGS. 3A-4B, this pattern of power consumption across the different memory banks can be explained by the physical layout of the memory device. However, the power consumption of each bank of the memory may differ for a layout of the memory banks different from the one shown.

FIG. 3A is a block diagram of memory banks 332 with default logical-to-physical addressing. The memory banks 332 may collectively form the memory array 320, although embodiments are not so limited. The memory banks 332 are on the same die as the interface 330. The interface 330 may be referred to in the art as a contact pad or a DQ bus. As illustrated, each memory bank 332 may be individually coupled to the interface 330. In some embodiments, the memory banks are individually addressable and individually operable.

Each memory bank 332 has a physical address and a logical address. Specifically, bank 332-0 has physical address "PA 0" and logical address "LA 0". Bank 332-1 has physical address "PA 1" and logical address "LA 1". Bank 332-2 has physical address "PA 2" and logical address "LA 2". Bank 332-3 has physical address "PA 3" and logical address "LA 3". Bank 332-4 has physical address "PA 4" and logical address "LA 4". Bank 332-5 has physical address "PA 5" and logical address "LA 5". Bank 332-6 has physical address "PA 6" and logical address "LA 6". Bank 332-7 has physical address "PA 7" and logical address "LA 7". The example illustrated in FIG. 3A represents default logical-to-physical addressing.

FIG. 3B is a block diagram of memory banks 332 with remapped logical-to-physical addressing in accordance with the present disclosure. The array 320, the interface 330, and the banks 332 are similar to those illustrated in FIG. 3A, except that at least one logical address has been remapped. Specifically, the logical addresses of bank 332-0 and bank 332-6 have been remapped compared to FIG. 3A. Bank 332-0 has physical address "PA 0" and remapped logical address "LA 6", while bank 332-6 has physical address "PA 6" and remapped logical address "LA 0".
In other words, logical address "LA 0", which previously mapped to bank 332-0, has been remapped to bank 332-6, and logical address "LA 6", which previously mapped to bank 332-6, has been remapped to bank 332-0. The remaining logical-to-physical mappings of banks 332-1, 332-2, 332-3, 332-4, 332-5, and 332-7 remain in the default state illustrated in FIG. 3A.

For example, the control circuitry of the memory device may receive an indication, from an application to be executed, that the memory operation load is greatest for a particular logical address (e.g., logical address "LA 6"). The control circuitry may remap the physical address "PA 0" of the memory bank 332-0 closest to the interface 330 to the logical address "LA 6". In some embodiments, the remapping may include the control circuitry performing a one-time fuse or antifuse activation, e.g., as described above with respect to FIG. 1. To maintain the overall logical address structure, the control circuitry may also remap the physical address "PA 6" of memory bank 332-6 to the logical address previously mapped to bank 332-0, as illustrated. As another example, the same remapping may occur based on counting operations on the banks 332 during an initial period and determining that bank 332-6 has the maximum count associated with it during the initial period.

FIG. 4A is a block diagram of memory banks 432 with default logical-to-physical addressing. The memory banks 432 may collectively form the memory array 420, although embodiments are not so limited. The memory banks 432 are on the same die as the interface 430. As illustrated, each bank 432 is individually coupled to the interface 430. Each bank 432 has a physical address and a logical address. Specifically, bank 432-0 has physical address "PA 0" and logical address "LA 0". Bank 432-1 has physical address "PA 1" and logical address "LA 1". Bank 432-2 has physical address "PA 2" and logical address "LA 2". Bank 432-3 has physical address "PA 3" and logical address "LA 3". Bank 432-4 has physical address "PA 4" and logical address "LA 4". Bank 432-5 has physical address "PA 5" and logical address "LA 5". Bank 432-6 has physical address "PA 6" and logical address "LA 6". Bank 432-7 has physical address "PA 7" and logical address "LA 7". The example illustrated in FIG. 4A represents default logical-to-physical addressing.

FIG. 4B is a block diagram of memory banks with remapped logical-to-physical addressing in accordance with the present disclosure. The array 420, the interface 430, and the memory banks 432 are similar to those illustrated in FIG. 4A, except that the logical addresses have been remapped. Bank 432-0 has remapped logical address "LA 4". Bank 432-1 has remapped logical address "LA 6". Bank 432-2 has remapped logical address "LA 7". Bank 432-3 has remapped logical address "LA 0". Bank 432-4 has remapped logical address "LA 1". Bank 432-6 has remapped logical address "LA 2". Bank 432-7 has remapped logical address "LA 3". Bank 432-5 maintains its default logical address "LA 5".

For example, the control circuitry of the memory device may receive an indication of the respective memory operation load, from the application to be executed, for each logical address (e.g., "LA 0" through "LA 7").
The control circuitry may remap the physical addressing of each memory bank 432 based on the indication such that the relative memory operation load of each memory bank is proportional to the respective distance of each of the plurality of memory banks from the interface. For example, the logical addresses may be ordered from largest to smallest access load (targeted or counted) as follows: "LA 4", "LA 6", "LA 7", "LA 0", "LA 1", "LA 5", "LA 2", "LA 3". Thus, as illustrated in the example of FIG. 4B, "LA 4" is remapped to "PA 0" corresponding to bank 432-0, "LA 6" is remapped to "PA 1" corresponding to bank 432-1, "LA 7" is remapped to "PA 2" corresponding to bank 432-2, "LA 0" is remapped to "PA 3" corresponding to bank 432-3, "LA 1" is remapped to "PA 4" corresponding to bank 432-4, "LA 5" is remapped to "PA 5" corresponding to bank 432-5, "LA 2" is remapped to "PA 6" corresponding to bank 432-6, and "LA 3" is remapped to "PA 7" corresponding to bank 432-7. In this example, although bank 432-5 retains its default logical address "LA 5", it is considered to have been remapped because logical address "LA 5" is reassigned to that memory bank based on the bank access load.

FIG. 5 is a flowchart illustrating a method for power consumption-based bank redistribution in accordance with the present disclosure. The method described in FIG. 5 may be performed by, for example, a memory device, such as the memory device 104 illustrated in FIG. 1. At block 550, the method may include determining a respective power consumption for each of a plurality of banks of the memory device. Determining the respective power consumption may include proactively determining the power consumption of each memory bank based on an application to be executed using the memory device. The host 102 illustrated in FIG. 1 may execute an application program using the memory device 104 as the primary storage device for executing the application program. The application can target logical addresses for memory operations. A logical address may contain, point to, or be associated with a memory bank (e.g., by way of a logical address that is more specific than a bank, but indicates a portion of memory corresponding to a memory bank). Based on the application program's instructions, it may be determined which memory addresses will be accessed in greater numbers than other memory addresses when the application program is executed. Thus, which memory banks will use more power than others may be determined according to the default logical-to-physical addressing scheme. This determination can be made, for example, when the memory device is initialized.

At block 552, the method may include distributing memory operations to the plurality of memory banks based on the respective power consumption. Distributing memory operations may be accomplished by remapping the addresses of the memory banks based on each memory bank's respective relative memory operation load. In at least one embodiment, the remapping may be performed prior to execution of the application. The remapping can be done so that the application addresses each of the banks with a relative memory operation load that is proportional to each bank's respective distance from the interface (e.g., the interface 430 illustrated in FIG. 4B). Banks closer to the interface can be remapped to logical addresses on which the application performs larger amounts of memory operations, and banks farther from the interface can be remapped to logical addresses on which the application performs smaller amounts of memory operations, as in the sketch below.
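The following C sketch (illustrative only, not the claimed circuitry; the load values are hypothetical, chosen to reproduce the ordering of FIG. 4B) ranks the logical bank addresses by expected operation load and maps the busiest logical address to the physical bank closest to the interface:

    #include <stdint.h>
    #include <stdio.h>

    #define NBANKS 8

    int main(void)
    {
        /* Expected operations per logical bank address (hypothetical). */
        uint32_t load[NBANKS]    = { 40, 35, 10, 5, 90, 20, 70, 60 };
        /* Physical bank numbers ordered from closest to farthest from
           the interface; here simply PA 0..7, as in FIG. 4A. */
        uint8_t  by_dist[NBANKS] = { 0, 1, 2, 3, 4, 5, 6, 7 };
        uint8_t  order[NBANKS];
        uint8_t  l2p[NBANKS];

        /* Order logical addresses from largest to smallest load
           (insertion sort is enough for eight banks). */
        for (int i = 0; i < NBANKS; i++)
            order[i] = (uint8_t)i;
        for (int i = 1; i < NBANKS; i++) {
            uint8_t l = order[i];
            int j = i - 1;
            while (j >= 0 && load[order[j]] < load[l]) {
                order[j + 1] = order[j];
                j--;
            }
            order[j + 1] = l;
        }

        /* The busiest logical address gets the closest physical bank. */
        for (int i = 0; i < NBANKS; i++)
            l2p[order[i]] = by_dist[i];

        for (int l = 0; l < NBANKS; l++)
            printf("LA %d -> PA %d\n", l, l2p[l]);
        return 0;
    }

Running the sketch with the hypothetical loads above reproduces the mapping of FIG. 4B, e.g., "LA 4" onto "PA 0" and "LA 3" onto "PA 7".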
Accordingly, at least one embodiment of the present disclosure advantageously reduces the overall power consumption of the memory device, relative to the default mapping, for executing the application.

Although not specifically illustrated in FIG. 5, in at least one embodiment, the method may include operating the memory device with default addressing of the memory banks during an initial period. The respective power consumption may be determined retrospectively during or after the initial period. Determining power consumption retrospectively may include the memory device counting operations of each memory bank during the initial period. Such memory operations may include one or more of accesses, reads, and writes to each memory bank. The determined respective power consumption is based on, and proportional to, the memory operation count for each bank. After the initial period, memory operations may be distributed to the memory banks based on the determined power consumption. Distribution based on power consumption may be achieved by remapping the addresses of the banks based on the respective operation count of each bank in proportion to the respective distance of each bank from the interface. In at least one embodiment, the remapping may be performed in response to any of the banks reaching a threshold count. The threshold count may be programmed into the memory device prior to operating the memory device.

FIG. 6 is a flowchart illustrating an example of a memory bank remapping operational process in accordance with the present disclosure. At least a portion of the process illustrated in FIG. 6 may be performed by a memory device. At 660, the memory device may operate regularly (e.g., with default logical-to-physical addressing). In some embodiments, the memory die may operate with default logical-to-physical address translation during the initial period. At 661, during regular operation, the memory device may count memory operations (e.g., bank accesses or bank read/write counts) for each memory bank (e.g., during the initial period). At 662, if the count of any bank reaches a predefined threshold, the bank may be marked at 663. Marking a bank may include operating a fuse or antifuse associated with the bank, changing a value in a mode register associated with the bank, storing a value in a controller or separate memory of the control circuitry (e.g., in static RAM), or noting the state of the bank in another component by which the control circuitry of the memory device can track it. Counting bank accesses may continue at 661 if the threshold count has not been reached for any bank.

At 664, it may be determined whether a remapping condition exists for the marked bank. If no remapping condition exists, bank accesses may continue to be counted, as indicated at 661. If a remapping condition does exist, then at 665 the logical-to-physical addressing of at least one bank may be remapped. For example, after the initial period, a default logical address of at least one bank may be remapped, based on the count, to the physical address of a different bank than the one to which it was initially mapped.

Various remapping conditions can be implemented. For example, the remapping condition may be the marking of any bank. For this remapping condition, a logical-to-physical address remapping can occur whenever a bank is marked. For example, the first time a bank is marked, the logical address associated with it can be remapped to the bank closest to the interface, as in the sketch below.
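A hypothetical software model of this marking-and-remapping step (a sketch only, not the disclosed circuitry; the threshold, bank count, and function names are assumptions) is:

    #include <stdbool.h>
    #include <stdint.h>

    #define NBANKS    8
    #define THRESHOLD 10000u

    static uint32_t count[NBANKS];    /* operations counted per logical bank */
    static bool     marked[NBANKS];   /* bank has reached the threshold      */
    static uint8_t  l2p[NBANKS] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    static bool     pa_taken[NBANKS]; /* physical bank already claimed       */

    /* Remap logical bank `la` to the closest physical bank not yet
       claimed by a marked bank (PA 0 is closest to the interface). */
    static void remap_to_closest(int la)
    {
        int pa = 0;
        while (pa < NBANKS && pa_taken[pa])
            pa++;
        if (pa == NBANKS)
            return;                 /* every close bank already claimed */
        pa_taken[pa] = true;

        /* Swap so the mapping stays one-to-one: the logical address
           currently on `pa` takes over this bank's old physical bank,
           mirroring the "LA 0" / "LA 6" swap of FIG. 3B. */
        for (int other = 0; other < NBANKS; other++) {
            if (l2p[other] == pa) {
                l2p[other] = l2p[la];
                break;
            }
        }
        l2p[la] = (uint8_t)pa;
    }

    /* Record one memory operation to logical bank `la`; mark the bank
       and remap when its count reaches the threshold. */
    static void note_operation(int la)
    {
        if (++count[la] == THRESHOLD) {
            marked[la] = true;
            remap_to_closest(la);
        }
    }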
The next time a bank is marked, the logical address associated with it may be remapped to the next bank closest to the interface, and so on. Another example of a remapping condition is a specific number of banks (other than one) having been marked. For example, remapping can be triggered for each pair of banks that is marked. This embodiment may be particularly applicable to memory dies, such as the memory die shown in FIG. 3B, where pairs of memory banks are equidistant from the interface. Another example of a remapping condition is at least half of the memory banks having been marked, which triggers remapping of all of the memory banks. This embodiment can serve as a balance between waiting for a count for all banks and having enough counts for most banks to best estimate which bank logical addresses will be accessed the most, and remapping the bank logical addresses accordingly, in proportion to the physical distance of the banks from the interface. Another example of a remapping condition is the end of the initial period. The initial period may be a predefined length of time, a predefined total count for all banks, or a predefined threshold count for any bank.

Embodiments may include a tangible machine-readable storage medium (also referred to as a computer-readable medium) on which is stored one or more sets of instructions or software embodying any one or more of the methodologies or functions described herein. In some embodiments, a memory device or a processing device constitutes a machine-readable medium. The term "machine-readable storage medium" includes a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" includes any medium that is capable of storing or encoding a set of instructions for execution by a machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" includes, but is not limited to, solid-state memories, optical media, and magnetic media.

Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.

In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure must use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment.
Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
A method and apparatus for controlling system access to a memory include receiving first and second instructions and evaluating whether both instructions can architecturally complete. When at least one instruction cannot architecturally complete, both instructions are delayed. When both instructions can architecturally complete and at least one is a write instruction, a write control of the memory is adjusted to account for an evaluation delay. The evaluation delay can be sufficient to evaluate whether both instructions can architecturally complete. The evaluation delay can be applied to the write control and not to the read control of the memory. A precharge clock of the memory can be adjusted to account for the evaluation delay. Evaluating whether both instructions can architecturally complete can include determining whether data for each instruction is located in a cache and whether the instructions are memory access instructions.
CLAIMS

What is claimed is:

1. A method for controlling system access to a memory comprising:
receiving a first processor instruction and a second processor instruction;
evaluating whether the first and second processor instructions can architecturally complete;
when at least one of the first and second processor instructions cannot architecturally complete, delaying both the first and second processor instructions; and
when both of the first and second processor instructions can architecturally complete and at least one of the first and second processor instructions is a write instruction, adjusting a write control of the memory to account for an evaluation delay.

2. The method of claim 1, wherein the evaluation delay is a sufficient time to evaluate whether the first and second processor instructions can architecturally complete.

3. The method of claim 2, wherein the evaluation delay is accounted for in the write column select for the memory.

4. The method of claim 1, wherein when either of the first or second processor instructions is a read instruction, a read control of the memory does not account for the evaluation delay.

5. The method of claim 1, further comprising, when both of the first and second processor instructions can architecturally complete and at least one of the first and second processor instructions is a write instruction, adjusting a precharge clock of the memory to account for the evaluation delay.

6. The method of claim 1, further comprising, when both of the first and second processor instructions can architecturally complete and at least one of the first and second processor instructions is a memory access instruction:
sending a non-delayed clock signal not accounting for the evaluation delay to a read control of the memory;
sending a delayed clock signal accounting for the evaluation delay to the write control of the memory;
sending both the non-delayed clock signal and the delayed clock signal to a precharge clock multiplexer of the memory; and
selectively controlling the precharge clock multiplexer to send one of the non-delayed clock signal and the delayed clock signal as a precharge clock signal.

7. The method of claim 6, wherein the controlling of the precharge clock comprises:
inputting both the non-delayed clock and the delayed clock to a multiplexer; and
inputting the read enable signal into a select input of the multiplexer;
wherein the non-delayed clock is output by the multiplexer as the precharge clock when the read enable indicates a read instruction, and the delayed clock is output by the multiplexer as the precharge clock when the read enable does not indicate a read instruction.

8. The method of claim 1, wherein the evaluating step comprises determining whether data for the first and second processor instructions are located in a cache.

9. The method of claim 8, wherein the evaluating step further comprises determining whether the first processor instruction is a memory access instruction and determining whether the second processor instruction is a memory access instruction.

10. The method of claim 1, further comprising generating a write enable signal when the first processor instruction is a write instruction and either data for the second processor instruction is located in a cache or the second processor instruction is not a memory access instruction.
11. A memory access controller comprising:
a first slot for processing a first instruction;
a second slot for processing a second instruction;
system combinational logic generating signals indicating whether both the first and second instructions can architecturally complete; and
a delay circuit for adjusting a write control of a memory to account for a delay of the signals generated by the system combinational logic.

12. The memory access controller of claim 11, further comprising:
a first cache memory;
a first cache hit signal indicating whether data for the first instruction is stored in the first cache;
a second cache memory; and
a second cache hit signal indicating whether data for the second instruction is stored in the second cache;
the system combinational logic using both the first and second cache hit signals.

13. The memory access controller of claim 11, wherein the system combinational logic further comprises:
first slot combinational logic receiving the second cache hit signal and generating a store enable signal for the first instruction; and
second slot combinational logic receiving the first cache hit signal and generating a store enable signal for the second instruction.

14. The memory access controller of claim 13, wherein the first slot combinational logic further receives a first instruction store signal indicating whether the first instruction is a store instruction and a second instruction no-dependency signal indicating whether the second instruction is a memory access instruction, and the first slot combinational logic generates the store enable signal for the first instruction when the first instruction store signal indicates the first instruction is a store instruction and either the second cache hit signal indicates that the data for the second instruction is in the second cache or the second instruction no-dependency signal indicates the second instruction is not a memory access instruction; and
the second slot combinational logic further receives a second instruction store signal indicating whether the second instruction is a store instruction and a first instruction no-dependency signal indicating whether the first instruction is a memory access instruction, and the second slot combinational logic generates the store enable signal for the second instruction when the second instruction store signal indicates the second instruction is a store instruction and either the first cache hit signal indicates that the data for the first instruction is in the first cache or the first instruction no-dependency signal indicates the first instruction is not a memory access instruction.

15. The memory access controller of claim 13, further comprising:
a data array having load logic and store logic;
the load logic receiving the first cache hit signal and the second cache hit signal, wherein when one of the first and second cache hit signals indicates a location in the data array, the load logic generates a word line signal indicating the location in the data array; and
the store logic receiving the store enable signal for the first instruction and the store enable signal for the second instruction, wherein when one of the store enable signals for the first and second instructions indicates a location in the data array, the store logic generates a write chip select signal indicating the location in the data array.
16. The memory access controller of claim 11, wherein, when any of the first and second instructions is a write instruction, the system combinational logic generates a write enable signal when both the first and second instructions can architecturally complete.

17. The memory access controller of claim 16, wherein, when a write enable signal is generated, the delay circuit delays the write control of the memory by approximately the same amount of time as it takes for the system combinational logic to generate the write enable signal.

18. The memory access controller of claim 11, further comprising a write column select and a read column select for the memory, the delay circuit adjusting the write column select of the memory to account for the delay of the signals generated by the system combinational logic and not adjusting the read column select of the memory to account for the delay of the signals generated by the system combinational logic.

19. The memory access controller of claim 11, further comprising a multiplexer having a first input, a second input, an output and a select line, the first input being coupled to a non-delayed clock not delayed by the delay circuit, the second input being coupled to a delayed clock delayed by the delay circuit, the output generating a precharge clock, and the select line coupled to a read enable signal;
wherein the multiplexer passes the non-delayed clock to the output when the read enable indicates a read instruction and passes the delayed clock to the output when the read enable does not indicate a read instruction.

20. The memory access controller of claim 11 incorporated into a device selected from a group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, and a computer.

21. A memory access controller comprising:
a first means for processing a first instruction;
a second means for processing a second instruction;
computational means for generating signals indicating whether both the first and second instructions can architecturally complete; and
delay means for adjusting a write control of a memory to account for a delay of the signals generated by the computational means.
ARCHITECTURE AND METHOD FOR ELIMINATING STORE BUFFERS IN A DSP/PROCESSOR WITH MULTIPLE MEMORY ACCESSES

FIELD OF DISCLOSURE

[0001] The present disclosure relates generally to processors, and more particularly to an architecture and method for eliminating store buffers in a processor with multiple memory accesses.

BACKGROUND

[0002] The need for faster processing of data and data operations has been a driving force behind the improvements seen in the field of data processing systems. Improvements have led to the development of faster, smaller, and more complex processors and digital signal processors (DSPs), including those that implement parallel processing, pipelining and/or very long instruction word (VLIW) processing, as well as multiprocessor configurations and distributed memory systems.

[0003] Parallel processing can increase the overall speed of a processor by enabling it to execute multiple instructions at the same time. In some cases, to increase the number of instructions being processed and thus increase speed, the processor may be pipelined. Pipelining refers to providing separate stages in a processor where each stage performs one or more of the small steps necessary to execute an instruction. Parallel processing and pipelining can lead to architectural dependencies and timing issues as multiple instructions attempt to execute and access memory or other circuitry simultaneously.

[0004] Processors typically provide load and store instructions to access information located in the caches and/or main memory. A load instruction may include a memory address (in the instruction or an address register) and a target register. When the load instruction is executed, data stored at the memory address may be retrieved (e.g., from cache, main memory, or other storage means) and placed in the target register identified in the load instruction. Similarly, a store instruction may include a memory address and a source register. When the store instruction is executed, data from the source register may be written to the memory address identified in the store instruction.

[0005] Very long instruction word (VLIW) processors and DSPs execute a group of instructions belonging to the same packet. Each packet includes multiple slots. The processor starts processing the next packet when all of the instructions in the slots of the current packet complete execution. If the execution of any instruction in the packet is delayed, then none of the other instructions in the packet can complete. If the execution takes multiple cycles or stalls due to hazards, the architectural state is not updated until all instructions in the packet complete. The architectural state of a processor includes the states of its registers, caches, memory management unit (MMU), main memory, etc.

[0006] A VLIW packet may contain multiple memory access instructions, for example multiple load instructions, multiple store instructions, or a combination of load and store instructions. The data may be cached to improve performance. However, even if one of the instructions in the packet can complete, it must not do so until all of the other instructions in the packet can also complete. This produces cross-instruction or cross-slot dependencies for architectural updates within the VLIW packet. For example, if a packet contains a load instruction and a store instruction, there can be architectural and timing path dependencies between a cache hit event for the load instruction and a write enable event for the store instruction.
The write enable event would be delayed if the load instruction did not have a cache hit (data for the load instruction stored in the cache). Note that if a VLIW packet contains two store operations, the cross-slot architectural dependency affects the write enables of the store instructions in both slots.

[0007] These architectural dependencies and timing issues of multiple memory accesses can be resolved by different methods. One method is to temporarily store update data in a store buffer during a memory access conflict or cross-slot dependency, and to update the cache with the data from the store buffer after the memory conflict is resolved or after knowing the other slot(s) can complete. If the store buffer is sized appropriately, it can make it easier to handle memory bank conflicts and late pipeline cancellations, and provide some speed/frequency improvement. However, the separate store buffer requires additional area and introduces complexity to manage data dependencies (content addressable memory (CAM) structures), data buffering (depth) needs, age of data in the store buffer, and address ordering. Note that the area of the store buffer goes up with the number of stores supported in a VLIW packet, so the store buffer solution may not be cost efficient in terms of power, area, and complexity. Another method is to reduce the clock frequency/speed of the pipeline to allow dependency resolution prior to the memory stage and relax timing issues. However, this results in a performance loss that directly impacts the clock frequency of the whole processor, increases the load/read latency, and can make it harder to handle memory bank conflicts. Yet another method is to use separate read and write wordline clocks, where the wordline gets an early clock for load access and a late clock for store access. However, the separate read and write wordline clocks increase the complexity of memory array timing verification for reads and writes, and make it harder to handle memory bank conflicts.

[0008] It would be desirable to have an architecture and method for handling multiple memory accesses in a processor, including digital signal processors (DSPs), without a store buffer, that retains the frequency benefits of the pipeline, has little impact on processor speed when there are multiple memory operations, and avoids some of the other drawbacks of prior methods.

SUMMARY

[0009] A method for controlling system access to a memory is disclosed that includes receiving a first processor instruction and a second processor instruction, and evaluating whether the first and second processor instructions can architecturally complete. If at least one of the first and second processor instructions cannot architecturally complete, then both the first and second processor instructions are delayed. If both of the first and second processor instructions can architecturally complete and at least one of the first and second processor instructions is a write instruction, then a write control of the memory is adjusted to account for an evaluation delay.

[0010] The evaluation delay can be a sufficient time to evaluate whether the first and second processor instructions can architecturally complete. The evaluation delay can be accounted for in the write column select for the memory. When either of the first or second processor instructions is a read instruction, a read control of the memory does not account for the evaluation delay.
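A behavioral sketch of this evaluation in C (illustrative only, not the disclosed circuit; the structure and function names are assumptions) shows how a write enable depends on both slots being able to architecturally complete:

    #include <stdbool.h>

    /* Inputs describing each slot's instruction within one packet. */
    struct slot {
        bool is_store;    /* instruction is a store (write)           */
        bool is_mem_op;   /* instruction accesses memory at all       */
        bool cache_hit;   /* data for the instruction is in the cache */
    };

    /* A slot can architecturally complete if it is not a memory access
       instruction, or if its data is located in the cache. */
    static bool can_complete(const struct slot *s)
    {
        return !s->is_mem_op || s->cache_hit;
    }

    /* The write enable is asserted only when BOTH slots can complete
       and at least one of them is a store; computing this cross-slot
       check is what costs the evaluation delay that the write control
       (but not the read control) must absorb. */
    static bool write_enable(const struct slot *s0, const struct slot *s1)
    {
        return can_complete(s0) && can_complete(s1) &&
               (s0->is_store || s1->is_store);
    }

The time taken to compute write_enable() corresponds to the evaluation delay described above.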
When both of the first and second processor instructions can architecturally complete and at least one of the first and second processor instructions is a write instruction, a precharge clock of the memory can be adjusted to account for the evaluation delay.

[0011] When both of the first and second processor instructions can architecturally complete and at least one of the first and second processor instructions is a memory access instruction, the method can also include sending a non-delayed clock signal not accounting for the evaluation delay to a read control of the memory, sending a delayed clock signal accounting for the evaluation delay to the write control of the memory, sending both the non-delayed clock signal and the delayed clock signal to a precharge clock multiplexer of the memory, and selectively controlling the precharge clock multiplexer to send one of the non-delayed clock signal and the delayed clock signal as a precharge clock signal.

[0012] The controlling of the precharge clock can include inputting both the non-delayed clock and the delayed clock to a multiplexer, and inputting the read enable signal into a select input of the multiplexer, so that the multiplexer outputs the non-delayed clock as the precharge clock when the read enable indicates a read instruction, and the multiplexer outputs the delayed clock as the precharge clock when the read enable does not indicate a read instruction.

[0013] The evaluating step can include determining whether data for the first and second processor instructions are located in a cache. The evaluating step can also include determining whether the first processor instruction is a memory access instruction and determining whether the second processor instruction is a memory access instruction. The method can also include generating a write enable signal when the first processor instruction is a write instruction and either data for the second processor instruction is located in a cache or the second processor instruction is not a memory access instruction.

[0014] A memory access controller is disclosed that includes a first slot for processing a first instruction, a second slot for processing a second instruction, system combinational logic generating signals indicating whether both the first and second instructions can architecturally complete, and a delay circuit for adjusting a write control of a memory to account for a delay of the signals generated by the system combinational logic. The memory access controller can be incorporated into a device selected from a group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, and a computer.

[0015] The memory access controller can also include first and second cache memories and first and second cache hit signals. The first cache hit signal indicates whether data for the first instruction is stored in the first cache, and the second cache hit signal indicates whether data for the second instruction is stored in the second cache. The system combinational logic uses both the first and second cache hit signals.

[0016] The system combinational logic can include first and second slot combinational logic. The first slot combinational logic receives the second cache hit signal and generates a store enable signal for the first instruction.
The second slot combinational logic receives the first cache hit signal and generates a store enable signal for the second instruction. The first slot combinational logic can also receive a first instruction store signal indicating whether the first instruction is a store instruction and a second instruction no-dependency signal indicating whether the second instruction is a memory access instruction. The first slot combinational logic can generate the store enable signal for the first instruction when the first instruction store signal indicates that the first instruction is a store instruction and either the second cache hit signal indicates that the data for the second instruction is in the second cache or the second instruction no-dependency signal indicates that the second instruction is not a memory access instruction. The second slot combinational logic can also receive a second instruction store signal indicating whether the second instruction is a store instruction and a first instruction no-dependency signal indicating whether the first instruction is a memory access instruction. The second slot combinational logic can generate the store enable signal for the second instruction when the second instruction store signal indicates that the second instruction is a store instruction and either the first cache hit signal indicates that the data for the first instruction is in the first cache or the first instruction no-dependency signal indicates that the first instruction is not a memory access instruction. [0017] The memory access controller can also include a data array having load logic and store logic. The load logic can receive the first cache hit signal and the second cache hit signal, and when one of the first and second cache hit signals indicates a location in the data array, the load logic can generate a word line signal indicating the location in the data array. The store logic can receive the store enable signal for the first instruction and the store enable signal for the second instruction, and when one of the store enable signals for the first and second instructions indicates a location in the data array, the store logic can generate a write chip select signal indicating the location in the data array. [0018] When any of the first and second instructions is a write instruction, the system combinational logic can generate a write enable signal when both the first and second instructions can architecturally complete. When a write enable signal is generated, the delay circuit can delay the write control of the memory by approximately the same amount of time as it takes for the system combinational logic to generate the write enable signal. [0019] The memory access controller can also include a write column select and a read column select for the memory. The delay circuit can adjust the write column select of the memory to account for the delay of the signals generated by the system combinational logic and does not have to adjust the read column select of the memory to account for the delay of the signals generated by the system combinational logic. The memory access controller can also include a multiplexer having a first input, a second input, an output and a select line. 
The first input can be coupled to a non-delayed clock not delayed by the delay circuit, the second input can be coupled to a delayed clock delayed by the delay circuit, the output can generate a precharge clock, and the select line can be coupled to a read enable signal, so that the multiplexer passes the non-delayed clock to the output when the read enable indicates a read instruction and passes the delayed clock to the output when the read enable does not indicate a read instruction. [0020] A memory access controller is disclosed that includes a first means for processing a first instruction, a second means for processing a second instruction, computational means for generating signals indicating whether both the first and second instructions can architecturally complete, and delay means for adjusting a write control of a memory to account for a delay of the signals generated by the computational means. [0021] For a more complete understanding of the present disclosure, reference is now made to the following detailed description and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] Fig. 1 is a circuit diagram showing an exemplary multiple memory access system with cross-slot dependency circuitry; [0023] Fig. 2 is a circuit diagram showing exemplary control logic within a data bank with delays for write control and precharge clock; [0024] Fig. 3 is a flow diagram showing an exemplary method for controlling a multiple memory access system; and [0025] Fig. 4 is a block diagram showing an exemplary wireless communication system in which an embodiment of an architecture and method to eliminate store buffers in a processor with multiple memory accesses may be advantageously employed.
DETAILED DESCRIPTION
[0026] The present invention describes an architecture and method for retaining the frequency benefits of the pipeline without the need for a store buffer and without affecting the processor operational speed when there are multiple memory operations. [0027] Figure 1 shows a circuit diagram of an exemplary architecture for a system 100 that can handle the architectural dependency of multiple memory accesses without the use of a store buffer. The system 100 is a Very Long Instruction Word (VLIW) system which exemplifies the multiple memory access issues. The system 100 includes tag array section 102, cross-slot dependency circuitry 104 and data array section 106. For clarity, Figure 1 shows two tag arrays for two slots in the tag array section 102 and two data arrays in the data array section 106; however, the system can include any number M of tag arrays for M slots in the tag array section 102 and N data arrays in the data array section 106. [0028] The tag array section 102 includes tags for a slot s0 and a slot s1. If the slot s0 holds a memory access instruction, the system checks if the data is stored in a four-way s0 cache 112. The four-way s0 cache 112 is only an example of a type of cache that may be used. For example, the cache 112 could be a direct mapped cache or have a number of ways X, where X is 2 or more. If the data is in the s0 cache 112, a cache hit occurs and one of the elements of an s0 hit vector 114 will indicate the location of the data in the s0 cache 112. If none of the elements of the s0 hit vector 114 indicates the location of the data, then the data is not in the s0 cache 112 and a cache miss occurs. The elements of the s0 hit vector 114 are input to an OR reduction gate 116 which outputs an s0 hit signal 118.
If any of the elements of the s0 hit vector 114 indicates a cache hit, then the s0 hit signal 118 will indicate a cache hit for the slot s0. If none of the elements of the s0 hit vector 114 indicates a cache hit, then the s0 hit signal 118 will indicate a cache miss for the slot s0. If the slot s0 holds a memory access instruction and there is not a hit in the s0 cache 112, then the system retrieves the necessary data from memory and puts it into the s0 cache 112, at which point the s0 hit vector 114 and the s0 hit signal 118 will indicate a cache hit for the slot s0. [0029] The tag array section 102 also includes a tag for the slot s1. If the slot s1 holds a memory access instruction, the system checks if the data is stored in a four-way s1 cache 142. The s1 cache 142 can be of any desired type and size. If the data is in the s1 cache 142, a cache hit occurs and one of the elements of an s1 hit vector 144 will indicate the location of the data in the s1 cache 142. If none of the elements of the s1 hit vector 144 indicates the location of the data, then the data is not in the s1 cache 142 and a cache miss occurs. The elements of the s1 hit vector 144 are input to an OR reduction gate 146 which outputs an s1 hit signal 148. If any of the elements of the s1 hit vector 144 indicates a cache hit, then the s1 hit signal 148 will indicate a cache hit for the slot s1. If none of the elements of the s1 hit vector 144 indicates a cache hit, then the s1 hit signal 148 will indicate a cache miss for the slot s1. If the slot s1 holds a memory access instruction and there is not a hit in the s1 cache 142, then the system retrieves the necessary data from memory and puts it into the s1 cache 142, at which point the s1 hit vector 144 and the s1 hit signal 148 will indicate a cache hit for the slot s1. [0030] S1 combinational logic 120 determines cross-slot dependencies and whether an s1 store enable signal 126 should be sent to data arrays 130 and 160. The s1 store enable signal 126 indicates that the slot s1 holds a store instruction and the cross-slot dependencies have been resolved so that the store instruction in the slot s1 can execute and store data. The combinational logic 120 receives several inputs including the s0 hit signal 118, an s1 store instruction signal 122, and an s0 no-dependency signal 124. The s0 hit signal 118 indicates whether the instruction in the slot s0 has data available in the s0 cache 112. When there are more than two slots, a hit signal for each of the slots will be input to the combinational logic, except for the hit signal of the slot for which the store enable signal is being determined. The s1 store instruction signal 122 indicates whether the slot s1 holds a store instruction. The s0 no-dependency signal 124 indicates when the instruction in the slot s0 is not a memory access instruction (neither a store nor a load), and thus no memory access dependency exists with the slot s0. When there are more than two slots, a no-dependency signal for each of the slots will be input to the combinational logic, except for the no-dependency signal of the slot for which the store enable signal is being determined.
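Paragraphs [0031] and [0033] below state the store enable conditions in prose. As a simplified illustration of the same cross-slot qualification, the following behavioral sketch models the decision in software for a packet with any number of slots; the function and variable names are invented for this sketch, and a boolean software model is an assumption, not the gate-level circuit of Figure 1:

```python
# Behavioral sketch of the cross-slot store enable qualification
# performed by combinational logic 120/150 (simplified model).

def store_enable(slot_is_store: bool,
                 other_hit_signals: list,
                 other_no_dependency_signals: list) -> bool:
    """A slot's store enable asserts only when the slot holds a store
    instruction and, for each other slot, either the no-dependency
    signal indicates no memory access dependency or the hit signal
    indicates a cache hit."""
    others_resolved = all(no_dep or hit
                          for no_dep, hit in zip(other_no_dependency_signals,
                                                 other_hit_signals))
    return slot_is_store and others_resolved

# Example: slot s1 holds a store and slot s0 is a load that hit in the
# s0 cache, so the s1 store enable (signal 126) asserts.
s0_hit = True             # s0 hit signal 118
s0_no_dependency = False  # s0 no-dependency signal 124
s1_is_store = True        # s1 store instruction signal 122
assert store_enable(s1_is_store, [s0_hit], [s0_no_dependency])
```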
[0031] The s1 combinational logic 120 sends the s1 store enable signal 126 when the s1 store instruction signal 122 indicates that the slot s1 holds a store instruction, and for each of the other slots either (a) the no-dependency signal, for example the s0 no-dependency signal 124, indicates that the slot s1 does not need to wait for that slot, or (b) the hit signal, for example the s0 hit signal 118, indicates that there was a cache hit for that slot. [0032] S0 combinational logic 150 determines cross-slot dependencies and whether an s0 store enable signal 156 should be sent to the data arrays 130 and 160. The s0 store enable signal 156 indicates that the slot s0 holds a store instruction and the cross-slot dependencies have been resolved so that the store instruction in the slot s0 can execute and store data. The combinational logic 150 receives several inputs including the s1 hit signal 148, an s0 store instruction signal 152, and an s1 no-dependency signal 154. The s1 hit signal 148 indicates whether the instruction in the slot s1 has data available in the s1 cache 142. The s0 store instruction signal 152 indicates whether the slot s0 holds a store instruction. The s1 no-dependency signal 154 indicates when the instruction in the slot s1 is not a memory access instruction (neither a store nor a load), and thus no memory access dependency exists with the slot s1. [0033] The s0 combinational logic 150 sends the s0 store enable signal 156 when the s0 store instruction signal 152 indicates that the slot s0 holds a store instruction, and for each of the other slots either (a) the no-dependency signal, for example the s1 no-dependency signal 154, indicates that the slot s0 does not need to wait for that slot, or (b) the hit signal, for example the s1 hit signal 148, indicates that there was a cache hit for that slot. [0034] The s0 hit vector 114, the s0 store enable signal 156, the s1 hit vector 144, and the s1 store enable signal 126 are sent to each of the data arrays 130, 160. A system that processes a packet with M slots and uses N data arrays would send a hit vector and store enable signal for each of the M slots to each of the N data arrays. [0035] The data array 130 includes load multiplexer 132 and store multiplexer 136. The load multiplexer 132 receives the hit vectors for each of the slots; in this case the s0 hit vector 114 and the s1 hit vector 144. If any of the hit vectors indicates that the location for a load instruction is in the data array 130, then the load multiplexer 132 activates a word line 134 for the data array 130. The store multiplexer 136 receives the store enable signals for each of the slots; in this case the s0 store enable 156 and the s1 store enable 126. If any of the store enable signals indicates that the location for a store instruction is in the data array 130, then the store multiplexer 136 activates a write chip select signal 138 for the data array 130. [0036] The data array 160 includes load multiplexer 162 and store multiplexer 166. The load multiplexer 162 receives the hit vectors for each of the slots; in this case the s0 hit vector 114 and the s1 hit vector 144. If any of the hit vectors indicates that the location for a load instruction is in the data array 160, then the load multiplexer 162 activates a word line 164 for the data array 160. The store multiplexer 166 receives the store enable signals for each of the slots; in this case the s0 store enable 156 and the s1 store enable 126.
If any of the store enable signals indicates that the location for a store instruction is in the data array 160, then the store multiplexer 166 activates a write chip select signal 168 for the data array 160. [0037] In the embodiment shown in Figure 1, each of the data banks 130, 160 is single ported; thus only one slot can do a load or a store to a particular data bank at one time. The select signals for the load and store multiplexers of each of the data banks can be used to determine the order in which the slots access the data banks. The select signals for the load and store multiplexers 132, 136 of the data array 130, and for the load and store multiplexers 162, 166 of the data array 160, can be determined via bits from the slot address and the slot read or write access enable to the data bank. If both slots want to do a load or a store to the same data array, then one of the slots can access the data array on a replay. If the slots want to do a load or a store to different data arrays, then the accesses to the different data arrays can occur in parallel. [0038] The data banks can be organized so they are addressed using set bits that are a slice of the memory access address. The banks can be selected by set bits called bank selector bits. The wordline of a bank can be addressed through a hit way vector and some additional set bits, and the column can be addressed through some remaining set bits. This organization allows for low power operation of the banks and also allows the store enable to be a signal controlled independently from the wordline control. This organization gives a load or store instruction the ability to cancel itself through its hit way vector (for example, hit vector 114 or 144) while allowing the store enable (for example, store enable signal 126 or 156) to be controlled by another parameter, for example the hit signal from another slot. However, for single ported memory banks, if multiple memory access instructions target the same data bank, then the memory access instructions will be selected to proceed one at a time. [0039] Other factors related to memory management unit (MMU) attributes or some form of cache allocation scheme can also be added to the store enable. Since the store enable is controlled architecturally independently of the wordline control, the store enable can arrive late, which allows more complete qualification to be added to the store enable; for example, cross slot dependencies of hit signals to achieve an atomic update of the processor architectural state. The hit signal (for example, hit signal 118 or 148) is a late arriving signal since it OR-reduces the hit vector to a one-bit hit signal. This hit signal is further qualified by opcode decode bits of the other slots in the packet and then AND-ed with the opcode decode bits of the store operation in the current slot to generate the store enable signal for the current slot (for example, store enable signal 126 or 156). However, there is a balance in that the store enable signal cannot be so late that the write is not able to complete during the cycle. A delay circuit can maintain the balance at a very low cost of power, area and complexity. This can be done by overlapping the cross slot logic computations for the write enable signal with a portion of the bank access time for the wordline signal. [0040] The processing described above to generate the store enable signals 126, 156 using the OR reduction gates 116, 146 and the combinational logic 120, 150 incurs a processing delay.
Figure 2 shows exemplary circuitry to allow the data banks to absorb this processing delay. Inputs to the circuitry shown in Figure 2 include a clock signal 220, wordline enable signals, column address lines, a read enable signal, and a write enable signal. The wordline enable signals are the output of the row decoder circuit, whose input can be the hit vector and some portion of the set bits. The column address can be formed from a portion of the set bits not used in the row decoder. The write enable signal can be one of the two store enable signals 126, 156, selected based on the order of the memory access that is allowed to proceed within the data bank. [0041] The wordline enable signals, indicating the location where data is to be read from or written to, are input to n AND gates along with the clock signal 220. The clock signal 220 activates the n AND gates and the location is passed to n word lines. Two AND gates 202, 204 of the n AND gates for the n word lines are shown in Figure 2. The word lines 134, 164 are examples of the n word lines. The word lines 134, 164 can also be used along with set bits to address a larger row decoder. [0042] The clock signal 220, read enable and column address signals are input to a NAND gate 206 to generate a read column select signal. The clock signal 220 is also input to a delay circuit 210 to generate a delayed clock 222 used for the write column select signal. The delay circuit 210 accounts for the circuitry delay in generating the store enable signals and relaxes the write enable setup constraint. The delayed clock 222, write enable and column address signals are input to an AND gate 208 to generate a write column select signal. The write chip select signals 138, 168 are examples of write column select signals. For the embodiment shown in Figure 1, the delay circuit 210 accounts for the delay of the OR reduction gates 116, 146, the combinational logic 120, 150 and other circuitry in generating the store enable signals 126, 156. The delay circuit may introduce additional delay in order to provide operating margin. The delay can be tailored to the implementation for generating the write enable signals. If the delay is too long, then unnecessary power may be used; and if the delay is too short, then the write may fail. [0043] In this embodiment, the wordline and read column select signals are not delayed but the write column select signal is delayed. This is because the write column select must wait for the store/write enable signals 126, 156 to be generated and input to the AND gate 208. The read column select signal is not dependent on the store enable signals and, therefore, does not need to be delayed to wait for generation of the store enable signals; delaying the read column select signal would only add to the read latency of the system. [0044] A multiplexer 212 is used to control the precharge clock to ensure that the system waits for the bit lines to recharge after a preceding memory access operation. The non-delayed clock signal 220 and the delayed clock signal 222 are input to the multiplexer 212 and the read enable signal is input to the select line of the multiplexer 212. When the read enable signal is active, the non-delayed clock signal 220 is output as the precharge clock, and when the read enable signal is not active, the delayed clock signal 222 is output as the precharge clock. The delayed write and precharge clocks avoid a write failure due to a mismatch between the wordline rise and the write column select.
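The clocking relationships of paragraphs [0041] through [0044] can be summarized in a cycle-level sketch. This is a simplified software model for illustration only (the function name and boolean signal model are assumptions made for the sketch; real behavior depends on analog timing that a boolean model cannot capture):

```python
# Cycle-level sketch of the Figure 2 control signals: the read path
# uses the non-delayed clock, the write path uses the delayed clock,
# and the precharge clock is selected by the read enable signal.

def bank_controls(clk: bool, delayed_clk: bool, read_enable: bool,
                  write_enable: bool, column_address: bool) -> dict:
    return {
        # NAND gate 206: read column select from the non-delayed clock.
        "read_column_select": not (clk and read_enable and column_address),
        # AND gate 208: write column select from the delayed clock 222,
        # giving the store enable signals time to arrive.
        "write_column_select": delayed_clk and write_enable and column_address,
        # Multiplexer 212: non-delayed clock for reads, delayed for writes.
        "precharge_clock": clk if read_enable else delayed_clk,
    }

# During a write, the delayed clock gates both the write column select
# and the precharge clock, matching the wordline rise to the write.
controls = bank_controls(clk=True, delayed_clk=True,
                         read_enable=False, write_enable=True,
                         column_address=True)
assert controls["write_column_select"] and controls["precharge_clock"]
```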
[0045] Figure 3 shows a flow diagram for an exemplary method of controlling system access to a memory. At block 302 the system receives first and second processor instructions. There can be more than two processor instructions received at the same time, and one of skill in the art will understand how the method can be expanded to handle more than two processor instructions. [0046] At block 304, the system evaluates whether the first processor instruction can architecturally complete and, at block 306, the system evaluates whether the second processor instruction can architecturally complete. The flow diagram shows blocks 304 and 306 occurring sequentially, but these evaluations can occur in parallel, by a circuit such as the one shown in Figure 1, to reduce the required evaluation time. If the evaluations in blocks 304 and 306 determine that one of the instructions cannot architecturally complete, then at block 308 both instructions are delayed and execution does not continue until both instructions can architecturally complete. When both the first and second instructions can architecturally complete, control is transferred to block 310. [0047] At block 310, the system determines whether either of the first and second processor instructions is a write/store instruction. If either of the first and second processor instructions is a write/store instruction, then control is transferred to block 312. Otherwise, control is transferred to block 314 where the first and second processor instructions are executed. [0048] At block 312, the evaluation delay in generating the write enable signals is accounted for. In the exemplary embodiment of Figure 1, this includes the circuit delay in generating the store enable signals 126 and 156. The exemplary embodiment of Figure 2 shows this evaluation delay as the delay circuit 210. After accounting for the delay in block 312, control is transferred to block 314 where the first and second processor instructions are executed. [0049] Figure 4 shows an exemplary wireless communication system 400 in which an embodiment of an architecture and method to eliminate store buffers in a processor with multiple memory accesses may be advantageously employed. For purposes of illustration, Figure 4 shows three remote units 420, 430, and 450 and two base stations 440. It should be recognized that typical wireless communication systems may have many more remote units and base stations. Any of the remote units 420, 430, and 450 may include the architecture and method to eliminate store buffers in a processor with multiple memory accesses as disclosed herein. Figure 4 shows forward link signals 480 from the base stations 440 to the remote units 420, 430, and 450 and reverse link signals 390 from the remote units 420, 430, and 450 to the base stations 440. [0050] In Figure 4, remote unit 420 is shown as a mobile telephone, remote unit 430 is shown as a portable computer, and remote unit 450 is shown as a fixed location remote unit in a wireless local loop system. For example, the remote units may be cell phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, or fixed location data units such as meter reading equipment. Although Figure 4 illustrates certain exemplary remote units that may include the architectures and methods to eliminate store buffers in a processor with multiple memory accesses as disclosed herein, the architectures and methods as disclosed herein are not limited to these exemplary illustrated units.
Embodiments may be suitably employed in any electronic device in which processors with multiple memory accesses are desired. [0051] While exemplary embodiments incorporating the principles of the present invention have been disclosed hereinabove, the present invention is not limited to the disclosed embodiments. Instead, this application is intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains and which fall within the limits of the appended claims.
A method is disclosed for redeploying an FPGA that has been restricted for use with a first design. The FPGA accepts only those configuration bitstreams whose CRC checksums match a value stored on the FPGA. The restricted FPGA is used with a second configuration bitstream for a second design by altering the second configuration bitstream so that it generates a CRC checksum that matches the value stored on the FPGA. The first checksum is derived by applying a CRC hash function to the first configuration bitstream. The second configuration bitstream is altered so that the second checksum generated when the CRC hash function is applied to the altered second configuration bitstream is identical to the first checksum. Altering the second configuration bitstream can result in an altered second configuration bitstream that is either longer than or the same length as the second configuration bitstream.
What is claimed is:1. A method of altering a bit stream for configuring a programmable logic device (PLD) comprising:storing a first signature derived by applying an algorithm to a first bitstream;receiving a second bitstream;wherein the first and second bitstreams implement different functionality on the PLD; andaltering the second bitstream to generate an altered second bitstream;wherein altering the second bitstream does not change the functionality of the second bitstream and application of the algorithm to the altered second bitstream results in a second signature that is equal to the first signature.2. The method of claim 1, wherein the algorithm is a cyclic redundancy check algorithm.3. The method of claim 1, wherein the first signature is a checksum remainder resulting from dividing the first bitstream by a generator polynomial.4. The method of claim 1, wherein the altering comprises:appending a number of digital zero digits to an end of the second bitstream to generate an extended second bitstream;dividing the extended second bitstream by a generator polynomial to generate an intermediate remainder; andadding a forcing value to the extended second bitstream, wherein the forcing value equals the intermediate remainder plus the first signature.5. The method of claim 4, wherein the first signature is generated by dividing the first bitstream by the generator polynomial.6. The method of claim 4, wherein the dividing is performed using modulo-2 division and wherein the adding a forcing value is performed using modulo-2 addition.7. The method of claim 1, wherein the altering comprises:dividing the second bitstream by a generator polynomial to yield a remainder;comparing a digital value in an Nth bit of the remainder to a digital value in an Nth bit of the first signature; andadding a logic one to each Nth bit of the second bitstream for which the digital value in the Nth bit of the remainder differs from the value in the Nth bit of the first signature.8. The method of claim 1, further comprising:supplying the altered second bitstream to the PLD;applying the algorithm to the altered second bitstream with circuitry on the PLD and outputting the second signature;comparing the second signature to the first signature, wherein the first signature is stored on the PLD; andenabling configuration of the PLD with the altered second bitstream in response to the second signature being equal to the stored first signature.9. A computer-readable medium having computer-executable instructions for performing steps for altering a bitstream for configuring a programmable logic device (PLD) comprising:storing a first signature derived by applying an algorithm to a first bitstream;receiving a second bitstream;wherein the first and second bitstreams implement different functionality on the PLD; andaltering the second bitstream to generate an altered second bitstream;wherein altering the second bitstream does not change the functionality of the second bitstream and application of the algorithm to the altered second bitstream results in a second signature that is equal to the first signature.10. 
The computer-readable medium of claim 9 having further computer-executable instructions for performing the steps of:supplying the altered second bitstream to the PLD;applying the algorithm to the altered second bitstream with circuitry on the PLD and outputting the second signature;comparing the second signature to the first signature, wherein the first signature is stored on the PLD; andenabling configuration of the PLD with the altered second bitstream in response to the second signature being equal to the stored first signature.
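As an informal illustration of the altering step recited in claim 7 (the fixed-length variant), the following sketch models the computation in software. It assumes the signature is the plain modulo-2 remainder of the bitstream divided by the generator polynomial, that the remainder bits align with the trailing (least significant) bits of the bitstream, and that those trailing bits carry no essential configuration data, as the description below notes for this embodiment; the generator and target signature values are hypothetical examples, not values from the disclosure:

```python
# Sketch of the fixed-length altering method of claim 7: XOR the
# difference between the bitstream's CRC remainder and the target
# signature into the trailing bits of the bitstream.

GENERATOR = 0b1_0001_0000_0010_0001  # x^16 + x^12 + x^5 + 1 (X25)

def crc_remainder(bits: int, length: int, generator: int = GENERATOR) -> int:
    """Modulo-2 (carry-less) remainder of a length-bit message."""
    g_deg = generator.bit_length() - 1
    for i in range(length - 1, g_deg - 1, -1):
        if bits >> i & 1:
            bits ^= generator << (i - g_deg)
    return bits

def force_fixed_length(bitstream: int, length: int, first_signature: int) -> int:
    remainder = crc_remainder(bitstream, length)
    # Flip each trailing bit where the remainder differs from the target;
    # by linearity of modulo-2 division, the new remainder is the target.
    return bitstream ^ (remainder ^ first_signature)

# Hypothetical example: force a 32-bit bitstream to a stored signature.
second = 0b1001_1001_1001_1001_1001_1001_1001_1001
altered = force_fixed_length(second, 32, first_signature=0x1D0F)
assert crc_remainder(altered, 32) == 0x1D0F
```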
FIELD OF THE INVENTION
The present invention relates to programmable logic devices, and more particularly, to methods and circuits for enabling PLD manufacturers to dedicate PLDs for use with specified designs.
BACKGROUND OF THE INVENTION
Programmable logic devices (PLDs), such as field-programmable gate arrays (FPGAs), are user-programmable integrated circuits that can be programmed to implement user-defined logic circuits. In a typical FPGA architecture, an array of configurable logic blocks (CLBs) and a programmable interconnect structure are surrounded by a ring of programmable input/output blocks (IOBs). The programmable interconnect structure comprises interconnects and configuration memory cells. Each of the CLBs and the IOBs also includes configuration memory cells. The content of the configuration memory cells determines how the CLBs, the IOBs and the programmable interconnect structure are configured. Additional resources, such as multipliers, block random access memory (BRAM) and microprocessors are also included on an FPGA for use in user-defined circuits. An exemplary FPGA architecture is described by Young in U.S. Pat. No. 5,933,023, entitled "FPGA Architecture Having RAM Blocks with Programmable Word Length and Width and Dedicated Address and Data Lines," which is incorporated herein by reference.
To realize a user-defined circuit, a configuration bitstream is loaded into the configuration memory cells such that the CLBs and IOBs are configured to implement particular circuit components used in the user-defined circuit. A configuration bitstream is also loaded into the configuration memory cells of the programmable interconnect structure such that the programmable interconnect structure connects the various configured CLBs and IOBs in a desired manner to realize the user-defined circuit.
PLDs are not design-specific, but instead afford customers (e.g., circuit designers) the ability to instantiate an almost unlimited number of circuit variations. However, in some cases, it can be desirable to restrict or dedicate a PLD to a particular design, and prevent the use of other designs on that PLD.
For instance, not knowing in advance the purpose to which a given PLD will be dedicated places a heavy burden on the quality and reliability of the PLD because PLD manufacturers must verify the functionality of all advertised features. To avoid disappointing customers, PLD manufacturers discard PLDs that include even relatively minor defects.
Furthermore, PLDs are growing ever larger as manufacturers attempt to satisfy customer demand for devices capable of performing ever more complex tasks. The probability that a particular PLD will contain a defect increases as the die size of the PLD increases. Therefore, process yield decreases with increasing PLD size. PLD defects can be categorized in two general areas: gross defects that render an entire PLD useless or unreliable, and localized defects that affect a relatively small portion of a PLD. It has been found that, for large dice, nearly two-thirds of the dice on a given wafer may be discarded because of localized defects. Considering the costs associated with manufacturing large integrated circuits, discarding a large percentage of PLD dice significantly increases the effective cost per unit of the remaining PLDs that are sold.
This yield problem can be mitigated using methods that allow PLDs with limited defects to be sold only to selected customers who will not be disappointed with the specific localized defects in such PLDs.
In one such method, PLDs are tested to determine whether they are suitable to implement selected customer designs. As each individual PLD can have different manufacturing defects, a PLD that is found to be unsuitable for one design can, nevertheless, be tested for suitability for additional designs. These test methods typically employ test circuits derived from a customer design and instantiated on the individual PLD of interest to verify resources required for the design. The test circuits thus allow PLD manufacturers to verify the suitability of an individual PLD for a specific design.
A PLD manufacturer may want to prevent customers from using such a specially tested PLD for designs other than the tested design. In addition, if purchasers of such PLDs resell them on the "gray market" without indicating that the PLDs are limited to a specific design, the PLD manufacturer's reputation for quality can be harmed. U.S. patent application Ser. No. 10/199,535 entitled "Methods and Circuits for Dedicating a Programmable Logic Device for Use with Specific Designs," by Stephen M. Trimberger, which is incorporated herein by reference, discloses a method for applying a digital signature to configuration bitstreams of tested PLDs to restrict the use of the PLDs to the tested designs. The digital signature of a tested design is burned into the tested PLD as an unchangeable value, for example, by programming the digital signature into an antifuse-based one-time-programmable (OTP) memory.
In some cases, it may be desirable to redeploy PLDs that are restricted to accepting a configuration bitstream whose digital signature matches the preset, unchangeable value. For example, a PLD manufacturer may want to repurchase surplus restricted PLDs from one customer and resell those restricted PLDs to a new customer together with a new configuration bitstream for a new design. In another example, a customer that has purchased a PLD dedicated to a particular design may want to use that PLD with an altered or different design. In such instances, in order for the PLD to function with the new configuration bitstream, the digital signature of the new configuration bitstream must match the unchangeable value stored on the PLD. It is, therefore, desirable to force the digital signature of a configuration bitstream to a desired value.
SUMMARY
A method is disclosed for forcing a PLD restricted for use with a first circuit design to accept a second circuit design. The PLD is restricted for use with the first design by storing a signature value derived from the bitstream specifying the first design. The PLD performs a hash function on any received configuration bitstream and compares the resulting hash value with the signature value; the PLD only works with those configuration bitstreams having a hash value that matches the stored signature value.
The PLD rejects a bitstream specifying a second design where applying the hash function to that bitstream does not yield the correct signature value. The second bitstream is therefore modified in accordance with one embodiment to yield the same hash result as the first bitstream, causing the PLD to accept the second circuit.
In accordance with one embodiment, the hash function is applied to the first bitstream to derive the signature value stored in the PLD. Next, the same hash function is applied to the second bitstream to produce an intermediate remainder, which is then used to alter the second configuration bitstream to produce the signature value in response to the hash function.
The alterations to the second bitstream are selected so that they do not alter the portion of the second bitstream specifying the second design. The restricted PLD can then be configured using the altered second bitstream to instantiate the second circuit.
In one embodiment, altering the second configuration bitstream results in an altered second configuration bitstream that is longer than the second configuration bitstream. In another embodiment, where the PLD is also restricted for use with a configuration bitstream of a specific length, altering the second configuration bitstream results in an altered second configuration bitstream that is the same length as the second configuration bitstream.
A circuit is disclosed for altering a configuration bitstream to generate an altered configuration bitstream. The result of applying a hash function to the altered configuration bitstream is a digital signature that is identical to a signature value stored in non-volatile memory on a restricted PLD. The circuit contains a register with exclusive OR gates at positions in the register that correspond to coefficients of a generator polynomial used by the hash function.
Additional novel aspects and embodiments are described in the detailed description below. The allowed claims, and not this summary, define the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention.
FIG. 1 is a simplified block diagram of a digital signature forcing circuit for use with an FPGA.
FIG. 2 is a more detailed block diagram of the digital signature forcing circuit and the FPGA.
FIG. 3 is a flowchart of the steps of operation of one digital signature forcing circuit.
FIGS. 4A-D are tables showing sample calculations performed in the operation of the digital signature forcing circuit of FIG. 2.
FIG. 5 is a simplified circuit diagram of a portion of the circuitry of the digital signature forcing circuit.
FIG. 6 is a table showing how the circuitry of FIG. 5 obtains the result of the calculation illustrated in FIG. 4B.
DETAILED DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts an FPGA 21 that can be dedicated for use with specific, authorized circuit designs. FPGA 21 performs a hash function on each configuration bitstream and rejects those that fail to produce a hash result matching a stored signature value. A digital signature forcing circuit 10, in accordance with one embodiment, alters unauthorized configuration bitstreams (i.e., those having a hash result that does not correspond to the signature stored in FPGA 21) to produce the stored signature value in response to the hash function. The alterations do not affect the functionality specified by unauthorized bitstreams, so FPGA 21 can be configured to instantiate designs that would otherwise be rejected.
FPGA 21 contains a configuration logic block 22, configuration memory 23, and programmable interconnect and configurable logic (programmable logic) 24. Configuration memory 23 controls the function of programmable logic 24 via a distributed set of logic control lines 25. Configuration logic block 22 receives an expression of a first customer design as a first configuration bitstream that includes instructions and data. Configuration logic block 22 processes the data according to the instructions to load configuration memory 23, via a bus 26, as appropriate to instantiate the desired logic design in programmable logic 24.
Configuration logic block 22 can receive the first configuration bitstream through a test port 27, but more commonly receives the configuration bitstream from some external data source 28 (e.g., a PROM) through a dedicated configuration access port 29. Configuration logic block 22 interfaces with test port 27 through a joint test action group (JTAG) logic block 30, whose resources are especially intended for testing the board on which FPGA 21 will be placed. JTAG logic block 30 facilitates debugging of a design at the board level.
Anyone with access to the first configuration bitstream can easily copy the corresponding design. FPGA 21 is therefore equipped with a decryptor 31 and associated non-volatile memory 32, which together enable customers to configure FPGA 21 using encrypted configuration bitstreams. Configuration logic block 22 conveys the encrypted first configuration bitstream to decryptor 31 via a bus 33. Decryptor 31 then accesses a decryption key stored in non-volatile memory 32 over a memory access bus 34, employs the key to decrypt the first configuration bitstream, and then returns the decrypted first configuration bitstream to configuration logic block 22 via bus 33. For a more detailed treatment of configuration bitstream decryption and other configuration-data security issues, see U.S. patent application Ser. No. 10/112,838 entitled "Methods and Circuits for Protecting Proprietary Configuration Data for Programmable Logic Devices," by Stephen M. Trimberger, and U.S. patent application Ser. No. 09/724,652, entitled "Programmable Logic Device With Decryption Algorithm and Decryption Key," by Pang et al., both of which are incorporated herein by reference.
Configuration logic block 22 includes design verification circuitry 35 that allows FPGA 21 to be dedicated for use with specific, authorized customer designs. Design verification circuitry 35 can be programmed to reject configuration bitstreams that do not produce the same first digital signature as does the authorized first configuration bitstream defining the authorized first design. U.S. patent application Ser. No. 10/104,324 entitled "Application-Specific Testing Methods for Programmable Logic Devices," by Robert W. Wells, et al. is incorporated herein by reference and describes some methods of identifying FPGAs with minor manufacturing defects that might nevertheless function when programmed with specific customer designs.
Design verification circuitry 35 connects to non-volatile memory 32 via a bus 36. FPGA 21 is dedicated for use with the first design by storing the first digital signature in non-volatile memory 32. Non-volatile memory 32 can be any form of available non-volatile memory, but preferably is one-time programmable. The first digital signature is the result of performing a hash function on the first configuration bitstream. A register within non-volatile memory 32 is programmed with the first digital signature, typically by the FPGA manufacturer. When the customer uses FPGA 21 with the first design, design verification circuitry 35 performs the hash function on the first configuration bitstream and enables FPGA 21 because the hash result matches the first digital signature stored in memory 32.
Hash functions are well known to those of skill in the art. For a more detailed discussion of how to perform a hash function on a design to develop a unique identifier for the design, see U.S. application Ser. No.
09/253,401 entitled "Method and Apparatus for Protecting Proprietary Configuration Data for Programmable Logic Device," by Stephen M. Trimberger, which is incorporated herein by reference. In addition, "Applied Cryptography, Second Edition," (1996) by Schneier, beginning at page 456, describes a way to make a key-dependent one-way hash function by encrypting a message with a block algorithm in the CBC mode, as specified in ANSI X9.9, a United States national wholesale banking standard for authentication of financial transactions. ANSI X9.9 is incorporated herein by reference.
In an embodiment, a first configuration bitstream upon which the hash function is performed is a configuration bitstream for a field programmable gate array. The hash function that produces the first digital signature is a frame check sequence (FCS) algorithm called cyclic redundancy check (CRC). In some embodiments, the ITU-TSS CRC method is used, which employs the X25 standard having a generator polynomial for a 16-bit checksum. (A generator polynomial for an n-bit checksum is sometimes referred to as an n-bit polynomial, although the binary number corresponding to the n-bit polynomial has n+1 bits.) The 16-bit polynomial employed by the X25 standard is G(x) = x^16 + x^12 + x^5 + 1. Other embodiments can use other polynomials, such as the 16-bit polynomial G(x) = x^16 + x^15 + x^2 + 1, which is based on the "CRC-16" protocol. Alternatively, a 32-bit generator polynomial can be used, such as G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1, which is based on the Ethernet 802.3 standard.
FIG. 2 is a block diagram detailing aspects of digital signature forcing circuit 10 and FPGA 21 that are of particular relevance in the procedure for restricting FPGA 21 for use with the first design and then redeploying FPGA 21 for use with a second design. FIG. 2 includes expanded views of digital signature forcing circuit 10, configuration logic block 22 and non-volatile memory 32. Configuration logic block 22 includes structures to prevent design reallocation, including design verification circuitry 35 and control logic 37.
Non-volatile memory 32 includes a register 42 adapted to store the first digital signature. Register 42 also includes an extra memory cell 43 that is programmed to include a logic one if FPGA 21 is dedicated for use with a specified design. As is known in the art, a logic zero can also be used with straightforward changes.
A PLD manufacturer receives the first design from a customer and can employ design-specific tests to determine whether FPGA 21 functions with the first design. If FPGA 21 is fully functional with the first design, a hash function is performed, frame-by-frame, on the configuration bitstream defining the first design (i.e., the first configuration bitstream). The PLD manufacturer then stores the hash result, a "first digital signature," within digital signature register 42 of non-volatile memory 32. At the same time, memory cell 43 is loaded with a logic one to enable design verification circuitry 35. FPGA 21 is thus dedicated for use with the first design.
There is a low probability that a random second configuration bitstream will produce the first digital signature in response to the hash function. The probability that two random configuration bitstreams will produce the same CRC checksum when the CRC algorithm employs a 32-bit generator polynomial is 1 in (2^32 - 1) (since a CRC checksum of zero is not typically used).
FIG.
2 also shows a pair of registers 38 and 39 in configuration logic block 22. Register 38 is a 64-bit shift register that receives portions of configuration bitstreams from configuration access port 29. Configuration access port 29 can be a single pin for one-bit-wide data, eight pins for eight-bit-wide data, or any other width. Portions of configuration bitstreams are loaded into register 38 until register 38 is full. These 64 bits (in register 38) are then shifted in parallel into 64-bit transfer register 39. From there, a multiplexer 40 alternately selects right and left 32-bit words, and the data is output as 32-bit words on bus 41. Design verification circuitry 35 receives portions of configuration bitstreams from multiplexer 40 and operates on the configuration bitstreams 32 bits at a time.
Design verification circuitry 35 includes hash logic 44, hash register 45, and comparison logic 46. The output of comparison logic 46 is coupled to control logic 37. Control logic 37 contains an FPGA disabler 47 that aborts the configuration operation of FPGA 21 when FPGA disabler 47 receives a disable signal asserted by comparison logic 46. To abort the configuration operation, FPGA disabler 47 can clear configuration memory 23 by overwriting it with all zeroes or another disabling pattern. One input of design verification circuitry 35 connects to memory cell 43 in memory 32. By programming memory cell 43 to store a logic zero, design verification circuitry 35 is prevented from sending a disable signal to control logic 37 and is thereby prevented from aborting the configuration operation. Thus, by programming memory cell 43 to store a logic zero, design verification circuitry 35 can be disabled, and FPGA 21 is not limited to a design having a specific configuration bitstream.
Returning to an example in which FPGA 21 is dedicated for use with a specific design (and memory cell 43 is programmed with a logic one), hash logic 44 performs a CRC algorithm on any received bitstreams and stores the resulting CRC checksum in hash register 45. Comparison logic 46 then compares the CRC checksum in hash register 45 with the first digital signature in register 42. Any mismatch between corresponding bits of hash register 45 and digital signature register 42 asserts a disable signal on the output of comparison logic 46, flagging an unauthorized bitstream to control logic 37. In response to the disable signal, control logic 37 aborts the configuration operation.
A PLD manufacturer can redeploy FPGA 21 for use with a second tested design by using digital signature forcing circuit 10. For example, a PLD manufacturer can repurchase restricted FPGA 21 from one customer and resell restricted FPGA 21 to a new customer together with a second configuration bitstream for the second design, provided the digital signature of the second configuration bitstream matches the first digital signature stored in memory 32. As noted above, the probability that a second configuration bitstream will produce the stored digital signature is extremely remote; however, digital signature forcing circuit 10 alters configuration bitstreams to produce digital signatures identical to a stored signature.
FIG. 3 shows a flowchart 50 depicting the operation of digital signature forcing circuit 10. Flowchart 50 is described in connection with digital signature forcing circuit 10 of FIG.
2 to illustrate how an embodiment of the invention facilitates using FPGA 21 with the second design after FPGA 21 has been dedicated for use with the first design.
In a first step 51, the CRC algorithm is applied by hash logic 11 to the first configuration bitstream to obtain the first digital signature, which is then stored in digital signature register 12 of digital signature forcing circuit 10. In this example, the CRC algorithm is applied by dividing the first configuration bitstream by the 16-bit generator polynomial G(x) = x^16 + x^12 + x^5 + x^0 using modulo-2 division. (Step 51 is unnecessary if one has other access to the first digital signature.)
In step 52, digital signature forcing circuit 10 receives the second configuration bitstream corresponding to the second design.
In step 53, bit extender 13 appends a number of digital zero digits after the least significant digit of the second configuration bitstream, thereby generating an extended second configuration bitstream. In an example where the generator polynomial x^16 + x^12 + x^5 + x^0 (represented as 1 0001 0000 0010 0001) is used, sixteen digital zero digits are appended to the end of the second configuration bitstream.
In step 54, hash logic 11 applies the CRC algorithm to the extended second configuration bitstream. Step 54 produces an intermediate remainder, which is stored in hash register 14 of digital signature forcing circuit 10. As in step 51, the CRC algorithm is applied by dividing by the 16-bit generator polynomial G(x) = x^16 + x^12 + x^5 + x^0 using modulo-2 division.
In step 55, a forcing value is added to the extended second configuration bitstream to form an altered second configuration bitstream. The forcing value is the sum of the intermediate remainder stored in hash register 14 and the first digital signature stored in digital signature register 12. The sum is obtained using modulo-2 addition.
In step 56, the altered second configuration bitstream is supplied to FPGA 21.
The altered second configuration bitstream can then be used to configure FPGA 21. FPGA 21 receives the altered second configuration bitstream from digital signature forcing circuit 10 through configuration access port 29. Alternatively, FPGA 21 can receive the altered second configuration bitstream through test port 27 (as shown in FIG. 1). In the process of configuring FPGA 21 using the altered second configuration bitstream, design verification circuitry 35 receives the altered second configuration bitstream as a series of 32-bit words, and hash logic 44 performs the CRC algorithm to obtain a CRC checksum, the second digital signature. The second digital signature is identical to the first digital signature and is stored in hash register 45. Comparison logic 46 then compares the second digital signature with the first digital signature in register 42, and the digital signatures match. The disable signal output of comparison logic 46 is, therefore, not asserted, and control logic 37 does not abort the configuration operation.
The digits of the forcing value that are added to the extended second configuration bitstream to form the altered second configuration bitstream are not used to configure the CLBs, IOBs, and programmable interconnect structure of the second design. FPGA 21 does not send the digits of the forcing value to configuration memory 23, but rather to an unused address of FPGA 21. The unused address can be in a circuit of FPGA 21 that is not being configured.
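As an informal illustration, steps 51 through 55 can be modeled in software as follows, using the bit patterns of FIGS. 4A-D and the generator polynomial above. Representing the bitstreams as Python integers is an assumption made for this sketch, not part of the disclosure:

```python
# Sketch of steps 51-55 of the extension method, mirroring the worked
# example of FIGS. 4A-D with the 16-bit generator polynomial
# x^16 + x^12 + x^5 + x^0 (1 0001 0000 0010 0001).

GENERATOR = 0b1_0001_0000_0010_0001
G_DEG = 16

def crc_remainder(bits: int, length: int) -> int:
    """Modulo-2 division of a length-bit message by the generator."""
    for i in range(length - 1, G_DEG - 1, -1):
        if bits >> i & 1:
            bits ^= GENERATOR << (i - G_DEG)
    return bits

first = 0b1010_1010_1010_1010_1010_1010_1010_1010   # FIG. 4A
second = 0b1001_1001_1001_1001_1001_1001_1001_1001  # FIG. 4B

# Step 51: derive the first digital signature from the first bitstream.
first_signature = crc_remainder(first, 32)

# Step 53: append sixteen zero digits to extend the second bitstream.
extended = second << G_DEG

# Step 54: intermediate remainder of the extended second bitstream.
intermediate = crc_remainder(extended, 32 + G_DEG)

# Step 55: add the forcing value (modulo-2 addition is XOR).
forcing_value = intermediate ^ first_signature
altered = extended ^ forcing_value

# The altered second bitstream now produces the first signature (FIG. 4D).
assert crc_remainder(altered, 32 + G_DEG) == first_signature
```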
As an alternative to an unused address, the digits added to the second configuration bitstream can be sent to addresses of block RAM that will be subsequently initialized by FPGA 21.
FIGS. 4A-D show a sample calculation employed in the operation of digital signature forcing circuit 10. In FIG. 4A, a first configuration bitstream 1010 1010 1010 1010 1010 1010 1010 1010 is divided by a generator polynomial 1 0001 0000 0010 0001, which can also be represented as x^16 + x^12 + x^5 + x^0. The remainder resulting from the modulo-2 division is the first signature.
In FIG. 4B, a second configuration bitstream 1001 1001 1001 1001 1001 1001 1001 1001 is extended by 0000 0000 0000 0000 and then divided by the generator polynomial, resulting in an intermediate remainder. In FIG. 4C, the first signature is added to the intermediate remainder using modulo-2 addition to yield a forcing value.
In FIG. 4D, the forcing value is added to the extended second configuration bitstream to yield an altered second configuration bitstream. When the altered second configuration bitstream is divided by the generator polynomial, the remainder is the second signature. The second signature is identical to the first signature.
In other embodiments, an additional check is used to dedicate a PLD for use with a certain configuration bitstream that corresponds to a tested design. In such a check, design verification circuitry 35 determines the length of a configuration bitstream by counting the number of words, bits, or frames. Design verification circuitry 35 then compares the resulting count against an allowed bitstream length. A bitstream that does not match the allowed length is rejected. Non-volatile memory 32 can be adapted to store values indicative of the allowed bitstream length.
Where the allowed length of the configuration bitstream is fixed, the method employed in digital signature forcing circuit 10 does not extend the second configuration bitstream. In an embodiment adapted for fixed-length configuration bitstreams, the first signature is determined in the same manner as previously described. Then, without first extending the second configuration bitstream, the second configuration bitstream is divided by the generator polynomial to yield a remainder. Each digital value in the Nth bit of the remainder is compared to the digital value in the Nth bit of the first signature. Then an altered second configuration bitstream is obtained by adding a logic one to each Nth bit of the second configuration bitstream for which the digital value in the Nth bit of the remainder differs from the value in the Nth bit of the first signature.
In an embodiment adapted for fixed-length configuration bitstreams, the last bits of the second configuration bitstream do not contain essential configuration data, and can be altered by digital signature forcing circuit 10 to facilitate using a PLD with a second design after the PLD has been dedicated for use with a first design. The number of bits that are potentially altered is the number of bits of the remainder, which can be up to one less than the number of bits in the generator polynomial.
FIG. 5 is a simplified circuit diagram of a portion of the circuitry in one embodiment of hash logic 11. Hash logic 11 contains a CRC shift register 60 with sixteen flip-flops and three exclusive OR (XOR) gates 61-63. Hash logic 11 performs a hash function on the first configuration bitstream and on the extended second configuration bitstream.
FIG. 5 is a simplified circuit diagram of a portion of the circuitry in one embodiment of hash logic 11. Hash logic 11 contains a CRC shift register 60 with sixteen flip-flops and three exclusive OR (XOR) gates 61-63. Hash logic 11 performs a hash function on the first configuration bitstream and on the extended second configuration bitstream. In this embodiment, the hash function is a modulo-2 division by generator polynomial x^16 + x^12 + x^5 + x^0. Hash logic 11 performs the modulo-2 division using the three XOR gates 61-63, whose outputs are coupled to inputs of the flip-flops located at the 12th, 5th, and 0th positions of CRC register 60. One of the inputs of each of XOR gates 61-63 is coupled through node 64 to the output of the flip-flop located at the 15th position of CRC register 60. The result of the division is a CRC checksum. When the division is performed on the first configuration bitstream, the resulting CRC checksum is the first signature. When the division is performed on the extended second configuration bitstream, the resulting CRC checksum is the intermediate remainder.
Hash logic 11 receives the first configuration bitstream, as well as the extended second configuration bitstream, bit-by-bit on input lead 65. In this embodiment, the first signature and the intermediate remainder are output on 16-bit output bus 66.
FIG. 6 shows how the circuitry of FIG. 5 performs modulo-2 division on the extended second configuration bitstream to obtain the intermediate remainder. Hash logic 11 operates on the extended second configuration bitstream in a bit-by-bit manner and cascades the bits of the second configuration bitstream down through the sixteen flip-flops of CRC register 60. When the last bit of the extended second configuration bitstream has been input into the flip-flop at the 0th position, the digital logic values present in the sixteen flip-flops of CRC register 60 are the same values as the values of the intermediate remainder obtained by performing the longhand modulo-2 division illustrated in FIG. 4B. Hash logic 11 derives the first signature from the first configuration bitstream in an analogous manner.
In addition to using the CRC checksum (a hash value) of a configuration bitstream to dedicate a PLD for use with specific designs, the CRC checksum can also be used to verify the uncorrupted transmission of the configuration bitstream. The computed value of the CRC checksum is not only compared with the preset digital signature on the PLD, but it is also compared with a CRC value that is conveyed as part of the configuration bitstream. FPGA 21 performs this transmission check even if design verification circuitry 35 is disabled by programming memory cell 43 to store a logic zero. In another embodiment, the first signature is determined from the CRC value that is contained as part of the first configuration bitstream, and hash logic does not derive the first signature from the first configuration bitstream using a hash function.
Although in the embodiments described above, FPGA 21 receives the altered second configuration bitstream from digital signature forcing circuit 10, in other embodiments, digital signature forcing circuit 10 delivers the altered second configuration bitstream back to the external data source 28 from which the original first configuration bitstream was received. The external data source then supplies the altered second configuration bitstream to FPGA 21.
Although in the embodiments described above, digital signature forcing circuit 10 is a circuit made up of semiconductor components, in other embodiments, digital signature forcing is performed by a software program. The software program can run on a processor separate from FPGA 21. The software program performs the operations depicted in flowchart 50 of FIG. 3.
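Such a software implementation can mirror the bit-serial behavior of CRC register 60. The fragment below is a hypothetical model (reusing extended and inter_rem from the earlier sketch), not the exact gate-level netlist of FIG. 5; it feeds the message most-significant-bit first, as in the longhand division of FIG. 4B.

    # Bit-serial modulo-2 division, modeling the sixteen-flip-flop CRC
    # register of FIG. 5 (illustrative software model only).
    def serial_crc(bits):
        reg = 0
        for b in bits:                           # one bit per clock on lead 65
            reg = (reg << 1) | b                 # cascade down the register
            if reg & (1 << 16):                  # a one falls off the top...
                reg ^= 0b1_0001_0000_0010_0001   # ...subtract G(x) (XOR)
        return reg                               # 16-bit remainder on bus 66

    # Feeding the extended second bitstream leaves the intermediate remainder
    # in the register, matching the longhand result of FIG. 4B.
    ext_bits = [(extended >> i) & 1 for i in range(47, -1, -1)]
    assert serial_crc(ext_bits) == inter_rem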
Although the present invention is described in connection with certain specific embodiments for instructional purposes, the present invention is not limited thereto. Although the invention is described in connection with employing cyclic redundancy check (CRC) algorithms, other hash functions can also be used. These other hash functions employ different methods for deriving the altered second configuration bitstream. Although one embodiment of the invention is adapted to apply a 16-bit generator polynomial, other embodiments employ hash functions that apply generator polynomials of other lengths, such as 32-bit generator polynomials. Moreover, the invention is applicable to data strings other than configuration bitstreams that configure PLDs. The invention can be used to redeploy any device that has been dedicated for use with a data string whose hash result matches an unchangeable value. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the following claims.
Example methods, apparatus, and systems to facilitate service proxying are disclosed. An example apparatus includes interface circuitry to access a service request intercepted by an infrastructure processing unit, the service request corresponding to a first node; instructions in the apparatus; and infrastructure sidecar circuitry to execute the instructions to: identify an active service instance corresponding to the service request; compare first telemetry data corresponding to the active service instance to a service quality metric; select a second node to service the service request based on the comparison and further telemetry data; and cause transmission of the service request to the second node.
1. An apparatus to process service requests, the apparatus including:
interface circuitry to access a service request intercepted by an infrastructure processing unit, the service request corresponding to a first node;
instructions in the apparatus; and
infrastructure sidecar circuitry to execute the instructions to:
identify an active service instance corresponding to the service request;
compare first telemetry data corresponding to the active service instance to a service quality metric;
select a second node to service the service request based on the comparison and further telemetry data; and
cause transmission of the service request to the second node.
2. The apparatus of claim 1, wherein the infrastructure sidecar circuitry is to determine that the active service instance corresponds to the intercepted service request based on a topology mapping of service instances across a plurality of platforms.
3. The apparatus of any one of claims 1-2, wherein the infrastructure sidecar circuitry is to update a topology mapping based on second telemetry data from the active service instance.
4. The apparatus of any one of claims 1-3, wherein the infrastructure sidecar circuitry is to select the second node by:
processing a topology mapping of service instances to identify a first group of service instances capable of servicing the service request; and
generating a second group of service instances by filtering out service instances in the first group that do not have capacity to service the service request.
5. The apparatus of claim 4, wherein the infrastructure sidecar circuitry is to determine the first group based on at least one of capacity information or response time information from the service instances.
6. The apparatus of claim 4, wherein the infrastructure sidecar circuitry is to, when the second group of service instances is empty, initiate a new service instance to service the service request.
7. The apparatus of any one of claims 1-6, wherein the second node is at least one of an edge appliance, an edge device, a virtual machine, or an infrastructure device.
8. The apparatus of any one of claims 1-6, wherein the infrastructure sidecar circuitry is to, when the service request corresponds to a non-default load balancing protocol, select the second node based on the non-default load balancing protocol.
9. The apparatus of claim 8, wherein the infrastructure sidecar circuitry is to validate that the second node selected based on the non-default load balancing protocol will not result in an error.
10. The apparatus of any one of claims 1-9, wherein the further telemetry data includes at least one of first telemetry data corresponding to the first node, second telemetry data corresponding to the second node, or third telemetry data corresponding to infrastructure.
11. A method comprising:
accessing a service request intercepted by an infrastructure processing unit, the service request corresponding to a first node;
identifying an active service instance corresponding to the service request;
comparing first telemetry data corresponding to the service instance to a service quality metric;
selecting a second node to service the service request based on the comparison and further telemetry data; and
causing transmission of the service request to the second node.
12. The method of claim 11, further including determining that the service instance corresponds to the intercepted service request based on a topology mapping of service instances across a plurality of platforms.
13. The method of any one of claims 11-12, further including updating a topology mapping based on second telemetry data from the service instance.
14. The method of any one of claims 11-13, wherein the selecting of the second node includes:
processing a topology mapping of service instances to identify a first group of service instances capable of servicing the service request; and
generating a second group of service instances by filtering out service instances in the first group that do not have capacity to service the service request.
15. A machine readable medium including code, when executed, to cause a machine to realize the apparatus of any one of claims 1-10.
FIELD OF THE DISCLOSURE
This disclosure relates generally to computing devices, and, more particularly, to methods and apparatus to facilitate service proxying.
BRIEF DESCRIPTION
In cloud-native computing environments, the decomposition of erstwhile large monolithic applications into microservices and event-driven functions has been widely embraced due to the velocity, scalability, flexibility, and resilience benefits. In some examples, decomposition is facilitated through service meshes that move network and security particulars from individual services and place them into "sidecars" or "sidecar circuitry," which frees developers of services from the minutiae of discovery, connection setup, and communication with peer microservices, functions, etc. A sidecar or sidecar circuitry is a deployment pattern (e.g., a container) inserted into a pod (e.g., a collection of containers and/or application containers) within a platform of the cloud-native computing environment.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an overview of an edge cloud configuration for edge computing.
FIG. 2 illustrates operational layers among endpoints, an edge cloud, and cloud computing environments.
FIG. 3 illustrates a block diagram of an example environment for networking and services in an edge computing system.
FIG. 4 illustrates deployment of a virtual edge configuration in an edge computing system operated among multiple edge nodes and multiple tenants.
FIG. 5 illustrates various compute arrangements deploying containers in an edge computing system.
FIG. 6 illustrates an example compute and communication use case involving mobile access to applications in an example edge computing system.
FIG. 7 is a block diagram of an example system described in conjunction with examples disclosed herein to manage quality of service.
FIG. 8 is a block diagram of an implementation of the example infrastructure processing unit sidecar circuitry of FIG. 7.
FIG. 9 is another block diagram of an implementation of the example infrastructure sidecar circuitry of FIG. 7.
FIG. 10 illustrates a flowchart representative of example machine readable instructions that may be executed to implement the infrastructure processing unit sidecar circuitry of FIG. 8.
FIGS. 11A-11C illustrate a flowchart representative of example machine readable instructions that may be executed to implement the infrastructure sidecar circuitry of FIG. 9.
FIG. 12 is a block diagram of an example implementation of an example compute node that may be deployed in one of the infrastructure processing units in FIGS. 1-4 and/or 6-7.
FIG. 13 is a block diagram of an example processor platform structured to execute the instructions of FIG. 10 to implement the infrastructure processing unit and/or infrastructure processing unit sidecar circuitry of FIGS. 7 and/or 8.
FIG. 14 is a block diagram of an example processor platform structured to execute the instructions of FIGS. 11A-11C to implement the infrastructure and/or the infrastructure sidecar circuitry of FIGS. 7 and/or 9.
FIG. 15 is a block diagram of an example implementation of the processor circuitry of FIG. 14.
FIG. 16 is a block diagram of another example implementation of the processor circuitry of FIG. 14.
FIG. 17 is a block diagram of an example software distribution platform (e.g., one or more servers) to distribute software (e.g., software corresponding to the example machine readable instructions of FIGS.
10, 11A, 11B, and/or 11C) to client devices associated with end users and/or consumers (e.g., for license, sale, and/or use), retailers (e.g., for sale, re-sale, license, and/or sub-license), and/or original equipment manufacturers (OEMs) (e.g., for inclusion in products to be distributed to, for example, retailers and/or to other end users such as direct buy customers).
The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
Descriptors "first," "second," "third," etc. are used herein when identifying multiple elements or components which may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority or ordering in time but merely as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor such as "second" or "third." In such instances, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.
As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events. As used herein, "processor circuitry" is defined to include (i) one or more special purpose electrical circuits structured to perform specific operation(s) and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors), and/or (ii) one or more general purpose semiconductor-based electrical circuits programmed with instructions to perform specific operations and including one or more semiconductor-based logic devices (e.g., electrical hardware implemented by one or more transistors). Examples of processor circuitry include programmed microprocessors, Field Programmable Gate Arrays (FPGAs) that may instantiate instructions, Central Processor Units (CPUs), Graphics Processor Units (GPUs), Digital Signal Processors (DSPs), XPUs, or microcontrollers and integrated circuits such as Application Specific Integrated Circuits (ASICs). For example, an XPU may be implemented by a heterogeneous computing system including multiple types of processor circuitry (e.g., one or more FPGAs, one or more CPUs, one or more GPUs, one or more DSPs, etc., and/or a combination thereof) and application programming interface(s) (API(s)) that may assign computing task(s) to whichever one(s) of the multiple types of the processing circuitry is/are best suited to execute the computing task(s).
DETAILED DESCRIPTION
Flexible and easily manageable rack level solutions are desirable for network edge processing and/or virtual machine processing, where multiple edge domains share a pool of resources.
For example, if a process is memory intensive and/or processor intensive, it may be advantageous to use service meshes to split the process into sub-processes that can be executed by different computing devices to ensure efficiency, speed, and/or accuracy. An edge deployment environment may include infrastructure and one or more platforms (e.g., IoT and on-premise edge, network edge, core DC, public cloud, etc.) implementing one or more edge appliances (e.g., computing devices, end devices, edge devices, nodes, etc.). An infrastructure includes nodes (e.g., individual machines including heterogeneous devices) and a connection network between them. Each of these computing devices may have a set of resources that may be shared with other connected devices. As used herein, pooled resources are resources (e.g., memory, CPU, GPU, accelerators, etc.) of a first device, a single edge node device, and/or a first virtual machine that allocates or earmarks resources for a particular tenant/user. Pooled resources can be shared with and/or used by another device to perform a particular task. In this manner, an application can send out service requests to utilize the resources of multiple computing devices and/or virtual machines to perform a task in a fast and efficient manner.
Service meshes (e.g., mesh proxies) are components implemented in containers that implement a common set of functionalities needed for directory lookups to locate other services on the same or a different machine. Containers are decentralized computing resources that can run large, distributed applications with low overhead. In some examples, containers include sidecar containers. Sidecar containers, sidecars, or sidecar circuitry are containers that run along with the main container of a pod, node, device, etc. The sidecar container provides additional functionality to a container without changing the container. A mesh proxy operates as a communication bridge to other mesh proxies in other containers and implements common infrastructure services such as transport, secure transport, a connection to host bridges, load balancing, protocol bridging, etc. The mesh proxy is implemented in the computing device and is in communication with the operating system of the computing device. In this manner, applications can be platform-agnostic (e.g., without including code corresponding to a particular operating system and/or particular hardware resources of the device/platform that the application is implemented in). Thus, the applications can run on a device and remain agnostic of the operating system and/or hardware of the underlying computing device as well as other connected computing devices, because the mesh proxy operates as the bridge between the application and the operating system of the device and other devices.
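As a toy illustration of this bridging role (all names below are invented for illustration; a production mesh proxy also handles secure transport, retries, protocol bridging, etc.), an application can address peers by service name while the proxy performs the directory lookup and forwarding:

    # Toy mesh-proxy sketch: the application addresses peers by service name;
    # the proxy performs the directory lookup and forwards the request.
    class MeshProxy:
        def __init__(self, directory):
            # directory: service name -> list of callables (local or remote stubs)
            self.directory = directory

        def request(self, service_name, payload):
            instances = self.directory.get(service_name, [])
            if not instances:
                raise LookupError(f"no instance of {service_name!r} found")
            # Trivial load balancing: pick the first available instance.
            return instances[0](payload)

    # The application never learns which machine served the request.
    directory = {"resize-image": [lambda p: f"resized({p})"]}
    proxy = MeshProxy(directory)
    print(proxy.request("resize-image", "photo.png"))   # -> resized(photo.png)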
Although implementing mesh proxies on the CPU of edge devices adds flexibility, that flexibility adds overhead, latency, and performance variability due to the scheduling and execution of the sidecar-based proxies. Because edge environments are resource constrained and require low response times, low and predictable latency is highly valued on the edge. Additionally, the complex overhead of load balancing, other run-time variations, and telemetry collection adds dilation of response time, jitter, high hysteresis, and cost.
Additionally, across one or more chains of microservices, application response times can easily exceed the few milliseconds that are required to function according to service quality metrics (e.g., latency, response time, efficiency, etc.). Sidecar circuitry has a limited time (e.g., milliseconds) to execute one or more functions or service chains while accounting for various service management actions like load balancing, migration, communication, auto-scaling, resource redirection for service level agreements, etc. Load balancing includes splitting a service (e.g., service instances) between multiple endpoints (e.g., edge devices, edge appliances, infrastructure devices, hardware switches, etc.). Auto-scaling includes activating new devices (e.g., endpoints, devices, appliances, hardware switches, etc.) when the service instance(s) that are currently implementing a service are oversubscribed, or deactivating one or more of the instances implementing the service when usage declines. The service level agreement (SLA) requirements and/or service level objectives (SLOs) are requirements for performing the task. For example, the SLA/SLO requirement(s) of a service request from an application may correspond to bandwidth requirements, latency requirements, jitter requirements, number/percentage of packet drops, number of images processed per second, etc.
Examples disclosed herein provide an infrastructure processing unit (IPU) to utilize infrastructure and appliance telemetry to guide smart scheduling across the edge and edge-to-cloud migration of services. Examples disclosed herein include implementing IPU sidecar circuitry in the IPU and implementing infrastructure sidecar circuitry in the infrastructure. Using examples disclosed herein, the role of the IPU and the infrastructure is extended through the use of IPU sidecar circuitry (e.g., logic implemented in container circuitry) and infrastructure sidecar circuitry to achieve low latency service management with guidance from software, while making service mesh execution efficient and extensible. For this integration of dynamic load balancing, auto-scaling, and local orchestration, examples disclosed herein create an "IPU-mesh" in which one or more IPUs that host application sidecars also communicate with an infrastructure extension (e.g., infrastructure sidecar circuitry implemented in the infrastructure). As used herein, remote devices include both a peer device (e.g., processing unit) operating in a different node in the same platform and a device (e.g., processing unit) operating in a different node in a different platform. Thus, requests may arise anywhere in a network and can be load-balanced and/or auto-scaled at low latency and high efficiency.
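A hypothetical sketch of such telemetry-guided selection follows (the names and telemetry fields are invented for illustration; the disclosed selection logic of the infrastructure sidecar circuitry is described in conjunction with FIGS. 9 and 11A-11C). It forms a first group of candidate service instances from a topology mapping, filters out instances that lack capacity or miss the service quality metric, and auto-scales when the filtered group is empty:

    # Hypothetical sketch of telemetry-guided node selection.
    def select_node(service, topology, telemetry, slo_latency_ms):
        # First group: instances capable of servicing the request.
        first_group = topology.get(service, [])
        # Second group: filter out instances lacking capacity or missing
        # the service quality metric (here, a p99 latency objective).
        second_group = [n for n in first_group
                        if telemetry[n]["has_capacity"]
                        and telemetry[n]["p99_latency_ms"] <= slo_latency_ms]
        if not second_group:
            return spin_up_instance(service)   # auto-scale when oversubscribed
        # Pick the least-loaded remaining node.
        return min(second_group, key=lambda n: telemetry[n]["p99_latency_ms"])

    def spin_up_instance(service):
        return f"new-{service}-instance"       # placeholder for activation

    topology = {"detect": ["node-a", "node-b"]}
    telemetry = {"node-a": {"has_capacity": False, "p99_latency_ms": 3.0},
                 "node-b": {"has_capacity": True,  "p99_latency_ms": 1.2}}
    print(select_node("detect", topology, telemetry, slo_latency_ms=2.0))  # node-b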
FIG. 1 is a block diagram 100 showing an overview of a configuration for edge computing, which includes a layer of processing referred to in many of the following examples as an "edge cloud". As shown, the edge cloud 110 is co-located at an edge location, such as an access point or base station 140, a local processing hub 150, or a central office 120, and thus may include multiple entities, devices, and equipment instances. The edge cloud 110 is located much closer to the endpoint (consumer and producer) data sources 160 (e.g., autonomous vehicles 161, user equipment 162, business and industrial equipment 163, video capture devices 164, drones 165, smart cities and building devices 166, sensors and IoT devices 167, etc.) than the cloud data center 130. Compute, memory, and storage resources which are offered at the edges in the edge cloud 110 are critical to providing ultra-low latency response times for services and functions used by the endpoint data sources 160, as well as reducing network backhaul traffic from the edge cloud 110 toward cloud data center 130, thus improving energy consumption and overall network usage, among other benefits.
Compute, memory, and storage are scarce resources, and generally decrease depending on the edge location (e.g., fewer processing resources being available at consumer endpoint devices than at a base station, than at a central office). However, the closer that the edge location is to the endpoint (e.g., user equipment (UE)), the more that space and power is often constrained. Thus, edge computing attempts to reduce the amount of resources needed for network services through the distribution of more resources which are located closer both geographically and in network access time. In this manner, edge computing attempts to bring the compute resources to the workload data where appropriate, or bring the workload data to the compute resources.
The following describes aspects of an edge cloud architecture that covers multiple potential deployments and addresses restrictions that some network operators or service providers may have in their own infrastructures. These include: variation of configurations based on the edge location (because edges at a base station level, for example, may have more constrained performance and capabilities in a multi-tenant scenario); configurations based on the type of compute, memory, storage, fabric, acceleration, or like resources available to edge locations, tiers of locations, or groups of locations; the service, security, and management and orchestration capabilities; and related objectives to achieve usability and performance of end services. These deployments may accomplish processing in network layers that may be considered as "near edge", "close edge", "local edge", "middle edge", or "far edge" layers, depending on latency, distance, and timing characteristics.
Edge computing is a developing paradigm where computing is performed at or closer to the "edge" of a network, typically through the use of a compute platform (e.g., x86 or ARM compute hardware architecture) implemented at base stations, gateways, network routers, or other devices which are much closer to endpoint devices producing and consuming the data. For example, edge gateway servers may be equipped with pools of memory and storage resources to perform computation in real-time for low latency use-cases (e.g., autonomous driving or video surveillance) for connected client devices. Or as an example, base stations may be augmented with compute and acceleration resources to directly process service workloads for connected user equipment, without further communicating data via backhaul networks. Or as another example, central office network management hardware may be replaced with standardized compute hardware that performs virtualized network functions and offers compute resources for the execution of services and consumer functions for connected devices. Within edge computing networks, there may be scenarios in which the compute resource will be "moved" to the data, as well as scenarios in which the data will be "moved" to the compute resource.
Or as an example, base station compute, acceleration, and network resources can provide services in order to scale to workload demands on an as-needed basis by activating dormant capacity (subscription, capacity on demand) in order to manage corner cases, emergencies, or to provide longevity for deployed resources over a significantly longer implemented lifecycle.
FIG. 2 illustrates operational layers among endpoints, an edge cloud, and cloud computing environments. Specifically, FIG. 2 depicts examples of computational use cases 205, utilizing the edge cloud 110 among multiple illustrative layers of network computing. The layers begin at an endpoint (devices and things) layer 200, which accesses the edge cloud 110 to conduct data creation, analysis, and data consumption activities. The edge cloud 110 may span multiple network layers, such as an edge devices layer 210 having gateways, on-premise servers, or network equipment (nodes 215) located in physically proximate edge systems; a network access layer 220, encompassing base stations, radio processing units, network hubs, regional data centers (DC), or local network equipment (equipment 225); and any equipment, devices, or nodes located therebetween (in layer 212, not illustrated in detail). The network communications within the edge cloud 110 and among the various layers may occur via any number of wired or wireless mediums, including via connectivity architectures and technologies not depicted.
Examples of latency, resulting from network communication distance and processing time constraints, may range from less than a millisecond (ms) when among the endpoint layer 200, under 5 ms at the edge devices layer 210, to even between 10 to 40 ms when communicating with nodes at the network access layer 220. Beyond the edge cloud 110 are core network 230 and cloud data center 240 layers, each with increasing latency (e.g., between 50-60 ms at the core network layer 230, to 100 or more ms at the cloud data center layer). As a result, operations at a core network data center 235 or a cloud data center 245, with latencies of at least 50 to 100 ms or more, will not be able to accomplish many time-critical functions of the use cases 205. Each of these latency values is provided for purposes of illustration and contrast; it will be understood that the use of other access network mediums and technologies may further reduce the latencies. In some examples, respective portions of the network may be categorized as "close edge", "local edge", "near edge", "middle edge", or "far edge" layers, relative to a network source and destination. For example, from the perspective of the core network data center 235 or a cloud data center 245, a central office or content data network may be considered as being located within a "near edge" layer ("near" to the cloud, having high latency values when communicating with the devices and endpoints of the use cases 205), whereas an access point, base station, on-premise server, or network gateway may be considered as located within a "far edge" layer ("far" from the cloud, having low latency values when communicating with the devices and endpoints of the use cases 205).
It will be understood that other categorizations of a particular network layer as constituting a "close", "local", "near", "middle", or "far" edge may be based on latency, distance, number of network hops, or other measurable characteristics, as measured from a source in any of the network layers 200-240.
The various use cases 205 may access resources under usage pressure from incoming streams, due to multiple services utilizing the edge cloud. To achieve results with low latency, the services executed within the edge cloud 110 balance varying requirements in terms of: (a) Priority (throughput or latency) and Quality of Service (QoS) (e.g., traffic for an autonomous car may have higher priority than a temperature sensor in terms of response time requirement; or, a performance sensitivity/bottleneck may exist at a compute/accelerator, memory, storage, or network resource, depending on the application); (b) Reliability and Resiliency (e.g., some input streams need to be acted upon and the traffic routed with mission-critical reliability, whereas some other input streams may tolerate an occasional failure, depending on the application); and (c) Physical constraints (e.g., power, cooling, and form-factor).
The end-to-end service view for these use cases involves the concept of a service-flow and is associated with a transaction. The transaction details the overall service requirement for the entity consuming the service, as well as the associated services for the resources, workloads, workflows, and business functional and business level requirements. The services executed with the "terms" described may be managed at each layer in a way to assure real time, and runtime contractual compliance for the transaction during the lifecycle of the service. When a component in the transaction is missing its agreed-to SLA, the system as a whole (components in the transaction) may provide the ability to (1) understand the impact of the SLA violation, (2) augment other components in the system to resume the overall transaction SLA, and (3) implement steps to remediate.
Thus, with these variations and service features in mind, edge computing within the edge cloud 110 may provide the ability to serve and respond to multiple applications of the use cases 205 (e.g., object tracking, video surveillance, connected cars, etc.) in real-time or near real-time, and meet ultra-low latency requirements for these multiple applications. These advantages enable a whole new class of applications (Virtual Network Functions (VNFs), Function as a Service (FaaS), Edge as a Service (EaaS), standard processes, etc.), which cannot leverage conventional cloud computing due to latency or other limitations.
However, with the advantages of edge computing come the following caveats. The devices located at the edge are often resource constrained and therefore there is pressure on usage of edge resources. Typically, this is addressed through the pooling of memory and storage resources for use by multiple users (tenants) and devices. The edge may be power and cooling constrained and therefore the power usage needs to be accounted for by the applications that are consuming the most power. There may be inherent power-performance tradeoffs in these pooled memory resources, as many of them are likely to use emerging memory technologies, where more power requires greater memory bandwidth.
Likewise, improved security of hardware and root of trust trusted functions are also required, because edge locations may be unmanned and may even need permissioned access (e.g., when housed in a third-party location). Such issues are magnified in the edge cloud 110 in a multi-tenant, multi-owner, or multi-access setting, where services and applications are requested by many users, especially as network usage dynamically fluctuates and the composition of the multiple stakeholders, use cases, and services changes.
At a more generic level, an edge computing system may be described to encompass any number of deployments at the previously discussed layers operating in the edge cloud 110 (network layers 200-240), which provide coordination from client and distributed computing devices. One or more edge gateway nodes, one or more edge aggregation nodes, and one or more core data centers may be distributed across layers of the network to provide an implementation of the edge computing system by or on behalf of a telecommunication service provider ("telco", or "TSP"), internet-of-things service provider, cloud service provider (CSP), enterprise entity, or any other number of entities. Various implementations and configurations of the edge computing system may be provided dynamically, such as when orchestrated to meet service objectives.
Consistent with the examples provided herein, a client compute node may be embodied as any type of endpoint component, device, appliance, or other thing capable of communicating as a producer or consumer of data. Further, the label "node" or "device" as used in the edge computing system does not necessarily mean that such node or device operates in a client or agent/minion/follower role; rather, any of the nodes or devices in the edge computing system refer to individual entities, nodes, or subsystems which include discrete or connected hardware or software configurations to facilitate or use the edge cloud 110.
As such, the edge cloud 110 is formed from network components and functional features operated by and within edge gateway nodes, edge aggregation nodes, or other edge compute nodes among network layers 210-230. The edge cloud 110 thus may be embodied as any type of network that provides edge computing and/or storage resources which are proximately located to radio access network (RAN) capable endpoint devices (e.g., mobile computing devices, IoT devices, smart devices, etc.), which are discussed herein. In other words, the edge cloud 110 may be envisioned as an "edge" which connects the endpoint devices and traditional network access points that serve as an ingress point into service provider core networks, including mobile carrier networks (e.g., Global System for Mobile Communications (GSM) networks, Long-Term Evolution (LTE) networks, 5G/6G networks, etc.), while also providing storage and/or compute capabilities. Other types and forms of network access (e.g., Wi-Fi, long-range wireless, wired networks including optical networks) may also be utilized in place of or in combination with such 3GPP carrier networks.
The network components of the edge cloud 110 may be servers, multi-tenant servers, appliance computing devices, and/or any other type of computing devices. For example, the edge cloud 110 may include an appliance computing device that is a self-contained electronic system including a housing, a chassis, a case, or a shell. In some circumstances, the housing may be dimensioned for portability such that it can be carried by a human and/or shipped.
Example housings may include materials that form one or more exterior surfaces that partially or fully protect contents of the appliance, in which protection may include weather protection, hazardous environment protection (e.g., EMI, vibration, extreme temperatures), and/or enable submergibility. Example housings may include power circuitry to provide power for stationary and/or portable implementations, such as AC power inputs, DC power inputs, AC/DC or DC/AC converter(s), power regulators, transformers, charging circuitry, batteries, wired inputs, and/or wireless power inputs. Example housings and/or surfaces thereof may include or connect to mounting hardware to enable attachment to structures such as buildings, telecommunication structures (e.g., poles, antenna structures, etc.), and/or racks (e.g., server racks, blade mounts, etc.). Example housings and/or surfaces thereof may support one or more sensors (e.g., temperature sensors, vibration sensors, light sensors, acoustic sensors, capacitive sensors, proximity sensors, etc.). One or more such sensors may be contained in, carried by, or otherwise embedded in the surface and/or mounted to the surface of the appliance. Example housings and/or surfaces thereof may support mechanical connectivity, such as propulsion hardware (e.g., wheels, propellers, etc.) and/or articulating hardware (e.g., robot arms, pivotable appendages, etc.). In some circumstances, the sensors may include any type of input devices such as user interface hardware (e.g., buttons, switches, dials, sliders, etc.). In some circumstances, example housings include output devices contained in, carried by, embedded therein, and/or attached thereto. Output devices may include displays, touchscreens, lights, LEDs, speakers, I/O ports (e.g., USB), etc. In some circumstances, edge devices are devices presented in the network for a specific purpose (e.g., a traffic light), but may have processing and/or other capacities that may be utilized for other purposes. Such edge devices may be independent from other networked devices and may be provided with a housing having a form factor suitable for its primary purpose, yet be available for other compute tasks that do not interfere with its primary task. Edge devices include Internet of Things devices. The appliance computing device may include hardware and software components to manage local issues such as device temperature, vibration, resource utilization, updates, power issues, physical and network security, etc. Example hardware for implementing an appliance computing device is described in conjunction with FIG. 14.
The edge cloud 110 may also include one or more servers and/or one or more multi-tenant servers. Such a server may include an operating system and a virtual computing environment. A virtual computing environment may include a hypervisor managing (e.g., spawning, deploying, destroying, etc.) one or more virtual machines, one or more containers, etc. Such virtual computing environments provide an execution environment in which one or more applications and/or other software, code, or scripts may execute while being isolated from one or more other applications, software, code, or scripts.
FIG. 3 illustrates a block diagram of an example environment 300 in which various client endpoints 310 (in the form of mobile devices, computers, autonomous vehicles, business computing equipment, industrial processing equipment) exchange requests and responses with the example edge cloud 110.
For example, client endpoints 310 may obtain network access via a wired broadband network, by exchanging requests and responses 322 through an on-premise network system 332. Some client endpoints 310, such as mobile computing devices, may obtain network access via a wireless broadband network, by exchanging requests and responses 324 through an access point (e.g., cellular network tower) 334. Some client endpoints 310, such as autonomous vehicles, may obtain network access for requests and responses 326 via a wireless vehicular network through a street-located network system 336. However, regardless of the type of network access, the TSP may deploy aggregation points 342, 344 within the edge cloud 110 to aggregate traffic and requests. Thus, within the edge cloud 110, the TSP may deploy various compute and storage resources, such as at edge aggregation nodes 340, to provide requested content. The edge aggregation nodes 340 and other systems of the edge cloud 110 are connected to a cloud or data center 360, which uses a backhaul network 350 to fulfill higher-latency requests from a cloud/data center for websites, applications, database servers, etc. Additional or consolidated instances of the edge aggregation nodes 340 and the aggregation points 342, 344, including those deployed on a single server framework, may also be present within the edge cloud 110 or other areas of the TSP infrastructure.
FIG. 4 illustrates deployment and orchestration for virtual edge configurations across an edge computing system operated among multiple edge nodes and multiple tenants. Specifically, FIG. 4 depicts coordination of a first edge node 422 and a second edge node 424 in an edge computing system 400, to fulfill requests and responses for various client endpoints 410 (e.g., smart cities / building systems, mobile devices, edge appliances, computing devices, business/logistics systems, industrial systems, etc.), which access various virtual edge instances. Here, the virtual edge instances 432, 434 provide edge compute capabilities and processing in an edge cloud, with access to a cloud/data center 440 for higher-latency requests for websites, applications, database servers, etc. However, the edge cloud enables coordination of processing among multiple edge nodes for multiple tenants or entities.
In the example of FIG. 4, these virtual edge instances include: a first virtual edge 432, offered to a first tenant (Tenant 1), which offers a first combination of edge storage, computing, and services; and a second virtual edge 434, offering a second combination of edge storage, computing, and services. The virtual edge instances 432, 434 are distributed among the edge nodes 422, 424, and may include scenarios in which a request and response are fulfilled from the same or different edge nodes. The configuration of the edge nodes 422, 424 to operate in a distributed yet coordinated fashion occurs based on edge provisioning functions 450. The functionality of the edge nodes 422, 424 to provide coordinated operation for applications and services, among multiple tenants, occurs based on orchestration functions 460.
It should be understood that some of the devices 410 are multi-tenant devices where Tenant 1 may function within a tenant1 'slice' while a Tenant 2 may function within a tenant2 'slice' (and, in further examples, additional or sub-tenants may exist; and each tenant may even be specifically entitled and transactionally tied to a specific set of features all the way down to specific hardware features).
A trusted multi-tenant device may further contain a tenant-specific cryptographic key such that the combination of key and slice may be considered a "root of trust" (RoT) or tenant-specific RoT. A RoT may further be dynamically composed using a DICE (Device Identity Composition Engine) architecture such that a single DICE hardware building block may be used to construct layered trusted computing base contexts for layering of device capabilities (such as a Field Programmable Gate Array (FPGA)). The RoT may further be used for a trusted computing context to enable a "fan-out" that is useful for supporting multi-tenancy. Within a multi-tenant environment, the respective edge nodes 422, 424 may operate as security feature enforcement points for local resources allocated to multiple tenants per node. Additionally, tenant runtime and application execution (e.g., in instances 432, 434) may serve as an enforcement point for a security feature that creates a virtual edge abstraction of resources spanning potentially multiple physical hosting platforms. Finally, the orchestration functions 460 at an orchestration entity may operate as a security feature enforcement point for marshalling resources along tenant boundaries.
Edge computing nodes may partition resources (memory, central processing unit (CPU), graphics processing unit (GPU), interrupt controller, input/output (I/O) controller, memory controller, bus controller, etc.) where respective partitionings may contain a RoT capability, and where fan-out and layering according to a DICE model may further be applied to edge nodes. Cloud computing nodes consisting of containers, FaaS engines, servlets, servers, or other computation abstraction may be partitioned according to a DICE layering and fan-out structure to support a RoT context for each. Accordingly, the respective devices 410, 422, and 440 spanning RoTs may coordinate the establishment of a distributed trusted computing base (DTCB) such that a tenant-specific virtual trusted secure channel linking all elements end to end can be established.
Further, it will be understood that a container may have data or workload specific keys protecting its content from a previous edge node. As part of migration of a container, a pod controller at a source edge node may obtain a migration key from a target edge node pod controller, where the migration key is used to wrap the container-specific keys. When the container/pod is migrated to the target edge node, the unwrapping key is exposed to the pod controller, which then decrypts the wrapped keys. The keys may now be used to perform operations on container-specific data. The migration functions may be gated by properly attested edge nodes and pod managers (as described above).
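The migration-key flow described above can be illustrated in a few lines of Python. The sketch is purely illustrative: the disclosure does not prescribe a cipher or library, and the third-party cryptography package's Fernet recipe stands in here for the wrapping primitive.

    # Illustrative only: Fernet stands in for an unspecified key-wrapping cipher.
    # Requires the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    # The target edge node's pod controller issues a migration key.
    migration_key = Fernet.generate_key()

    # The source pod controller wraps the container-specific key for transit.
    container_key = Fernet.generate_key()
    wrapped_key = Fernet(migration_key).encrypt(container_key)

    # After migration, the unwrapping key is exposed to the target pod
    # controller, which recovers the container-specific key.
    unwrapped_key = Fernet(migration_key).decrypt(wrapped_key)
    assert unwrapped_key == container_key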
In further examples, an edge computing system is extended to provide for orchestration of multiple applications through the use of containers (a contained, deployable unit of software that provides code and needed dependencies) in a multi-owner, multi-tenant environment. A multi-tenant orchestrator may be used to perform key management, trust anchor management, and other security functions related to the provisioning and lifecycle of the trusted 'slice' concept in FIG. 4. For example, an edge computing system may be configured to fulfill requests and responses for various client endpoints from multiple virtual edge instances (and, from a cloud or remote data center).
The use of these virtual edge instances may support multiple tenants and multiple applications (e.g., augmented reality (AR)/virtual reality (VR), enterprise applications, content delivery, gaming, compute offload) simultaneously. Further, there may be multiple types of applications within the virtual edge instances (e.g., normal applications; latency sensitive applications; latency-critical applications; user plane applications; networking applications; etc.). The virtual edge instances may also be spanned across systems of multiple owners at different geographic locations (or, respective computing systems and resources which are co-owned or co-managed by multiple owners).
For example, each of the edge nodes 422, 424 may implement the use of containers, such as with the use of a container "pod" 426, 428 providing a group of one or more containers. In a setting that uses one or more container pods, a pod controller or orchestrator is responsible for local control and orchestration of the containers in the pod. Various edge node resources (e.g., storage, compute, services, depicted with hexagons) provided for the respective edge slices 432, 434 are partitioned according to the needs of each container.
With the use of container pods, a pod controller oversees the partitioning and allocation of containers and resources. The pod controller receives instructions from an orchestrator (e.g., the orchestrator 460) that instructs the controller on how best to partition physical resources and for what duration, such as by receiving key performance indicator (KPI) targets based on SLA contracts. The pod controller determines which container requires which resources and for how long in order to complete the workload and satisfy the SLA. The pod controller also manages container lifecycle operations such as: creating the container, provisioning it with resources and applications, coordinating intermediate results between multiple containers working on a distributed application together, dismantling containers when workload completes, and the like. Additionally, a pod controller may serve a security role that prevents assignment of resources until the right tenant authenticates, or prevents provisioning of data or a workload to a container until an attestation result is satisfied.
Also, with the use of container pods, tenant boundaries can still exist, but in the context of each pod of containers. If each tenant-specific pod has a tenant-specific pod controller, there will be a shared pod controller that consolidates resource allocation requests to avoid typical resource starvation situations. Further controls may be provided to ensure attestation and trustworthiness of the pod and pod controller. For example, the orchestrator 460 may provision an attestation verification policy to local pod controllers that perform attestation verification. If an attestation satisfies a policy for a first tenant pod controller but not a second tenant pod controller, then the second pod could be migrated to a different edge node that does satisfy it. Alternatively, the first pod may be allowed to execute and a different shared pod controller is installed and invoked prior to the second pod executing.
FIG. 5 illustrates additional compute arrangements deploying containers in an edge computing system.
As a simplified example, system arrangements 510, 520 depict settings in which a pod controller (e.g., container managers 511, 521, and a container orchestrator 531) is adapted to launch containerized pods, functions, and functions-as-a-service instances through execution via compute nodes (515 in arrangement 510), or to separately execute containerized virtualized network functions through execution via compute nodes (523 in arrangement 520). This arrangement is adapted for use of multiple tenants in an example system arrangement 530 (using compute nodes 537), where containerized pods (e.g., pods 512), functions (e.g., functions 513, VNFs 522, 536), and functions-as-a-service instances (e.g., FaaS instance 514) are launched within virtual machines (e.g., VMs 534, 535 for tenants 532, 533) specific to respective tenants (aside the execution of virtualized network functions). This arrangement is further adapted for use in system arrangement 540, which provides containers 542, 543, or execution of the various functions, applications, and functions on compute nodes 544, as coordinated by a container-based orchestration system 541.
The system arrangements depicted in FIG. 5 provide an architecture that treats VMs, containers, and functions equally in terms of application composition (and resulting applications are combinations of these three ingredients). Each ingredient may involve use of one or more accelerator (FPGA, ASIC) components as a local backend. In this manner, applications can be split across multiple edge owners, coordinated by an orchestrator.
In the context of FIG. 5, the pod controller/container manager, container orchestrator, and individual nodes may provide a security enforcement point. However, tenant isolation may be orchestrated where the resources allocated to a tenant are distinct from resources allocated to a second tenant, but edge owners cooperate to ensure resource allocations are not shared across tenant boundaries. Or, resource allocations could be isolated across tenant boundaries, as tenants could allow "use" via a subscription or transaction/contract basis. In these contexts, virtualization, containerization, enclaves, and hardware partitioning schemes may be used by edge owners to enforce tenancy. Other isolation environments may include: bare metal (dedicated) equipment, virtual machines, containers, virtual machines on containers, or combinations thereof.
In further examples, aspects of software-defined or controlled silicon hardware, and other configurable hardware, may integrate with the applications, functions, and services of an edge computing system. Software-defined silicon may be used to ensure the ability for some resource or hardware ingredient to fulfill a contract or service level agreement, based on the ingredient's ability to remediate a portion of itself or the workload (e.g., by an upgrade, reconfiguration, or provision of new features within the hardware configuration itself).
It should be appreciated that the edge computing systems and arrangements discussed herein may be applicable in various solutions, services, and/or use cases involving mobility. As an example, FIG. 6 shows an example simplified vehicle compute and communication use case involving mobile access to applications in an example edge computing system 600 that implements an edge cloud such as the edge cloud 110 of FIG. 1.
In this use case, respective client compute nodes 610 may be embodied as in-vehicle compute systems (e.g., in-vehicle navigation and/or infotainment systems) located in corresponding vehicles which communicate with example edge gateway nodes 620 during traversal of a roadway. For example, the edge gateway nodes 620 may be located in a roadside cabinet or other enclosure built into a structure having other, separate, mechanical utility, which may be placed along the roadway, at intersections of the roadway, or other locations near the roadway. As respective vehicles traverse along the roadway, the connection between its client compute node 610 and a particular one of the edge gateway nodes 620 may propagate so as to maintain a consistent connection and context for the example client compute node 610. Likewise, mobile edge nodes may aggregate at the high priority services or according to the throughput or latency resolution requirements for the underlying service(s) (e.g., in the case of drones). The respective edge gateway devices 620 include an amount of processing and storage capabilities and, as such, some processing and/or storage of data for the client compute nodes 610 may be performed on one or more of the edge gateway nodes 620.
The edge gateway nodes 620 may communicate with one or more edge resource nodes 640, which are illustratively embodied as compute servers, appliances, or components located at or in a communication base station 642 (e.g., a base station of a cellular network). As discussed above, the respective edge resource node(s) 640 include an amount of processing and storage capabilities and, as such, some processing and/or storage of data for the client compute nodes 610 may be performed on the edge resource node(s) 640. For example, the processing of data that is less urgent or important may be performed by the edge resource node(s) 640, while the processing of data that is of a higher urgency or importance may be performed by the edge gateway devices 620 (depending on, for example, the capabilities of each component, or information in the request indicating urgency or importance). Based on data access, data location, or latency, work may continue on edge resource nodes when the processing priorities change during the processing activity. Likewise, configurable systems or hardware resources themselves can be activated (e.g., through a local orchestrator) to provide additional resources to meet the new demand (e.g., adapt the compute resources to the workload data).
The edge resource node(s) 640 also communicate with the core data center 650, which may include compute servers, appliances, and/or other components located in a central location (e.g., a central office of a cellular communication network). The example core data center 650 may provide a gateway to the global network cloud 660 (e.g., the Internet) for the edge cloud 110 operations formed by the edge resource node(s) 640 and the edge gateway devices 620. Additionally, in some examples, the core data center 650 may include an amount of processing and storage capabilities and, as such, some processing and/or storage of data for the client compute devices may be performed on the core data center 650 (e.g., processing of low urgency or importance, or high complexity).
The edge gateway nodes 620 or the edge resource node(s) 640 may offer the use of stateful applications 632 and a geographically distributed database 634.
Although the applications 632 and database 634 are illustrated as being horizontally distributed at a layer of the edge cloud 110, it will be understood that resources, services, or other components of the application may be vertically distributed throughout the edge cloud (including part of the application executed at the client compute node 610, other parts at the edge gateway nodes 620 or the edge resource node(s) 640, etc.). Additionally, as stated previously, there can be peer relationships at any level to meet service objectives and obligations. Further, the data for a specific client or application can move from edge to edge based on changing conditions (e.g., based on acceleration resource availability, following the car movement, etc.). For example, based on the "rate of decay" of access, prediction can be made to identify the next owner to continue, or when the data or computational access will no longer be viable. These and other services may be utilized to complete the work that is needed to keep the transaction compliant and lossless.
In further scenarios, a container 636 (or pod of containers) may be flexibly migrated from one of the edge nodes 620 to other edge nodes (e.g., another one of the edge nodes 620, one of the edge resource node(s) 640, etc.) such that the container with an application and workload does not need to be reconstituted, re-compiled, or re-interpreted in order for the migration to work. However, in such settings, there may be some remedial or "swizzling" translation operations applied. For example, the physical hardware at the edge resource node(s) 640 may differ from the hardware at the edge gateway nodes 620 and, therefore, the hardware abstraction layer (HAL) that makes up the bottom edge of the container will be re-mapped to the physical layer of the target edge node. This may involve some form of late-binding technique, such as binary translation of the HAL from the container-native format to the physical hardware format, or may involve mapping interfaces and operations. A pod controller may be used to drive the interface mapping as part of the container lifecycle, which includes migration to/from different hardware environments.
The scenarios encompassed by FIG. 6 may utilize various types of mobile edge nodes, such as an edge node hosted in a vehicle (car/truck/tram/train) or other mobile unit, as the edge node will move to other geographic locations along with the platform hosting it. With vehicle-to-vehicle communications, individual vehicles may even act as network edge nodes for other cars (e.g., to perform caching, reporting, data aggregation, etc.). Thus, it will be understood that the application components provided in various edge nodes may be distributed in static or mobile settings, including coordination between some functions or operations at individual endpoint devices or the edge gateway nodes 620, some others at the edge resource node(s) 640, and others in the core data center 650 or global network cloud 660.
In further configurations, the edge computing system may implement FaaS computing capabilities through the use of respective executable applications and functions. In an example, a developer writes function code (e.g., "computer code" herein) representing one or more computer functions, and the function code is uploaded to a FaaS platform provided by, for example, an edge node or data center.
A trigger, such as, for example, a service use case or an edge processing event, initiates the execution of the function code with the FaaS platform.
In an example of FaaS, a container is used to provide an environment in which function code (e.g., an application which may be provided by a third party) is executed. The container may be any isolated-execution entity such as a process, a Docker or Kubernetes container, a virtual machine, etc. Within the edge computing system, various datacenter, edge, and endpoint (including mobile) devices are used to "spin up" functions (e.g., activate and/or allocate function actions) that are scaled on demand. The function code gets executed on the physical infrastructure (e.g., edge computing node) device and underlying virtualized containers. Finally, the container is "spun down" (e.g., deactivated and/or deallocated) on the infrastructure in response to the execution being completed.
Further aspects of FaaS may enable deployment of edge functions in a service fashion, including support for respective functions that enable edge computing as a service (Edge-as-a-Service or "EaaS"). Additional features of FaaS may include: a granular billing component that enables customers (e.g., computer code developers) to pay only when their code gets executed; common data storage to store data for reuse by one or more functions; orchestration and management among individual functions; function execution management, parallelism, and consolidation; management of container and function memory spaces; coordination of acceleration resources available for functions; and distribution of functions between containers (including "warm" containers, already deployed or operating, versus "cold" containers which require initialization, deployment, or configuration).
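To make the spin-up/execute/spin-down lifecycle above concrete, the following is a minimal Python sketch. It is illustrative only: the ContainerRuntime class and handle_trigger function are hypothetical stand-ins for whatever isolated-execution entity and trigger plumbing a given FaaS platform provides, not part of the disclosed system.
# Minimal, illustrative sketch of the FaaS lifecycle described above.
# ContainerRuntime and handle_trigger are hypothetical placeholders for a
# real isolated-execution entity (process, Docker/Kubernetes container, VM).
class ContainerRuntime:
    """Stand-in for an isolated-execution environment."""
    def __init__(self, function_code):
        self.function_code = function_code
        self.active = False
    def spin_up(self):
        # Activate and/or allocate resources for the function.
        self.active = True
    def execute(self, event):
        # Run the uploaded function code against the triggering event.
        return self.function_code(event)
    def spin_down(self):
        # Deactivate and/or deallocate once execution completes.
        self.active = False
def handle_trigger(function_code, event):
    """A trigger (service use case, edge processing event) drives the cycle."""
    container = ContainerRuntime(function_code)
    container.spin_up()
    try:
        return container.execute(event)
    finally:
        container.spin_down()  # "cold" again until the next trigger
# Example: a developer-supplied function executed on demand.
result = handle_trigger(lambda event: event["value"] * 2, {"value": 21})
print(result)  # 42
The try/finally structure mirrors the point that the container is deallocated in response to execution completing, whether or not the function code succeeds.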
The edge computing system 600 can include or be in communication with an edge provisioning node 644. The edge provisioning node 644 can distribute software, such as the example computer readable instructions 1482 of FIG. 14, to various receiving parties for implementing any of the methods described herein. The example edge provisioning node 644 may be implemented by any computer server, home server, content delivery network, virtual server, software distribution system, central facility, storage device, storage node, data facility, cloud service, etc., capable of storing and/or transmitting software instructions (e.g., code, scripts, executable binaries, containers, packages, compressed files, and/or derivatives thereof) to other computing devices. Component(s) of the example edge provisioning node 644 may be located in a cloud, in a local area network, in an edge network, in a wide area network, on the Internet, and/or in any other location communicatively coupled with the receiving party(ies). The receiving parties may be customers, clients, associates, users, etc. of the entity owning and/or operating the edge provisioning node 644. For example, the entity that owns and/or operates the edge provisioning node 644 may be a developer, a seller, and/or a licensor (or a customer and/or consumer thereof) of software instructions such as the example computer readable instructions 1482 of FIG. 14. The receiving parties may be consumers, service providers, users, retailers, OEMs, etc., who purchase and/or license the software instructions for use and/or re-sale and/or sub-licensing.
In an example, the edge provisioning node 644 includes one or more servers and one or more storage devices. The storage devices host computer readable instructions such as the example computer readable instructions 1482 of FIG. 14, as described below. Similar to the edge gateway devices 620 described above, the one or more servers of the edge provisioning node 644 are in communication with a base station 642 or other network communication entity. In some examples, the one or more servers are responsive to requests to transmit the software instructions to a requesting party as part of a commercial transaction. Payment for the delivery, sale, and/or license of the software instructions may be handled by the one or more servers of the software distribution platform and/or via a third party payment entity. The servers enable purchasers and/or licensors to download the computer readable instructions 1482 from the edge provisioning node 644. For example, the software instructions, which may correspond to the example computer readable instructions 1482 of FIG. 14, may be downloaded to the example processor platform(s), which is/are to execute the computer readable instructions 1482 to implement the methods described herein.
In some examples, the processor platform(s) that execute the computer readable instructions 1482 can be physically located in different geographic locations, legal jurisdictions, etc. In some examples, one or more servers of the edge provisioning node 644 periodically offer, transmit, and/or force updates to the software instructions (e.g., the example computer readable instructions 1482 of FIG. 14) to ensure improvements, patches, updates, etc. are distributed and applied to the software instructions implemented at the end user devices. In some examples, different components of the computer readable instructions 1482 can be distributed from different sources and/or to different processor platforms; for example, different libraries, plug-ins, components, and other types of compute modules, whether compiled or interpreted, can be distributed from different sources and/or to different processor platforms. For example, a portion of the software instructions (e.g., a script that is not, in itself, executable) may be distributed from a first source while an interpreter (capable of executing the script) may be distributed from a second source.
In further examples, any of the compute nodes or devices discussed with reference to the present edge computing systems and environment may be fulfilled based on the components depicted in FIGS. 14A and 14. Respective edge compute nodes may be embodied as a type of device, appliance, computer, or other "thing" capable of communicating with other edge, networking, or endpoint components. For example, an edge compute device may be embodied as a personal computer, server, smartphone, a mobile compute device, a smart appliance, an in-vehicle compute system (e.g., a navigation system), a self-contained device having an outer case, shell, etc., or other device or system capable of performing the described functions.
FIG. 7 is a block diagram of an example environment 700 to facilitate edge-to-edge and edge-to-cloud management of a cloud native service with co-active infrastructure and service proxying in accordance with examples disclosed herein.
The example environment 700 includes example platforms 702a-702d, example edge appliances 704a-704d, example services 706, 708 (e.g., service instances, containers, etc.), example IPUs 710a-710d, an example sidecar service 712, example IPU sidecar circuitry 714, an example IPU capacity manager 716, example infrastructure 718, and example infrastructure sidecar circuitry 720. Although the example of FIG. 7 corresponds to a cloud-based network, examples disclosed herein can be applied to any type of computing environment (e.g., virtual machines, racks of servers, etc.). In some examples, the infrastructure can be implemented by the cloud/data center 360, 440 of FIGS. 3 and/or 4 and/or the global network cloud 660 of FIG. 6. Additionally, the example edge appliances 704a-704d may correspond to the example local processing hub 150 and/or the example edge nodes 340, 422, 424, 620, 644 of FIGS. 3, 4, and/or 6.
The example platforms 702a-702d of FIG. 7 represent different platforms (e.g., deployment locations) that may be implemented in the example environment 700 in communication with the example infrastructure 718. The example platform 702a is an internet of things (IoT) and on-premise edge platform, the example platform 702b is a network edge platform, the example platform 702c is a core data center (DC) platform, and the example platform 702d is a public cloud platform. In some examples, different and/or additional platforms may be included in or removed from the environment 700. Each platform 702a-702d includes at least one edge appliance 704a-704d and at least one IPU 710a-710d. Although the example of FIG. 7 illustrates each platform 702a-702d including one edge appliance 704a-704d, any number of edge appliances may be included in each platform.
The example edge appliances 704a-704d of FIG. 7 are computing devices, nodes, endpoints, etc. that execute the example services 706, 708 (e.g., functions of a container). For example, the edge appliances 704a-704d may be edge devices, IoT devices, fog devices, virtual machines, servers, and/or any other type of computing device that is capable of executing instructions (e.g., service instances, containers, etc.). In some examples, the edge appliances 704a-704d transmit remote service invocations (e.g., service requests) to have the service 706, 708, or part of the service 706, 708, executed by a different device within the same platform (e.g., platform 702a), at a different platform (e.g., platforms 702b-d, if the request is coming from platform 702a), or at the infrastructure 718. The edge appliances 704a-704d obtain and/or transmit service requests via the corresponding IPU 710a-710d. Each service instance 706, 708 implemented and/or requested by the example appliances 704a-704d is registered to the example infrastructure 718. In this manner, a corresponding sidecar service proxy 712 on the IPU 710a can provide real-time telemetry data of the capacity of the instance to absorb new requests.
The example IPUs 710a-710d of FIG. 7 offload sidecar processing from the CPUs of the edge appliances 704a-704d to the respective IPU 710a-710d. For example, the IPUs 710a-710d integrate with accelerators, implement root of trust and chain of trust capabilities, assist with telemetry processing, secure communications, host a service mesh, add programmable acceleration in hardware or embedded software, enable high performance streaming between devices, etc.
As described above, the IPUs 710a-710d may include the example sidecar service proxy 712 that corresponds to the service 706 to provide real-time telemetry data of the capacity of the service 706 to absorb new requests. Additionally, the example IPUs 710a-710d include the example IPU sidecar circuitry 714. Each IPU 710a-710d obtains telemetry data from all edge appliances and/or service instances within the corresponding platform.
The example IPU sidecar circuitry 714 of FIG. 7 is a hardware component, firmware component, and/or an embedded software component, such as a container. The IPU sidecar circuitry 714 interacts with the infrastructure sidecar circuitry 720 to inject a low-level step into the normal peer-to-peer service invocations. The step provides the infrastructure 718 with information for making a just-in-time determination of whether or not to intervene and insert a load-balancing action, an auto-scaling action, a circuit breaking action, or other actions that are based on real-time assessments that are made at the infrastructure sidecar circuitry 720. The example IPU sidecar circuitry 714 transmits capacity data, response time data, and/or telemetry data to the example infrastructure sidecar circuitry 720 periodically, aperiodically, or based on a trigger. Additionally, the example IPU sidecar circuitry 714 transmits intercepted service requests to the example infrastructure sidecar circuitry 720 so that the example infrastructure sidecar circuitry 720 can make load balancing decisions, scaling decisions, etc., based on the up-to-date (e.g., live) telemetry data corresponding to the infrastructure 718 and the edge appliances 704a-704d across the platforms 702a-702d to ensure that the services are being executed according to service objectives. Additionally, the example IPU sidecar circuitry 714 transmits data corresponding to the service and/or instance properties at different nodes (e.g., edge appliances) of the corresponding platform (e.g., to discover and/or register the service and/or instance). The IPU sidecar circuitry 714 is further described below in conjunction with FIG. 8.
The example IPU capacity manager 716 of FIG. 7 provides a set of resources to a container orchestration engine (CoE) (e.g., a container orchestrator). In this manner, the container orchestration engine can determine whether the set of resources is suitable to host sidecar circuitry. The example IPU capacity manager 716 includes attributes such as an optimized sidecar target label, indicative compute and memory resources of the IPU, etc. In some examples, the indicative compute and memory resources may appear to the CoE as "regular" CPU compute and memory resources, but could be underpinned differently in an IPU implementation. In this manner, the IPU capacity manager 716 provides a layered view of the IPU capacity in a manner native to the CoE. The combination of the indicative compute and memory resources with a label identifier indicates that the combination of compute and memory is a desired target for the CoE to select this node and to provision one or more sidecar container(s) into the IPU. To enable the deployment of a sidecar via a sidecar deployment pattern, the IPU capacity manager 716 may indicate to the CoE the capacity and capability data that the CoE needs to select the node for provisioning and to make sure that the sidecar is deployed on the IPU 710a.
In some examples, the IPU capacity manager 716 supports capacity isolation when multiple service meshes are to be co-located.
The example infrastructure 718 of FIG. 7 is hardware that includes nodes (e.g., individual machines including heterogeneous devices, switches, entities, etc.) and communication networks between the nodes. The infrastructure 718 communicates with the IPUs 710a-710d across the platforms 702a-702d (e.g., to forward service requests and/or obtain data). Additionally, the infrastructure 718 may execute one or more service requests locally using the nodes of the infrastructure 718. Accordingly, the infrastructure 718 may transmit infrastructure telemetry data corresponding to the execution of a service to the example infrastructure sidecar circuitry 720.
The example infrastructure sidecar circuitry 720 of FIG. 7 includes logic that acts as a global sidecar proxy that obtains information from the IPUs 710a-710d across the platforms 702a-702d corresponding to the capability and/or capacity of the edge appliances 704a-704d. The example infrastructure sidecar circuitry 720 obtains service-to-service invocations (e.g., active and/or established service requests from one node to another node) that have been intercepted at the example IPU sidecar circuitry 714 and processes the invocations to determine whether to load balance, auto-scale, or do nothing (e.g., keep the service execution at the intended endpoint) based on the real-time telemetry data from all the nodes in the environment 700. The real-time telemetry data includes utilization and response time metrics from the infrastructure telemetry indicative of the processing capacity, response time, and network congestion along the route to one or more active instances (e.g., service containers implemented in edge appliances) of the invoked service. The example infrastructure sidecar circuitry 720 may determine whether to implement a new instance, whether to redirect a determination to an application-provided sidecar broker, whether to auto-scale, whether to load balance, etc., based on the telemetry data. The example infrastructure sidecar circuitry 720 is further described below in conjunction with FIG. 9.
FIG. 8 is a block diagram of an example implementation of the IPU sidecar circuitry 714 of FIG. 7. The example IPU sidecar circuitry 714 includes example interface(s) 800, an example telemetry processing controller 802, and example intercept circuitry 804. The example of FIG. 8 is described in conjunction with the example IPU 710a. However, the example of FIG. 8 may be used in conjunction with any of the IPUs 710a-710d.
The example interface(s) 800 facilitate interaction with the infrastructure 718 and with other services that the hosted sidecar circuitry 712 requires. The example interface(s) 800 may include a first interface that allows application or management middleware to register services running in the local edge appliance 704a. When the edge appliance 704a capable of performing the service 706 enters (e.g., is deployed in, comes on line, is implemented in, etc.) the platform 702a, the first interface of the interface(s) 800 transmits an identification (e.g., a globally unique identifier) of the service 706 that the edge appliance 704a is capable of executing. Additionally, the first interface of the interface(s) 800 transmits capacity and response time data corresponding to the service 706 to the example infrastructure 718. The capacity and response time data may include how many requests the service 706 can handle locally, and at what response time. The response time may include average time, maximum time, P99 or P95 levels, and/or may be a vector of values for different fractions of the maximum capacity.
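As one way to picture this registration record, the following Python sketch models the capacity and response time data under stated assumptions: the ServiceRegistration class and its field names are hypothetical illustrations chosen for clarity, not a wire format defined by this disclosure.
# Illustrative model of a service registration record as described above.
# Field names are assumptions for illustration, not a defined wire format.
from dataclasses import dataclass, field
from typing import Dict
@dataclass
class ServiceRegistration:
    service_id: str        # globally unique identifier of the service
    max_requests: int      # how many requests can be handled locally
    avg_response_ms: float  # average response time
    max_response_ms: float  # maximum response time
    p99_response_ms: float  # P99 level (P95 could be modeled similarly)
    # Response time as a vector of values for different fractions of
    # the maximum capacity (e.g., at 25%, 50%, 75%, 100% load).
    response_by_load: Dict[float, float] = field(default_factory=dict)
reg = ServiceRegistration(
    service_id="svc-706",
    max_requests=500,
    avg_response_ms=12.0,
    max_response_ms=80.0,
    p99_response_ms=45.0,
    response_by_load={0.25: 10.0, 0.5: 14.0, 0.75: 25.0, 1.0: 45.0},
)
print(reg.service_id, reg.response_by_load[0.5])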
The second interface of the interface(s) 800 of FIG. 8 allows the edge appliance 704a and its components to periodically provide and/or register service telemetry to the example infrastructure 718. The second interface of the interface(s) 800 transmits or causes transmission of (e.g., by instructing one or more components of the IPU 710a) the identification (e.g., the globally unique identifier) of the service 706 that the edge appliance 704a is capable of executing. Additionally, the second interface of the interface(s) 800 may obtain the application data in a normalized metrics form and transmit or cause transmission of the normalized metrics form to the infrastructure sidecar circuitry 720. The application telemetry may be obtained from the edge appliance 704a (and/or other nodes in the platform 702a) and/or another component that monitors the performance of the edge appliance 704a. The normalized metrics form may be a form from which the infrastructure sidecar circuitry 720 can assess how close to the maximum capacity the service 706 is, and how the response time is trending. In this manner, the example infrastructure sidecar circuitry 720 can determine whether to continue sending invocations to the edge appliance 704a, whether to create a new instance (e.g., service 706) to execute part of the workload, whether to load balance with another edge appliance (e.g., local or peer), and/or whether to scale back the instance (e.g., service 706). The service telemetry may include a current (or moving window) arrival rate, estimated head room for accepting new arrivals, load on the current edge appliance 704a, estimated response time for new arrivals, etc. In some examples, the first interface and the second interface of the interface(s) 800 may be combined into one or multiple interfaces.
The example telemetry processing controller 802 of FIG. 8 may perform preprocessing of the telemetry data before sending it to the infrastructure 718. For example, the telemetry processing controller 802 may obtain the telemetry data from application software (e.g., the edge appliance 704a and/or other edge appliances in the platform 702a) and put the telemetry data into the normalized metrics form. Additionally, the telemetry processing controller 802 may analyze the telemetry data to determine if there is any significant change (e.g., based on one or more thresholds) in the telemetry data. For example, if the telemetry data from the edge appliances in the platform 702a has not changed for a duration of time, the telemetry processing controller 802 may adjust (e.g., decrease) the frequency of sending the telemetry data to the infrastructure (e.g., to conserve resources and bandwidth). If the telemetry data from the edge appliances in the platform 702a has changed or is rapidly changing, the telemetry processing controller 802 may adjust (e.g., increase) the frequency of sending the telemetry data to the infrastructure (e.g., to send more live updates corresponding to the changes).
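The threshold-driven frequency adjustment just described can be sketched as follows. This is a simplified illustration: the TelemetryReporter class, its thresholds, and its halving/doubling policy are assumptions chosen for clarity, not a prescribed implementation of the telemetry processing controller 802.
# Illustrative sketch of threshold-based reporting-frequency adjustment.
# Thresholds and intervals are arbitrary example values.
class TelemetryReporter:
    def __init__(self, change_threshold=0.10, min_s=1.0, max_s=60.0):
        self.change_threshold = change_threshold  # fraction deemed "significant"
        self.min_interval_s = min_s
        self.max_interval_s = max_s
        self.interval_s = 10.0
        self.previous = None  # historical data for comparison
    def observe(self, metric):
        """Compare current telemetry against historical data and adjust
        the frequency at which reports are sent to the infrastructure."""
        if self.previous is not None and self.previous != 0:
            change = abs(metric - self.previous) / abs(self.previous)
            if change > self.change_threshold:
                # Rapidly changing: report more often (more live updates).
                self.interval_s = max(self.min_interval_s, self.interval_s / 2)
            else:
                # Stable: report less often to conserve resources/bandwidth.
                self.interval_s = min(self.max_interval_s, self.interval_s * 2)
        self.previous = metric  # store current data for future comparisons
        return self.interval_s
reporter = TelemetryReporter()
for load in (0.50, 0.51, 0.90, 0.91):
    print(f"load={load:.2f} -> next report in {reporter.observe(load):.1f}s")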
Additionally or alternatively, the telemetry processing controller 802 may trigger a transmission of telemetry data when the telemetry data is changing and enter into a transmission sleep mode (e.g., to not transmit, or only transmit after a duration of time, when the telemetry data has not changed). In such examples, if the telemetry processing controller 802 identifies more than a threshold amount of change in the telemetry data, the telemetry processing controller 802 exits the sleep mode to continue to send telemetry data more regularly.
The example intercept circuitry 804 of FIG. 8 intercepts locally originated requests targeting remote services (e.g., proxied through their corresponding sidecar circuitry). The example intercept circuitry 804 forwards the intercepted request to the example infrastructure sidecar circuitry 720 for a potential "last-inch" scheduling decision based on up-to-date telemetry data. For example, because the telemetry data from all the services is sent to the infrastructure sidecar circuitry 720, the infrastructure sidecar circuitry 720 can determine if load balancing, auto-scaling, etc. is needed to ensure that service metrics are being met and/or that execution of the service is optimized and/or otherwise improved based on the capacity and capability of all devices across the environment 700.
In some examples, the IPU sidecar circuitry 714 includes means for receiving, transmitting, and/or causing transmission of data, means for intercepting service requests, means for processing telemetry data, etc. For example, the means for receiving, transmitting, and/or causing transmission of data may be implemented by the interface(s) 800, the means for processing telemetry data may be implemented by the telemetry processing controller 802, and the means for intercepting service requests may be implemented by the intercept circuitry 804. In some examples, the IPU sidecar circuitry 714 may be implemented by machine executable instructions such as those implemented by at least blocks 1002-1022 of FIG. 10 executed by processor circuitry, which may be implemented by the example processor circuitry 1352 of FIG. 13, the example processor circuitry 1500 of FIG. 15, and/or the example Field Programmable Gate Array (FPGA) circuitry 1600 of FIG. 16. In other examples, the interface(s) 800, the telemetry processing controller 802, and/or the intercept circuitry 804 is/are implemented by other hardware logic circuitry, hardware implemented state machines, and/or any other combination of hardware, software, and/or firmware. For example, the interface(s) 800, the telemetry processing controller 802, and/or the intercept circuitry 804 may be implemented by at least one or more hardware circuits (e.g., processor circuitry, discrete and/or integrated analog and/or digital circuitry, an FPGA, an Application Specific Integrated Circuit (ASIC), a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware, but other structures are likewise appropriate.
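As a simplified illustration of the interception flow of FIG. 8, the following Python sketch forwards a locally originated request to an infrastructure-side decision point for a "last-inch" scheduling decision. The classes and the utilization-based decide() policy are hypothetical assumptions; a real deployment would apply the richer telemetry and objectives described herein.
# Illustrative sketch: a locally originated request targeting a remote
# service is intercepted and forwarded for a telemetry-driven decision.
from dataclasses import dataclass
@dataclass
class ServiceRequest:
    service_id: str
    intended_endpoint: str
class InfrastructureSidecar:
    """Stand-in for infrastructure sidecar circuitry 720."""
    def __init__(self, telemetry):
        self.telemetry = telemetry  # up-to-date utilization per endpoint
    def decide(self, request):
        # Keep the intended endpoint unless its utilization is too high,
        # in which case load-balance to the least-utilized endpoint.
        if self.telemetry.get(request.intended_endpoint, 0.0) < 0.8:
            return request.intended_endpoint
        return min(self.telemetry, key=self.telemetry.get)
def intercept_and_forward(request, infra):
    """Stand-in for intercept circuitry 804."""
    return infra.decide(request)  # just-in-time scheduling decision
infra = InfrastructureSidecar({"edge-704a": 0.95, "edge-704b": 0.30})
req = ServiceRequest(service_id="svc-706", intended_endpoint="edge-704a")
print(intercept_and_forward(req, infra))  # edge-704b (load balanced)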
FIG. 9 is a block diagram of an example implementation of the infrastructure sidecar circuitry 720 of FIG. 7. The example infrastructure sidecar circuitry 720 includes example interface(s) 900, example target endpoint identification circuitry 902, an example comparator 904, an example filter 906, an example brokering controller 908, example telemetry monitor circuitry 910, and example topology mapping circuitry 912.
The example interface(s) 900 of FIG. 9 obtain(s) data from the IPU sidecar circuitry 714 across the platforms 702a-702d. For example, the interface(s) 900 may include a single interface or multiple interfaces for the multiple IPUs 710a-710d (e.g., to register a local service that is apprised by the interface(s) 800 of the IPU sidecar circuitry 714 as available at the edge appliances 704a-704d or node hosting the services 706, 708). The interface(s) 900 obtain(s) the service identifiers, capacity, response time, Internet Protocol (IP) address or other locator for the edge appliances 704a-704d, and/or telemetry data from the IPUs 710a-710d across the platforms 702a-702d.
The example target endpoint identification circuitry 902 of FIG. 9 determines the target endpoint (e.g., appliance, device, etc.) for a service request from the sidecar service 712 (e.g., for a new service request or service flow) or an intercepted service request from the IPU sidecar circuitry 714 (e.g., a service request for which a target endpoint was previously developed). The example target endpoint identification circuitry 902 determines a target endpoint for the transmission of a service request based on a topology mapping that includes a mapping of the capacity and capability of the services, edge appliances, devices, nodes, infrastructure devices, etc., in the environment 700. In this manner, the target endpoint identification circuitry 902 can identify a target endpoint to schedule the service request by developing new endpoints to service a new request, load balancing and/or scaling based on already established requests, etc. The example target endpoint identification circuitry 902 includes an example comparator 904 to compare telemetry data to service metrics to determine whether the service metrics are/can be met using target endpoints based on the corresponding telemetry data. Additionally, the example target endpoint identification circuitry 902 includes an example filter 906 to filter out endpoints that are not capable of servicing, or do not have capacity to service, an incoming or intercepted request. In this manner, the target endpoint identification circuitry 902 can select a target endpoint from a group of target endpoints that are capable of servicing a request while meeting the service metrics (e.g., latency, response time, efficiency, etc.). Additionally, the example target endpoint identification circuitry 902 acts as a failsafe to validate endpoint assignments from the brokering controller 908. For example, the target endpoint identification circuitry 902 checks whether the endpoint assignment is feasible under current conditions and/or whether the endpoint assignment will result in an error.
The example brokering controller 908 of FIG. 9 handles service requests that require non-default load balancing and/or other scheduling, for example, if a service owner wishes to apply more complex objectives (e.g., associated with cost, staying within a contract, and/or any other criteria that may override the default scheduling protocol) when scheduling a service request or a flow of service requests.
Accordingly, in such examples, an indication that the service corresponds to a non-default load balancing protocol may be identified (e.g., based on the global identifier, the service request itself, etc.) and the service request may be transmitted to the example brokering controller 908 to develop scheduling for the service request and/or a flow of service requests based on the non-default protocol. The brokering controller 908 obtains the topology mapping, including the corresponding telemetry data, from the telemetry monitor circuitry 910 to schedule the service request(s) to conform to the non-default protocol.
The example telemetry monitor circuitry 910 of FIG. 9 monitors the telemetry data for devices across the environment 700 based on obtained telemetry data. The example telemetry monitor circuitry 910 transmits or causes transmission of updates to the telemetry data to the example topology mapping circuitry 912 to include the most up-to-date telemetry data for devices in a topology map. In this manner, the target endpoint identification circuitry 902 and/or the brokering controller 908 can make scheduling decisions based on the topology of devices and their corresponding capability and capacity. In some examples, the telemetry monitor circuitry 910 obtains network telemetry data (e.g., from one or more devices in the environment 700) corresponding to telemetry data related to communications within the environment 700. For example, the telemetry monitor circuitry 910 may obtain latency and/or bandwidth information corresponding to communications between any two or more devices within the environment 700.
The example topology mapping circuitry 912 generates and maintains a topology map that identifies all services, devices, service instances, endpoints, etc. within the environment 700. For example, using the example of FIG. 7, the topology mapping circuitry 912 maps the edge appliance 704a to the service 706 and the platform 702a/IPU 710a, the edge appliance 704b to the service 706 and the platform 702b/IPU 710b, etc. Additionally, the topology mapping circuitry 912 may map devices and/or services being executed at the infrastructure 718. Additionally, the topology mapping circuitry 912 includes the current telemetry data corresponding to each of the endpoint devices and/or services. The topology mapping circuitry 912 updates the topology map as the topology changes (e.g., generation of new services or endpoints, decommission of old/unused services/devices, change in load balancing/where a service is executed, etc.).
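As one way to picture the topology map maintained by the topology mapping circuitry 912, consider the following Python sketch. The TopologyMap structure and its record fields are assumptions for illustration only, not a defined data format.
# Illustrative sketch of a topology map: service instances keyed by
# identifier, with location and the most recent telemetry.
class TopologyMap:
    def __init__(self):
        self.entries = {}  # service instance id -> record
    def register(self, instance_id, service_id, platform, capacity, response_ms):
        """Add a newly created service instance to the map."""
        self.entries[instance_id] = {
            "service_id": service_id,
            "platform": platform,
            "capacity": capacity,
            "response_ms": response_ms,
            "telemetry": {},
        }
    def update_telemetry(self, instance_id, telemetry):
        """Keep the map current so scheduling uses up-to-date data."""
        self.entries[instance_id]["telemetry"].update(telemetry)
    def decommission(self, instance_id):
        """Remove old/unused services as the topology changes."""
        self.entries.pop(instance_id, None)
topo = TopologyMap()
topo.register("inst-1", "svc-706", "platform-702a", capacity=500, response_ms=12.0)
topo.update_telemetry("inst-1", {"utilization": 0.4, "arrival_rate": 120})
print(topo.entries["inst-1"]["telemetry"])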
In some examples, the infrastructure sidecar circuitry 720 includes means for transmitting and/or receiving data (e.g., intercepted service requests, telemetry data, etc.), means for identifying a target endpoint, means for handling service requests that require non-default load balancing and/or other scheduling, means for monitoring telemetry data, and means for mapping a topology. For example, the means for transmitting and/or receiving data may be implemented by the interface(s) 900, the means for identifying a target endpoint may be implemented by the target endpoint identification circuitry 902, the means for handling service requests that require non-default load balancing and/or other scheduling may be implemented by the brokering controller 908, the means for monitoring telemetry data may be implemented by the telemetry monitor circuitry 910, and the means for mapping a topology may be implemented by the topology mapping circuitry 912. In some examples, the infrastructure sidecar circuitry 720 may be implemented by machine executable instructions such as those implemented by at least blocks 1102-1150 of FIGS. 11A-11C executed by processor circuitry, which may be implemented by the example processor circuitry 1452 of FIG. 14, the example processor circuitry 1500 of FIG. 15, and/or the example Field Programmable Gate Array (FPGA) circuitry 1600 of FIG. 16. In other examples, the interface(s) 900, the example target endpoint identification circuitry 902, the example brokering controller 908, the example telemetry monitor circuitry 910, and/or the example topology mapping circuitry 912 is/are implemented by other hardware logic circuitry, hardware implemented state machines, and/or any other combination of hardware, software, and/or firmware. For example, the interface(s) 900, the example target endpoint identification circuitry 902, the example brokering controller 908, the example telemetry monitor circuitry 910, and/or the example topology mapping circuitry 912 may be implemented by at least one or more hardware circuits (e.g., processor circuitry, discrete and/or integrated analog and/or digital circuitry, an FPGA, an Application Specific Integrated Circuit (ASIC), a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware, but other structures are likewise appropriate.
While an example manner of implementing the example IPU sidecar circuitry 714 and/or the example infrastructure sidecar circuitry 720 of FIG. 7 is illustrated in FIGS. 8 and 9, one or more of the elements, processes and/or devices illustrated in FIGS. 8 and/or 9 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example interface(s) 800, the example telemetry processing controller 802, the example intercept circuitry 804, and/or, more generally, the example IPU sidecar circuitry 714 of FIG. 8 and/or the example interface(s) 900, the example target endpoint identification circuitry 902, the example comparator 904, the example filter 906, the example brokering controller 908, the example telemetry monitor circuitry 910, the example topology mapping circuitry 912, and/or, more generally, the example infrastructure sidecar circuitry 720 of FIG. 9 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example interface(s) 800, the example telemetry processing controller 802, the example intercept circuitry 804, and/or, more generally, the example IPU sidecar circuitry 714 of FIG. 8 and/or the example interface(s) 900, the example target endpoint identification circuitry 902, the example comparator 904, the example filter 906, the example brokering controller 908, the example telemetry monitor circuitry 910, the example topology mapping circuitry 912, and/or, more generally, the example infrastructure sidecar circuitry 720 of FIG. 9
could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example interface(s) 800, the example telemetry processing controller 802, the example intercept circuitry 804, and/or, more generally, the example IPU sidecar circuitry 714 of FIG. 8 and/or the example interface(s) 900, the example target endpoint identification circuitry 902, the example comparator 904, the example filter 906, the example brokering controller 908, the example telemetry monitor circuitry 910, the example topology mapping circuitry 912, and/or, more generally, the example infrastructure sidecar circuitry 720 of FIG. 9 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example IPU sidecar circuitry 714 and/or the example infrastructure sidecar circuitry 720 of FIGS. 8 and/or 9 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIGS. 8 and/or 9, and/or may include more than one of any or all of the illustrated elements, processes, and devices. As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the example IPU sidecar circuitry 714 and/or the example infrastructure sidecar circuitry 720 of FIGS. 8 and/or 9 are shown in FIGS. 10-11C. The machine readable instructions may be one or more executable programs or portion(s) of one or more executable programs for execution by a computer processor such as the processor 1352, 1452 shown in the example processor platform 1350, 1450 discussed below in connection with FIGS. 13 and/or 14. The programs may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 1352, 1452, but the entirety of the programs and/or parts thereof could alternatively be executed by a device other than the processor 1352, 1452 and/or embodied in firmware or dedicated hardware. Further, although the example program(s) is/are described with reference to the flowcharts illustrated in FIGS. 10-11C, many other methods of implementing the example IPU sidecar circuitry 714 and/or the example infrastructure sidecar circuitry 720 of FIGS. 8 and/or 9 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware.
The machine readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. Machine readable instructions as described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine executable instructions. For example, the machine readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc. in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine. For example, the machine readable instructions may be stored in multiple parts, which are individually compressed, encrypted, and stored on separate computing devices, wherein the parts, when decrypted, decompressed, and combined, form a set of executable instructions that implement a program such as that described herein.
In another example, the machine readable instructions may be stored in a state in which they may be read by a computer, but require addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc. in order to execute the instructions on a particular computing device or other device. In another example, the machine readable instructions may need to be configured (e.g., settings stored, data input, network addresses recorded, etc.) before the machine readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine readable instructions and/or corresponding program(s) are intended to encompass such machine readable instructions and/or program(s) regardless of the particular format or state of the machine readable instructions and/or program(s) when stored or otherwise at rest or in transit.
The machine readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine readable instructions may be represented using any of the following languages: C, C++, Java, C#, Perl, Python, JavaScript, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.
As mentioned above, the example programs of FIGS. 10-11C may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
"Including" and "comprising" (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open ended. The term "and/or" when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
As used herein, singular references (e.g., "a", "an", "first", "second", etc.) do not exclude a plurality. The term "a" or "an" entity, as used herein, refers to one or more of that entity. The terms "a" (or "an"), "one or more", and "at least one" can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
FIG. 10 illustrates a flowchart representative of example machine readable instructions 1000 that may be executed to implement the example IPU sidecar circuitry 714 (FIG. 7) to collect and transmit telemetry data to the example infrastructure 718 of FIG. 7. Although the flowchart of FIG. 10 is described in conjunction with the IPU sidecar circuitry 714 of FIG. 7, the example instructions may be used to implement other type(s) of sidecar circuitry(ies).
At block 1002, the example interface(s) 800 obtain(s) current capacity and response time data corresponding to the service identifier from application(s) executed on the edge appliance(s) (e.g., edge appliance 704a) and/or management middleware. In some examples, the interface(s) 800 only obtain(s) the capacity and response time data when a service and/or a device becomes available or if there is a change in the capacity and/or response time. At block 1004, the example interface(s) 800 obtain(s) current telemetry data corresponding to the edge appliances in the corresponding platform. In some examples, the interface(s) 800 obtain(s) the current telemetry data from a device that monitors the operation of the edge appliances in the corresponding platform.
At block 1006, the example telemetry processing controller 802 compares the current capacity, response time, and/or telemetry data for the edge appliances of the platform to corresponding historical data (e.g., one or more previous capacity, response time, and/or telemetry data) of the edge appliances of the platform. At block 1008, the example telemetry processing controller 802 determines if the comparison indicates more than a threshold amount of change between the current and the historical data.
If the example telemetry processing controller 802 determines that more than a threshold amount of change did not occur (block 1008: NO), the instructions continue to block 1012. If the example telemetry processing controller 802 determines that more than a threshold amount of change occurred (block 1008: YES), the example telemetry processing controller 802 adjusts the reporting frequency (e.g., the frequency at which the interface(s) 800 transmit(s) and/or cause(s) transmission of the telemetry data to the example infrastructure 718) (block 1010). For example, if the telemetry data corresponds to a lot of change, the transmission frequency may be increased to ensure the infrastructure has up-to-date telemetry data for making scheduling decisions. If the telemetry data corresponds to little or no change, the transmission frequency may be decreased to conserve resources. In some examples, the telemetry processing controller 802 may decide to only send telemetry data when the comparison results in more than a threshold amount of change between the current and the historical data.
At block 1012, the example telemetry processing controller 802 stores the current capacity, response time, and/or telemetry data for future comparisons (e.g., the current capacity, response time, and/or telemetry data becomes historical data to compare with updated current data at an upcoming point in time). The telemetry processing controller 802 may store the current data in memory (e.g., local memory, volatile memory, non-volatile memory, the example memory 1354 of FIG. 13, and/or the example storage 1358 of FIG. 13).
At block 1014, the example telemetry processing controller 802 determines whether to transmit the current capacity, response time, and/or telemetry data to the infrastructure 718. As described above, the telemetry processing controller 802 may determine whether to transmit the telemetry data or wait for a later point in time based on the comparison of the current data to historical data.
If the example telemetry processing controller 802 determines it is not time to transmit the current capacity, response time, and/or telemetry data (block 1014: NO), control continues to block 1018. If the example telemetry processing controller 802 determines that it is time to transmit the current capacity, response time, and/or telemetry data (block 1014: YES), the example interface(s) 800 transmit(s) and/or cause(s) transmission of the current capacity, response time, and/or telemetry data (e.g., including a corresponding service identifier) to the example infrastructure sidecar circuitry 720 of the infrastructure 718 (block 1016).
At block 1018, the example interface(s) 800 determine(s) whether a request for a remote service has been sent from one of the edge appliances in the same platform as the IPU sidecar circuitry 714. If the example interface(s) 800 determine(s) that a request for a remote service has not been sent (block 1018: NO), control continues to block 1022. If the example interface(s) 800 determine(s) that a request for a remote service has been sent (block 1018: YES), the example interface(s) 800 intercept(s) the request and forward(s) the intercepted request to the infrastructure sidecar circuitry 720 (block 1020).
At block 1022, the example telemetry processing controller 802 determines if additional current capacity, response time, and/or telemetry data has been obtained from the edge appliances in the platform. In some examples, the telemetry data corresponds to communications between two or more devices within the environment 700 (e.g., latency data, bandwidth, etc. between an originating device and candidate nodes). If the example telemetry processing controller 802 determines that no additional (e.g., updated) current capacity, response time, and/or telemetry data has been obtained (block 1022: NO), control returns to block 1018 until a new service request is intercepted or additional current capacity, response time, and/or telemetry data is obtained. If the example telemetry processing controller 802 determines that additional (e.g., updated) current capacity, response time, and/or telemetry data has been obtained (block 1022: YES), control returns to block 1006 to compare the updated current capacity, response time, and/or telemetry data to the historical data.
FIGS. 11A-11C illustrate a flowchart representative of example machine readable instructions 1100 that may be executed to implement the example infrastructure sidecar circuitry 720 (FIG. 7) to manage service quality metrics across edge platforms based on data from the edge platforms. Although the flowchart of FIGS. 11A-11C is described in conjunction with the infrastructure sidecar circuitry 720 in the example environment 700 of FIG. 7, the example instructions may be used to implement other type(s) of sidecar circuitry(ies) in any environment.
At block 1102, the topology mapping circuitry 912 determines if a new service instance was created in the environment 700.
For example, the topology mapping circuitry 912 may determine that a new service instance was created when the interface(s) 900 obtain(s) data from one or more of the IPUs 710a-710d across the platforms 702a-702d that indicates that a new service instance was created and/or implemented by an existing or a new edge appliance. If the example topology mapping circuitry 912 determines that a new service instance was not created (block 1102: NO), control continues to block 1108. If the example topology mapping circuitry 912 determines that a new service instance was created (block 1102: YES), the example topology mapping circuitry 912 identifies the service based on an identifier, from the obtained data, corresponding to the service instance (block 1104).
At block 1106, the example topology mapping circuitry 912 updates the service topology mapping based on the identifier, a location, and/or the capacity and/or response time of the new service instance. At block 1108, the example telemetry monitor circuitry 910 determines if new telemetry data has been obtained from any of the IPUs 710a-710d via the example interface(s) 900. As described above, the IPUs 710a-710d may transmit and/or cause transmission of updated telemetry data corresponding to one or more edge appliances operating in a corresponding platform. Additionally or alternatively, the telemetry data may be related to network information (e.g., bandwidth, latency, etc. related to communications between two or more devices in the environment 700). If the example telemetry monitor circuitry 910 determines that new telemetry data has not been obtained (block 1108: NO), control continues to block 1114. If the example telemetry monitor circuitry 910 determines that new telemetry data has been obtained (block 1108: YES), the example telemetry monitor circuitry 910 identifies the service instance corresponding to the updated telemetry data (e.g., based on an identifier included in the obtained telemetry data) (block 1110). At block 1112, the example topology mapping circuitry 912 updates the telemetry data and/or network telemetry data for the corresponding identifier in the service topology mapping. In this manner, the topology mapping, which includes a mapping of every service instance, has up-to-date telemetry data and/or network telemetry data for the infrastructure sidecar circuitry 720 to make live and/or last-minute scheduling decisions for services.
At block 1114, the example target endpoint identification circuitry 902 determines if a new service request or service flow has been obtained via the interface(s) 900. The service request or service flow may be a new request or an already established, intercepted request. If the example target endpoint identification circuitry 902 determines that a new service request or service flow has not been obtained (block 1114: NO), control returns to block 1102. If the example target endpoint identification circuitry 902 determines that a new service request or service flow has been obtained (block 1114: YES), the example target endpoint identification circuitry 902 determines if the new service request or service flow corresponds to non-default load-balancing (block 1116). As described above, a service owner may desire non-default load balancing for a service request or a service flow. Accordingly, the service request and/or service flow, and/or the service corresponding to the service request or service flow, may identify the non-default load balancing protocol. When the target endpoint identification circuitry 902 determines that the service request/flow corresponds to a non-default load balancing protocol, the target endpoint identification circuitry 902 passes the request over to the brokering controller 908.
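The routing decision of blocks 1114-1116 can be sketched as follows. This is a minimal illustration: the dictionary-based request representation and the handler names are hypothetical assumptions, not part of the disclosed flowchart.
# Illustrative sketch: requests tagged with a non-default load-balancing
# protocol are handed to the brokering controller; all others follow the
# default target-endpoint path.
def route_request(request, default_scheduler, brokering_controller):
    """Mirror blocks 1114-1116: choose the scheduling path for a request."""
    if request.get("non_default_protocol"):
        # Service owner applied more complex objectives (cost, contract, ...).
        return brokering_controller(request)
    return default_scheduler(request)
def default_scheduler(request):
    return f"default endpoint for {request['service_id']}"
def brokering_controller(request):
    return f"brokered schedule for {request['service_id']} under {request['non_default_protocol']}"
print(route_request({"service_id": "svc-706"}, default_scheduler, brokering_controller))
print(route_request(
    {"service_id": "svc-708", "non_default_protocol": "cost-aware"},
    default_scheduler, brokering_controller))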
At block 1114, the example target endpoint identification circuitry 902 determines if a new service request or service flow has been obtained via the interface(s) 900. The service request or service flow may be a new request or an already established intercepted request. If the example target endpoint identification circuitry 902 determines that a new service request or service flow has not been obtained (block 1114: NO), control returns to block 1102. If the example target endpoint identification circuitry 902 determines that a new service request or service flow has been obtained (block 1114: YES), the example target endpoint identification circuitry 902 determines if the new service request or service flow corresponds to a non-default load balancing (block 1116). As described above, a service owner may desire a non-default load balancing for a service request or a service flow. Accordingly, the service request and/or service flow, and/or the service corresponding to the service request or service flow, may identify the non-default load balancing protocol. When the target endpoint identification circuitry 902 determines that the service request/flow corresponds to a non-default load balancing protocol, the target endpoint identification circuitry 902 passes the request over to the brokering controller 908.
If the example target endpoint identification circuitry 902 determines that a new service request or service flow corresponding to the non-default load balancing was not obtained (block 1116: NO), control continues to block 1126 of FIG. 11B. If the example target endpoint identification circuitry 902 determines that a new service request or service flow corresponding to the non-default load balancing was obtained (block 1116: YES), the example brokering controller 908 accesses the telemetry data, network telemetry data, and service topology mapping (e.g., generated and updated by the topology mapping circuitry 912) (block 1118). At block 1120, the example brokering controller 908 determines endpoint(s) (e.g., devices, edge appliances, nodes, etc.) to service the request and/or flow based on the telemetry, network telemetry, service topology mapping, and non-default load balancing user request(s). For example, the brokering controller 908 attempts to find edge appliances and/or devices that are capable and have capacity to service the request(s) according to the non-default load balancing request from the service owner.
At block 1122, the example target endpoint identification circuitry 902 attempts to validate the determined endpoint(s) (e.g., for load balancing) selected by the example brokering controller 908. For example, the target endpoint identification circuitry 902 determines if the endpoints selected by the brokering controller 908 will result in an error, overuse of endpoints, underuse of endpoints, etc. If the example target endpoint identification circuitry 902 determines that the determined endpoint(s) selected to service the request is not valid (block 1122: NO), control continues to block 1126. If the example target endpoint identification circuitry 902 determines that the determined endpoint(s) selected to service the request is valid (block 1122: YES), the example interface(s) 900 forward the service request(s) to the target service endpoint(s) based on the load balance generated by the brokering controller 908 (block 1124).
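The non-default path of blocks 1118-1124 pairs a selection step (the brokering controller 908) with a validation step (the target endpoint identification circuitry 902). The sketch below is a hypothetical illustration of that select-then-validate flow; the endpoint fields, the location-preference policy, and the validation rule are assumptions, not the described implementation.

```python
# Hypothetical select-then-validate flow (blocks 1118-1124).

def broker_endpoints(endpoints, required_capacity, preferred_location):
    """Non-default policy: prefer a service-owner-specified location (block 1120)."""
    preferred = [e for e in endpoints
                 if e["location"] == preferred_location
                 and e["free_capacity"] >= required_capacity]
    return preferred or [e for e in endpoints
                         if e["free_capacity"] >= required_capacity]

def validate(selection, required_capacity):
    """Reject empty or overcommitted selections (block 1122)."""
    return bool(selection) and all(
        e["free_capacity"] >= required_capacity for e in selection)

endpoints = [
    {"name": "edge-a", "location": "platform-702a", "free_capacity": 30},
    {"name": "edge-b", "location": "platform-702b", "free_capacity": 80},
]
chosen = broker_endpoints(endpoints, required_capacity=50,
                          preferred_location="platform-702a")
if validate(chosen, 50):   # block 1122: YES -> forward (block 1124)
    print("forward to", [e["name"] for e in chosen])
else:                      # block 1122: NO -> default handling (block 1126)
    print("fall back to default handling")
```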
At block 1128, the example target endpoint identification circuitry 902 identifies the service context based on the service request or flow. For example, the service request may correspond to a peer service instance (e.g., an active or established service instance), a service, or a factory. A peer service instance corresponds to a service that has already been linked to one or more endpoint(s). A service is a new service request that has not been linked with a service instance. A factory is a service request that corresponds to a service instance that is available across the platforms. If the example target endpoint identification circuitry 902 determines that the service context corresponds to a peer instance (block 1128: 1), control continues to block 1140 of FIG. 11C. If the example target endpoint identification circuitry 902 determines that the service context corresponds to a service (block 1128: 2), control continues to block 1144 of FIG. 11C. If the example target endpoint identification circuitry 902 determines that the service context corresponds to a factory (block 1128: 3), the example target endpoint identification circuitry 902 contacts a factory at a platform to initiate a service based on the service request (block 1130).
At block 1132, the example target endpoint identification circuitry 902 obtains a response from the factory (e.g., including data identifying the newly generated service instance capable of handling the service request). At block 1134, the example target endpoint identification circuitry 902 determines the target endpoint(s) based on the response. At block 1136, the example topology mapping circuitry 912 updates the service topology mapping based on the response (e.g., identifying the new service instances and corresponding capacity and/or response time). At block 1138, the example interface(s) 900 forward(s) the service request(s) and/or flow to the target service endpoint(s).
If the example target endpoint identification circuitry 902 determines that the service context corresponds to a peer instance (block 1128: 1), the example comparator 904 of the target endpoint identification circuitry 902 compares the telemetry data corresponding to the service instance associated with the service request to the service quality metrics associated with the service request to verify whether the service is executed according to the service quality metrics (block 1140). If the example comparator 904 determines that the service is not being executed according to the service quality metrics (block 1140: NO), control continues to block 1144 to perform auto-scaling and/or load balancing to ensure that the service quality metrics are met. If the example comparator 904 determines that the service is being executed according to the service quality metrics (block 1140: YES), the example interface(s) 900 forward(s) the service request to the corresponding endpoint(s) (e.g., the endpoints that have previously been selected to service the request and/or flow) (block 1142) and control returns to block 1102.
If the example target endpoint identification circuitry 902 determines that the service context corresponds to a service (block 1128: 2), the example target endpoint identification circuitry 902 identifies endpoint(s) capable of serving the request/flow based on the capacity and/or response time data of the service instances across the platforms 702a-702d (block 1144). At block 1146, the example filter 906 of the example target endpoint identification circuitry 902 filters out endpoint(s) from the identified endpoints that do not have capacity to service the request/flow based on telemetry data and/or the service metrics. For example, the filter 906 filters out endpoint(s) that are capable of servicing the request but do not have the capacity to service the request according to the service metrics. The target endpoint identification circuitry 902 may make the determination that the endpoint(s) do not have capacity based on the telemetry data of the endpoint(s) and the service metrics that set the thresholds corresponding to latency, efficiency, overhead, etc. In some examples, if there are no or limited endpoint(s) available after the filtering, the target endpoint identifier may auto-scale by commissioning new endpoint(s) and/or devices at the infrastructure 718 and/or at one or more platforms to add more resources to execute the service request.
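The three-way dispatch of block 1128 and the capacity filtering of blocks 1144-1146 can be summarized in a few lines. The following Python sketch is a hypothetical rendering only; the context labels, record fields, and the auto-scale fallback string stand in for behavior that a real implementation would carry out.

```python
# Hypothetical dispatch on service context (block 1128) with capacity
# filtering (blocks 1144-1146).

def handle_request(context, request, instances):
    if context == "peer_instance":  # block 1128: 1 -> verify metrics (block 1140)
        return "verify service quality metrics, then forward or rebalance"
    if context == "factory":        # block 1128: 3 -> contact factory (block 1130)
        return "contact a factory to initiate a new service instance"
    # context == "service"          # block 1128: 2 -> identify endpoints (block 1144)
    capable = [e for e in instances if request["service"] in e["services"]]
    # Filter out capable endpoints that lack capacity (block 1146).
    with_capacity = [e for e in capable
                     if e["free_capacity"] >= request["load"]]
    return with_capacity or "auto-scale: commission new endpoints"

instances = [{"name": "edge-a", "services": {"svc-42"}, "free_capacity": 10}]
print(handle_request("service", {"service": "svc-42", "load": 25}, instances))
# -> 'auto-scale: commission new endpoints' (no endpoint has capacity)
```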
In some examples, if the telemetry data indicates that one or more endpoint(s) are not being utilized (e.g., within a threshold amount of time), the example target endpoint identifier may facilitate the decommissioning of one or more endpoints. At block 1148, the example target endpoint identification circuitry 902 selects one or more target endpoints to service the request and/or flow. At block 1150, the example target endpoint identification circuitry 902 forwards the service request to the corresponding endpoint(s) and control returns to block 1102.
FIG. 12 is a block diagram of an example implementation of an example edge compute node 1200 that includes a compute engine (also referred to herein as "compute circuitry") 1202, an input/output (I/O) subsystem 1208, data storage 1210, a communication circuitry subsystem 1212, and, optionally, one or more peripheral devices 1214. In other examples, respective compute devices may include other or additional components, such as those typically found in a computer (e.g., a display, peripheral devices, etc.). Additionally, in some examples, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. The example edge compute node 1200 of FIG. 12 may be deployed in one of the edge computing systems illustrated in FIGS. 1-4 and/or 6-10C to implement any edge compute node of FIGS. 1-4 and/or 6-10C. The example edge compute node 1200 may additionally or alternatively include any of the components of the edge appliances 704a-704d of FIG. 7.
The example compute node 1200 may be embodied as any type of engine, device, or collection of devices capable of performing various compute functions. In some examples, the compute node 1200 may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), or other integrated system or device. In the illustrative example, the compute node 1200 includes or is embodied as a processor 1204 and a memory 1206. The example processor 1204 may be embodied as any type of processor capable of performing the functions described herein (e.g., executing an application). For example, the processor 1204 may be embodied as a multi-core processor(s), a microcontroller, a processing unit, a specialized or special purpose processing unit, or other processor or processing/controlling circuit.
In some examples, the processor 1204 may be embodied as, include, or be coupled to an FPGA, an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. Also in some examples, the processor 1204 may be embodied as a specialized x-processing unit (xPU) also known as a data processing unit (DPU), infrastructure processing unit (IPU), or network processing unit (NPU). Such an xPU may be embodied as a standalone circuit or circuit package, integrated within an SOC, or integrated with networking circuitry (e.g., in a SmartNIC), acceleration circuitry, storage devices, or AI hardware (e.g., GPUs or programmed FPGAs).
Such an xPU may be designed to receive programming to process one or more data streams and perform specific tasks and actions for the data streams (such as hosting microservices, performing service management or orchestration, organizing or managing server or data center hardware, managing service meshes, or collecting and distributing telemetry), outside of the CPU and/or GPU or general purpose processing hardware. However, it will be understood that an xPU, a SOC, a CPU, a GPU, and other variations of the processor 1204 may work in coordination with each other to execute many types of operations and instructions within and on behalf of the compute node 1200.
The example memory 1206 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as DRAM or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM).
In an example, the memory device 1206 is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device 1206 may also include a three dimensional crosspoint memory device (e.g., Intel® 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices. The memory device 1206 may refer to the die itself and/or to a packaged memory product. In some examples, 3D crosspoint memory (e.g., Intel® 3D XPoint™ memory) may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some examples, all or a portion of the memory 1206 may be integrated into the processor 1204. The memory 1206 may store various software and data used during operation such as one or more applications, data operated on by the application(s), libraries, and drivers.
The example compute circuitry 1202 is communicatively coupled to other components of the compute node 1200 via the I/O subsystem 1208, which may be embodied as circuitry and/or components to facilitate input/output operations with the compute circuitry 1202 (e.g., with the processor 1204 and/or the main memory 1206) and other components of the compute circuitry 1202. For example, the I/O subsystem 1208 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some examples, the I/O subsystem 1208 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor 1204, the memory 1206, and other components of the compute circuitry 1202, into the compute circuitry 1202.
The one or more illustrative data storage devices 1210 may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
Individual data storage devices 1210 may include a system partition that stores data and firmware code for the data storage device 1210. Individual data storage devices 1210 may also include one or more operating system partitions that store data files and executables for operating systems depending on, for example, the type of compute node 1200.
The example communication circuitry 1212 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over a network between the compute circuitry 1202 and another compute device (e.g., an edge gateway of an implementing edge computing system). The example communication circuitry 1212 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., a cellular networking protocol such as a 3GPP 4G or 5G standard, a wireless local area network protocol such as IEEE 802.11/Wi-Fi®, a wireless wide area network protocol, Ethernet, Bluetooth®, Bluetooth Low Energy, an IoT protocol such as IEEE 802.15.4 or ZigBee®, low-power wide-area network (LPWAN) or low-power wide-area (LPWA) protocols, etc.) to effect such communication.
The illustrative communication circuitry 1212 includes a network interface controller (NIC) 1220, which may also be referred to as a host fabric interface (HFI). The example NIC 1220 may be embodied as one or more add-in-boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the compute node 1200 to connect with another compute device (e.g., an edge gateway node). In some examples, the NIC 1220 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some examples, the NIC 1220 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1220. In such examples, the local processor of the NIC 1220 may be capable of performing one or more of the functions of the compute circuitry 1202 described herein. Additionally, or alternatively, in such examples, the local memory of the NIC 1220 may be integrated into one or more components of the client compute node at the board level, socket level, chip level, and/or other levels.
Additionally, in some examples, a respective compute node 1200 may include one or more peripheral devices 1214. Such peripheral devices 1214 may include any type of peripheral device found in a compute device or server such as audio input devices, a display, other input/output devices, interface devices, and/or other peripheral devices, depending on the particular type of the compute node 1200. In further examples, the compute node 1200 may be embodied by a respective edge compute node (whether a client, gateway, or aggregation node) in an edge computing system or like forms of appliances, computers, subsystems, circuitry, or other components.
FIG. 13 illustrates a block diagram of an example computing device 1350 structured to execute the instructions of FIG. 10 to implement the techniques (e.g., operations, processes, methods, and methodologies) described herein, such as one of the IPUs 710a-710d of FIGS. 7 and/or 8. This computing device 1350 provides a closer view of the respective components of node 1300 when implemented as or as part of a computing device (e.g., as a mobile device, a base station, server, gateway, etc.).
The computing device 1350 may include any combinations of the hardware or logical components referenced herein, and it may include or couple with any device usable with an edge communication network or a combination of such networks. The components may be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules, instruction sets, programmable logic or algorithms, hardware, hardware accelerators, software, firmware, or a combination thereof adapted in the computing device 1350, or as components otherwise incorporated within a chassis of a larger system. For example, the computing device 1350 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, an Internet of Things (IoT) device, or any other type of computing device.
The edge computing device 1350 may include processing circuitry in the form of a processor 1352, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, an xPU/DPU/IPU/NPU, special purpose processing unit, specialized processing unit, or other known processing elements. The processor 1352 may be a part of a system on a chip (SoC) in which the processor 1352 and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel Corporation, Santa Clara, California. As an example, the processor 1352 may include an Intel® Architecture Core™ based CPU processor, such as a Quark™, an Atom™, an i3, an i5, an i7, an i9, or an MCU-class processor, or another such processor available from Intel®. However, any number of other processors may be used, such as processors available from Advanced Micro Devices, Inc. (AMD®) of Sunnyvale, California, a MIPS®-based design from MIPS Technologies, Inc. of Sunnyvale, California, an ARM®-based design licensed from ARM Holdings, Ltd., or a customer thereof, or their licensees or adopters. The processors may include units such as an A5-A13 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc. The processor 1352 and accompanying circuitry may be provided in a single socket form factor, multiple socket form factor, or a variety of other formats, including in limited hardware configurations or configurations that include fewer than all elements shown in FIG. 13. In this example, the processor implements the example IPU sidecar circuitry 714, including the example interface(s) 800, the example telemetry processing controller 802, and/or the example intercept circuitry 804 of FIG. 8.
The processor 1352 may communicate with a system memory 1354 over an interconnect 1356 (e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory 1354 may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4).
In particular examples, a memory component may comply with a DRAM standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4. Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces. In various implementations, the individual memory devices may be of any number of different package types such as single die package (SDP), dual die package (DDP), or quad die package (Q17P). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs.
To provide for persistent storage of information such as data, applications, operating systems and so forth, a storage 1358 may also couple to the processor 1352 via the interconnect 1356. In an example, the storage 1358 may be implemented via a solid-state disk drive (SSDD). Other devices that may be used for the storage 1358 include flash memory cards, such as Secure Digital (SD) cards, microSD cards, eXtreme Digital (XD) picture cards, and the like, and Universal Serial Bus (USB) flash drives. In an example, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.
In low power implementations, the storage 1358 may be on-die memory or registers associated with the processor 1352. However, in some examples, the storage 1358 may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage 1358 in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others.
The components may communicate over the interconnect 1356. The interconnect 1356 may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect 1356 may be a proprietary bus, for example, used in an SoC based system.
Other bus systems may be included, such as an Inter-Integrated Circuit (I2C) interface, a Serial Peripheral Interface (SPI) interface, point to point interfaces, and a power bus, among others.
The interconnect 1356 may couple the processor 1352 to a transceiver 1366, for communications with the connected edge devices 1362. The transceiver 1366 may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others. Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the connected edge devices 1362. For example, a wireless local area network (WLAN) unit may be used to implement Wi-Fi® communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a wireless wide area network (WWAN) unit.
The wireless network transceiver 1366 (or multiple transceivers) may communicate using multiple standards or radios for communications at a different range. For example, the computing device 1350 may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on Bluetooth Low Energy (BLE), or another low power radio, to save power. More distant connected edge devices 1362, e.g., within about 50 meters, may be reached over ZigBee® or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee®.
A wireless network transceiver 1366 (e.g., a radio transceiver) may be included to communicate with devices or services in the edge cloud 1395 via local or wide area network protocols. The wireless network transceiver 1366 may be a low-power wide-area (LPWA) transceiver that follows the IEEE 802.15.4 or IEEE 802.15.4g standards, among others. The computing device 1350 may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification may be used.
Any number of other radio communications and protocols may be used in addition to the systems mentioned for the wireless network transceiver 1366, as described herein. For example, the transceiver 1366 may include a cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high-speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications. The transceiver 1366 may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, such as Long Term Evolution (LTE) and 5th Generation (5G) communication systems, discussed in further detail at the end of the present disclosure.
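One way to read the range discussion above is as a radio-selection policy keyed on approximate distance. The sketch below is an illustrative assumption only: the thresholds mirror the approximate ranges described above, and the function is not a transceiver API.

```python
# Hypothetical radio selection by approximate distance, per the ranges
# described above (BLE within ~10 m, ZigBee within ~50 m, LPWA beyond).

def select_radio(distance_m):
    if distance_m <= 10:
        return "BLE"      # low power, close devices
    if distance_m <= 50:
        return "ZigBee"   # intermediate power radio
    return "LPWA"         # long range, low bandwidth (e.g., LoRaWAN)

for d in (5, 30, 2000):
    print(d, "m ->", select_radio(d))
```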
A network interface controller (NIC) 1368 may be included to provide a wired communication to nodes of the edge cloud 1395 or to other devices, such as the connected edge devices 1362 (e.g., operating in a mesh). The wired communication may provide an Ethernet connection or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others. An additional NIC 1368 may be included to enable connecting to a second network, for example, a first NIC 1368 providing communications to the cloud over Ethernet, and a second NIC 1368 providing communications to other devices over another type of network.
Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components 1364, 1366, 1368, or 1370. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, causing transmission, etc.) may be embodied by such communications circuitry.
The computing device 1350 may include or be coupled to acceleration circuitry 1364, which may be embodied by one or more artificial intelligence (AI) accelerators, a neural compute stick, neuromorphic hardware, an FPGA, an arrangement of GPUs, an arrangement of xPUs/DPUs/IPUs/NPUs, one or more SoCs, one or more CPUs, one or more digital signal processors, dedicated ASICs, or other forms of specialized processors or circuitry designed to accomplish one or more specialized tasks. These tasks may include AI processing (including machine learning, training, inferencing, and classification operations), visual data processing, network data processing, object detection, rule analysis, or the like. These tasks also may include the specific edge computing tasks for service management and service operations discussed elsewhere in this document.
The interconnect 1356 may couple the processor 1352 to a sensor hub or external interface 1370 that is used to connect additional devices or subsystems. The devices may include sensors 1372, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global navigation system (e.g., GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The hub or interface 1370 further may be used to connect the computing device 1350 to actuators 1374, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
In some optional examples, various input/output (I/O) devices may be present within, or connected to, the computing device 1350. For example, a display or other output device 1384 may be included to show information, such as sensor readings or actuator position. An input device 1386, such as a touch screen or keypad, may be included to accept input. An output device 1384 may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., light-emitting diodes (LEDs)) and multi-character visual outputs, or more complex outputs such as display screens (e.g., liquid crystal display (LCD) screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the computing device 1350.
A display or console hardware, in the context of the present system, may be used to provide output and receive input of an edge computing system; to manage components or services of an edge computing system; to identify a state of an edge computing component or service; or to conduct any other number of management or administration functions or service use cases.
A battery 1376 may power the computing device 1350, although, in examples in which the computing device 1350 is mounted in a fixed location, it may have a power supply coupled to an electrical grid, or the battery may be used as a backup or for temporary capabilities. The battery 1376 may be a lithium ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like.
A battery monitor/charger 1378 may be included in the computing device 1350 to track the state of charge (SoCh) of the battery 1376, if included. The battery monitor/charger 1378 may be used to monitor other parameters of the battery 1376 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 1376. The battery monitor/charger 1378 may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, TX. The battery monitor/charger 1378 may communicate the information on the battery 1376 to the processor 1352 over the interconnect 1356. The battery monitor/charger 1378 may also include an analog-to-digital converter (ADC) that enables the processor 1352 to directly monitor the voltage of the battery 1376 or the current flow from the battery 1376. The battery parameters may be used to determine actions that the computing device 1350 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like.
A power block 1380, or other power supply coupled to a grid, may be coupled with the battery monitor/charger 1378 to charge the battery 1376. In some examples, the power block 1380 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the computing device 1350. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, California, among others, may be included in the battery monitor/charger 1378. The specific charging circuits may be selected based on the size of the battery 1376, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others.
The storage 1358 may include instructions 1382 in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions 1382 are shown as code blocks included in the memory 1354 and the storage 1358, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an application specific integrated circuit (ASIC).
In an example, the instructions 1382 provided via the memory 1354, the storage 1358, or the processor 1352 may be embodied as a non-transitory, machine-readable medium 1360 including code to direct the processor 1352 to perform electronic operations in the computing device 1350.
The processor 1352 may access the non-transitory, machine-readable medium 1360 over the interconnect 1356. For example, the non-transitory, machine-readable medium 1360 may be embodied by devices described for the storage 1358 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices. The non-transitory, machine-readable medium 1360 may include instructions to direct the processor 1352 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality depicted above. As used herein, the terms "machine-readable medium" and "computer-readable medium" are interchangeable.
Also in a specific example, the instructions 1382 on the processor 1352 (separately, or in combination with the instructions 1382 of the machine readable medium 1360) may configure execution or operation of a trusted execution environment (TEE) 1390. In an example, the TEE 1390 operates as a protected area accessible to the processor 1352 for secure execution of instructions and secure access to data. Various implementations of the TEE 1390, and an accompanying secure area in the processor 1352 or the memory 1354, may be provided, for example, through use of Intel® Software Guard Extensions (SGX) or ARM® TrustZone® hardware security extensions, Intel® Management Engine (ME), or Intel® Converged Security Manageability Engine (CSME). Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the device 1350 through the TEE 1390 and the processor 1352. As described above in conjunction with FIG. 8, the TEE 1390 may process privacy sensitive telemetry data (e.g., AI inference over telemetry data). In such examples, the TEE 1390 may ensure that the various interests (e.g., conditions) are met as a condition of acceptance and/or disclosure of the telemetry data.
In further examples, a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions. A "machine-readable medium" thus may include, but is not limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)).
A machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format. In an example, information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived.
This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like. The information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions from the information (e.g., processing by the processing circuitry) may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, unencrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions.
In an example, the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium. The information, when provided in multiple parts, may be combined, unpacked, and modified to create the instructions. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packages may be encrypted when in transit over a network and decrypted, uncompressed, assembled (e.g., linked) if necessary, and compiled or interpreted (e.g., into a library, stand-alone executable, etc.) at a local machine, and executed by the local machine.
The machine executable instructions 1000 (1382) of FIG. 10 may be stored in the memory 1354, the storage 1358, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.
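As a concrete illustration of deriving instructions from stored information, the sketch below decompresses two packages, combines them into source code, and compiles and executes the result at the local machine. zlib and Python's compile/exec are stand-ins for whatever transport encoding and toolchain a real deployment would use; the packages and their contents are invented for the example.

```python
# Hypothetical derivation of instructions: decompress, combine, compile, run.
import zlib

packages = [zlib.compress(b"def f(x):\n    return x + 1\n"),
            zlib.compress(b"result = f(41)\n")]

# Combine, unpack, and modify the parts to create the instructions.
source = b"".join(zlib.decompress(p) for p in packages).decode()

code = compile(source, "<derived>", "exec")  # compilation/interpretation step
namespace = {}
exec(code, namespace)
print(namespace["result"])  # 42
```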
FIG. 14 illustrates a block diagram of an example computing device 1450 structured to execute the instructions of FIGS. 11A-11C to implement the techniques (e.g., operations, processes, methods, and methodologies) described herein, such as the infrastructure 718 of FIGS. 7 and/or 9. This computing device 1450 provides a closer view of the respective components of node 1400 when implemented as or as part of a computing device (e.g., as a mobile device, a base station, server, gateway, etc.). The computing device 1450 may include any combinations of the hardware or logical components referenced herein, and it may include or couple with any device usable with an edge communication network or a combination of such networks. The components may be implemented as integrated circuits (ICs), portions thereof, discrete electronic devices, or other modules, instruction sets, programmable logic or algorithms, hardware, hardware accelerators, software, firmware, or a combination thereof adapted in the computing device 1450, or as components otherwise incorporated within a chassis of a larger system.
For example, the computing device 1450 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, an Internet of Things (IoT) device, or any other type of computing device.
The edge computing device 1450 may include processing circuitry in the form of a processor 1452, which may be a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, an xPU/DPU/IPU/NPU, special purpose processing unit, specialized processing unit, or other known processing elements. The processor 1452 may be a part of a system on a chip (SoC) in which the processor 1452 and other components are formed into a single integrated circuit, or a single package, such as the Edison™ or Galileo™ SoC boards from Intel Corporation, Santa Clara, California. As an example, the processor 1452 may include an Intel® Architecture Core™ based CPU processor, such as a Quark™, an Atom™, an i3, an i5, an i7, an i9, or an MCU-class processor, or another such processor available from Intel®. However, any number of other processors may be used, such as processors available from Advanced Micro Devices, Inc. (AMD®) of Sunnyvale, California, a MIPS®-based design from MIPS Technologies, Inc. of Sunnyvale, California, an ARM®-based design licensed from ARM Holdings, Ltd., or a customer thereof, or their licensees or adopters. The processors may include units such as an A5-A13 processor from Apple® Inc., a Snapdragon™ processor from Qualcomm® Technologies, Inc., or an OMAP™ processor from Texas Instruments, Inc. The processor 1452 and accompanying circuitry may be provided in a single socket form factor, multiple socket form factor, or a variety of other formats, including in limited hardware configurations or configurations that include fewer than all elements shown in FIG. 14. In this example, the processor implements the example infrastructure sidecar circuitry 720, including the example interface(s) 900, the example target endpoint identification circuitry 902, the example comparator 904, the example filter 906, the example brokering controller 908, the example telemetry monitor circuitry 910, and the example topology mapping circuitry 912 of FIG. 9.
The processor 1452 may communicate with a system memory 1454 over an interconnect 1456 (e.g., a bus). Any number of memory devices may be used to provide for a given amount of system memory. As examples, the memory 1454 may be random access memory (RAM) in accordance with a Joint Electron Devices Engineering Council (JEDEC) design such as the DDR or mobile DDR standards (e.g., LPDDR, LPDDR2, LPDDR3, or LPDDR4). In particular examples, a memory component may comply with a DRAM standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4. Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
In various implementations, the individual memory devices may be of any number of different package types such as single die package (SDP), dual die package (DDP), or quad die package (Q17P). These devices, in some examples, may be directly soldered onto a motherboard to provide a lower profile solution, while in other examples the devices are configured as one or more memory modules that in turn couple to the motherboard by a given connector. Any number of other memory implementations may be used, such as other types of memory modules, e.g., dual inline memory modules (DIMMs) of different varieties including but not limited to microDIMMs or MiniDIMMs.
To provide for persistent storage of information such as data, applications, operating systems and so forth, a storage 1458 may also couple to the processor 1452 via the interconnect 1456. In an example, the storage 1458 may be implemented via a solid-state disk drive (SSDD). Other devices that may be used for the storage 1458 include flash memory cards, such as Secure Digital (SD) cards, microSD cards, eXtreme Digital (XD) picture cards, and the like, and Universal Serial Bus (USB) flash drives. In an example, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.
In low power implementations, the storage 1458 may be on-die memory or registers associated with the processor 1452. However, in some examples, the storage 1458 may be implemented using a micro hard disk drive (HDD). Further, any number of new technologies may be used for the storage 1458 in addition to, or instead of, the technologies described, such as resistance change memories, phase change memories, holographic memories, or chemical memories, among others.
The components may communicate over the interconnect 1456. The interconnect 1456 may include any number of technologies, including industry standard architecture (ISA), extended ISA (EISA), peripheral component interconnect (PCI), peripheral component interconnect extended (PCIx), PCI express (PCIe), or any number of other technologies. The interconnect 1456 may be a proprietary bus, for example, used in an SoC based system. Other bus systems may be included, such as an Inter-Integrated Circuit (I2C) interface, a Serial Peripheral Interface (SPI) interface, point to point interfaces, and a power bus, among others.
The interconnect 1456 may couple the processor 1452 to a transceiver 1466, for communications with the connected edge devices 1462. The transceiver 1466 may use any number of frequencies and protocols, such as 2.4 Gigahertz (GHz) transmissions under the IEEE 802.15.4 standard, using the Bluetooth® low energy (BLE) standard, as defined by the Bluetooth® Special Interest Group, or the ZigBee® standard, among others.
Any number of radios, configured for a particular wireless communication protocol, may be used for the connections to the connected edge devices 1462. For example, a wireless local area network (WLAN) unit may be used to implement Wi-Fi® communications in accordance with the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard. In addition, wireless wide area communications, e.g., according to a cellular or other wireless wide area protocol, may occur via a wireless wide area network (WWAN) unit.
The wireless network transceiver 1466 (or multiple transceivers) may communicate using multiple standards or radios for communications at a different range. For example, the computing device 1450 may communicate with close devices, e.g., within about 10 meters, using a local transceiver based on Bluetooth Low Energy (BLE), or another low power radio, to save power. More distant connected edge devices 1462, e.g., within about 50 meters, may be reached over ZigBee® or other intermediate power radios. Both communications techniques may take place over a single radio at different power levels or may take place over separate transceivers, for example, a local transceiver using BLE and a separate mesh transceiver using ZigBee®.
A wireless network transceiver 1466 (e.g., a radio transceiver) may be included to communicate with devices or services in the edge cloud 1495 via local or wide area network protocols. The wireless network transceiver 1466 may be a low-power wide-area (LPWA) transceiver that follows the IEEE 802.15.4 or IEEE 802.15.4g standards, among others. The computing device 1450 may communicate over a wide area using LoRaWAN™ (Long Range Wide Area Network) developed by Semtech and the LoRa Alliance. The techniques described herein are not limited to these technologies but may be used with any number of other cloud transceivers that implement long range, low bandwidth communications, such as Sigfox, and other technologies. Further, other communications techniques, such as time-slotted channel hopping, described in the IEEE 802.15.4e specification may be used.
Any number of other radio communications and protocols may be used in addition to the systems mentioned for the wireless network transceiver 1466, as described herein. For example, the transceiver 1466 may include a cellular transceiver that uses spread spectrum (SPA/SAS) communications for implementing high-speed communications. Further, any number of other protocols may be used, such as Wi-Fi® networks for medium speed communications and provision of network communications. The transceiver 1466 may include radios that are compatible with any number of 3GPP (Third Generation Partnership Project) specifications, such as Long Term Evolution (LTE) and 5th Generation (5G) communication systems, discussed in further detail at the end of the present disclosure.
A network interface controller (NIC) 1468 may be included to provide a wired communication to nodes of the edge cloud 1495 or to other devices, such as the connected edge devices 1462 (e.g., operating in a mesh). The wired communication may provide an Ethernet connection or may be based on other types of networks, such as Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others.
An additional NIC 1468 may be included to enable connecting to a second network, for example, a first NIC 1468 providing communications to the cloud over Ethernet, and a second NIC 1468 providing communications to other devices over another type of network.
Given the variety of types of applicable communications from the device to another component or network, applicable communications circuitry used by the device may include or be embodied by any one or more of components 1464, 1466, 1468, or 1470. Accordingly, in various examples, applicable means for communicating (e.g., receiving, transmitting, causing transmission of, etc.) may be embodied by such communications circuitry.
The computing device 1450 may include or be coupled to acceleration circuitry 1464, which may be embodied by one or more artificial intelligence (AI) accelerators, a neural compute stick, neuromorphic hardware, an FPGA, an arrangement of GPUs, an arrangement of xPUs/DPUs/IPUs/NPUs, one or more SoCs, one or more CPUs, one or more digital signal processors, dedicated ASICs, or other forms of specialized processors or circuitry designed to accomplish one or more specialized tasks. These tasks may include AI processing (including machine learning, training, inferencing, and classification operations), visual data processing, network data processing, object detection, rule analysis, or the like. These tasks also may include the specific edge computing tasks for service management and service operations discussed elsewhere in this document.
The interconnect 1456 may couple the processor 1452 to a sensor hub or external interface 1470 that is used to connect additional devices or subsystems. The devices may include sensors 1472, such as accelerometers, level sensors, flow sensors, optical light sensors, camera sensors, temperature sensors, global navigation system (e.g., GPS) sensors, pressure sensors, barometric pressure sensors, and the like. The hub or interface 1470 further may be used to connect the computing device 1450 to actuators 1474, such as power switches, valve actuators, an audible sound generator, a visual warning device, and the like.
In some optional examples, various input/output (I/O) devices may be present within, or connected to, the computing device 1450. For example, a display or other output device 1484 may be included to show information, such as sensor readings or actuator position. An input device 1486, such as a touch screen or keypad, may be included to accept input. An output device 1484 may include any number of forms of audio or visual display, including simple visual outputs such as binary status indicators (e.g., light-emitting diodes (LEDs)) and multi-character visual outputs, or more complex outputs such as display screens (e.g., liquid crystal display (LCD) screens), with the output of characters, graphics, multimedia objects, and the like being generated or produced from the operation of the computing device 1450.
A display or console hardware, in the context of the present system, may be used to provide output and receive input of an edge computing system; to manage components or services of an edge computing system; to identify a state of an edge computing component or service; or to conduct any other number of management or administration functions or service use cases.
A battery 1476 may power the computing device 1450, although, in examples in which the computing device 1450 is mounted in a fixed location, it may have a power supply coupled to an electrical grid, or the battery may be used as a backup or for temporary capabilities. The battery 1476 may be a lithium ion battery, or a metal-air battery, such as a zinc-air battery, an aluminum-air battery, a lithium-air battery, and the like.
A battery monitor/charger 1478 may be included in the computing device 1450 to track the state of charge (SoCh) of the battery 1476, if included. The battery monitor/charger 1478 may be used to monitor other parameters of the battery 1476 to provide failure predictions, such as the state of health (SoH) and the state of function (SoF) of the battery 1476. The battery monitor/charger 1478 may include a battery monitoring integrated circuit, such as an LTC4020 or an LTC2990 from Linear Technologies, an ADT7488A from ON Semiconductor of Phoenix, Arizona, or an IC from the UCD90xxx family from Texas Instruments of Dallas, TX. The battery monitor/charger 1478 may communicate the information on the battery 1476 to the processor 1452 over the interconnect 1456. The battery monitor/charger 1478 may also include an analog-to-digital converter (ADC) that enables the processor 1452 to directly monitor the voltage of the battery 1476 or the current flow from the battery 1476. The battery parameters may be used to determine actions that the computing device 1450 may perform, such as transmission frequency, mesh network operation, sensing frequency, and the like.
A power block 1480, or other power supply coupled to a grid, may be coupled with the battery monitor/charger 1478 to charge the battery 1476. In some examples, the power block 1480 may be replaced with a wireless power receiver to obtain the power wirelessly, for example, through a loop antenna in the computing device 1450. A wireless battery charging circuit, such as an LTC4020 chip from Linear Technologies of Milpitas, California, among others, may be included in the battery monitor/charger 1478. The specific charging circuits may be selected based on the size of the battery 1476, and thus, the current required. The charging may be performed using the Airfuel standard promulgated by the Airfuel Alliance, the Qi wireless charging standard promulgated by the Wireless Power Consortium, or the Rezence charging standard promulgated by the Alliance for Wireless Power, among others.
The storage 1458 may include instructions 1482 in the form of software, firmware, or hardware commands to implement the techniques described herein. Although such instructions 1482 are shown as code blocks included in the memory 1454 and the storage 1458, it may be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an application specific integrated circuit (ASIC).
In an example, the instructions 1482 provided via the memory 1454, the storage 1458, or the processor 1452 may be embodied as a non-transitory, machine-readable medium 1460 including code to direct the processor 1452 to perform electronic operations in the computing device 1450.
The processor 1452 may access the non-transitory, machine-readable medium 1460 over the interconnect 1456. For example, the non-transitory, machine-readable medium 1460 may be embodied by devices described for the storage 1458 or may include specific storage units such as optical disks, flash drives, or any number of other hardware devices. The non-transitory, machine-readable medium 1460 may include instructions to direct the processor 1452 to perform a specific sequence or flow of actions, for example, as described with respect to the flowchart(s) and block diagram(s) of operations and functionality depicted above. As used herein, the terms "machine-readable medium" and "computer-readable medium" are interchangeable.

Also in a specific example, the instructions 1482 on the processor 1452 (separately, or in combination with the instructions 1482 of the machine readable medium 1460) may configure execution or operation of a trusted execution environment (TEE) 1490. In an example, the TEE 1490 operates as a protected area accessible to the processor 1452 for secure execution of instructions and secure access to data. Various implementations of the TEE 1490, and an accompanying secure area in the processor 1452 or the memory 1454, may be provided, for example, through use of Intel® Software Guard Extensions (SGX) or ARM® TrustZone® hardware security extensions, Intel® Management Engine (ME), or Intel® Converged Security Manageability Engine (CSME). Other aspects of security hardening, hardware roots-of-trust, and trusted or protected operations may be implemented in the device 1450 through the TEE 1490 and the processor 1452.

In further examples, a machine-readable medium also includes any tangible medium that is capable of storing, encoding, or carrying instructions for execution by a machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions. A "machine-readable medium" thus may include, but is not limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions embodied by a machine-readable medium may further be transmitted or received over a communications network using a transmission medium via a network interface device utilizing any one of a number of transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)).

A machine-readable medium may be provided by a storage device or other apparatus which is capable of hosting data in a non-transitory format. In an example, information stored or otherwise provided on a machine-readable medium may be representative of instructions, such as instructions themselves or a format from which the instructions may be derived. This format from which the instructions may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., split into multiple packages), or the like.
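For illustration, one way instructions might be derived from such a format (source code split into multiple compressed packages that are combined, decompressed, and compiled locally) is sketched below in Python; the package contents are invented for the example:

```python
# Illustrative sketch: derive executable instructions from source code that
# arrives split into multiple zlib-compressed packages.
import zlib

SOURCE = b"def answer():\n    return 42\n"
packages = [zlib.compress(SOURCE[:12]), zlib.compress(SOURCE[12:])]

# Combine the parts, decompress, and compile into executable form.
recovered = b"".join(zlib.decompress(part) for part in packages)
namespace: dict = {}
exec(compile(recovered, "<derived>", "exec"), namespace)
print(namespace["answer"]())  # -> 42
```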
The information representative of the instructions in the machine-readable medium may be processed by processing circuitry into the instructions to implement any of the operations discussed herein. For example, deriving the instructions from the information (e.g., processing by the processing circuitry) may include: compiling (e.g., from source code, object code, etc.), interpreting, loading, organizing (e.g., dynamically or statically linking), encoding, decoding, encrypting, decrypting, packaging, unpackaging, or otherwise manipulating the information into the instructions.

In an example, the derivation of the instructions may include assembly, compilation, or interpretation of the information (e.g., by the processing circuitry) to create the instructions from some intermediate or preprocessed format provided by the machine-readable medium. The information, when provided in multiple parts, may be combined, unpacked, and modified to create the instructions. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packages may be encrypted when in transit over a network and decrypted, uncompressed, assembled (e.g., linked) if necessary, and compiled or interpreted (e.g., into a library, stand-alone executable, etc.) at a local machine, and executed by the local machine.

The machine executable instructions 1100 (1482) of FIGS. 11A-11C may be stored in the memory 1454, the storage 1458, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.

FIG. 15 is a block diagram of an example implementation of the processor circuitry 1352, 1452 of FIGS. 13 and/or 14. In this example, the processor circuitry 1352, 1452 is implemented by a microprocessor 1500. For example, the microprocessor 1500 may implement multi-core hardware circuitry such as a CPU, a DSP, a GPU, an XPU, etc. Although it may include any number of example cores 1502 (e.g., 1 core), the microprocessor 1500 of this example is a multi-core semiconductor device including N cores. The cores 1502 of the microprocessor 1500 may operate independently or may cooperate to execute machine readable instructions. For example, machine code corresponding to a firmware program, an embedded software program, or a software program may be executed by one of the cores 1502 or may be executed by multiple ones of the cores 1502 at the same or different times. In some examples, the machine code corresponding to the firmware program, the embedded software program, or the software program is split into threads and executed in parallel by two or more of the cores 1502. The software program may correspond to a portion or all of the machine readable instructions and/or operations represented by the flowcharts of FIGS. 10-11C.

The cores 1502 may communicate by an example bus 1504. In some examples, the bus 1504 may implement a communication bus to effectuate communication associated with one(s) of the cores 1502. For example, the bus 1504 may implement at least one of an Inter-Integrated Circuit (I2C) bus, a Serial Peripheral Interface (SPI) bus, a PCI bus, or a PCIe bus. Additionally or alternatively, the bus 1504 may implement any other type of computing or electrical bus.
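As an illustration of machine readable instructions being split into portions that two or more cores (such as the cores 1502) execute in parallel, the following Python sketch divides one summation across a pool of worker processes; the workload and the chunking are invented for the example:

```python
# Illustrative sketch: split one workload across parallel workers.
# Process-based workers are used so the parallelism is visible even
# under CPython's global interpreter lock.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds: tuple) -> int:
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor() as pool:       # one worker per core by default
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(1_000_000)))     # -> True
```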
The cores 1502 may obtain data, instructions, and/or signals from one or more external devices by example interface circuitry 1506. The cores 1502 may output data, instructions, and/or signals to the one or more external devices by the interface circuitry 1506. Although the cores 1502 of this example include example local memory 1520 (e.g., Level 1 (L1) cache that may be split into an L1 data cache and an L1 instruction cache), the microprocessor 1500 also includes example shared memory 1510 that may be shared by the cores (e.g., Level 2 (L2) cache) for high-speed access to data and/or instructions. Data and/or instructions may be transferred (e.g., shared) by writing to and/or reading from the shared memory 1510. The local memory 1520 of each of the cores 1502 and the shared memory 1510 may be part of a hierarchy of storage devices including multiple levels of cache memory and the main memory (e.g., the main memory 1354, 1454 of FIGS. 13 and/or 14). Typically, higher levels of memory in the hierarchy exhibit lower access time and have smaller storage capacity than lower levels of memory. Changes in the various levels of the cache hierarchy are managed (e.g., coordinated) by a cache coherency policy.

Each core 1502 may be referred to as a CPU, DSP, GPU, etc., or any other type of hardware circuitry. Each core 1502 includes control unit circuitry 1514, arithmetic and logic (AL) circuitry (sometimes referred to as an ALU) 1516, a plurality of registers 1518, the L1 cache 1520, and an example bus 1522. Other structures may be present. For example, each core 1502 may include vector unit circuitry, single instruction multiple data (SIMD) unit circuitry, load/store unit (LSU) circuitry, branch/jump unit circuitry, floating-point unit (FPU) circuitry, etc. The control unit circuitry 1514 includes semiconductor-based circuits structured to control (e.g., coordinate) data movement within the corresponding core 1502. The AL circuitry 1516 includes semiconductor-based circuits structured to perform one or more mathematical and/or logic operations on the data within the corresponding core 1502. The AL circuitry 1516 of some examples performs integer based operations. In other examples, the AL circuitry 1516 also performs floating point operations. In yet other examples, the AL circuitry 1516 may include first AL circuitry that performs integer based operations and second AL circuitry that performs floating point operations. In some examples, the AL circuitry 1516 may be referred to as an Arithmetic Logic Unit (ALU). The registers 1518 are semiconductor-based structures to store data and/or instructions, such as results of one or more of the operations performed by the AL circuitry 1516 of the corresponding core 1502. For example, the registers 1518 may include vector register(s), SIMD register(s), general purpose register(s), flag register(s), segment register(s), machine specific register(s), instruction pointer register(s), control register(s), debug register(s), memory management register(s), machine check register(s), etc. The registers 1518 may be arranged in a bank as shown in FIG. 15. Alternatively, the registers 1518 may be organized in any other arrangement, format, or structure, including distributed throughout the core 1502 to shorten access time. The bus 1522 may implement at least one of an I2C bus, a SPI bus, a PCI bus, or a PCIe bus.
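The cache hierarchy described above (a small, fast L1 consulted before a larger shared L2, with main memory behind both) can be illustrated with a toy Python model; the capacities and the simple LRU eviction policy are assumptions of the sketch, not details of the microprocessor 1500:

```python
# Illustrative sketch: two-level cache lookup order with LRU eviction.
from collections import OrderedDict

class CacheLevel:
    def __init__(self, capacity: int):
        self.capacity, self.lines = capacity, OrderedDict()

    def lookup(self, addr: int):
        if addr in self.lines:
            self.lines.move_to_end(addr)     # refresh LRU position
            return self.lines[addr]
        return None

    def fill(self, addr: int, data: int) -> None:
        self.lines[addr] = data
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)   # evict least recently used

l1, l2 = CacheLevel(4), CacheLevel(16)
main_memory = {addr: addr * 2 for addr in range(64)}

def load(addr: int) -> int:
    data = l1.lookup(addr)
    if data is None:                         # L1 miss: try L2, then memory
        data = l2.lookup(addr)
        if data is None:
            data = main_memory[addr]
            l2.fill(addr, data)
        l1.fill(addr, data)
    return data

print(load(7), load(7))  # second access hits in the fast L1 level
```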
Each core 1502 and/or, more generally, the microprocessor 1500 may include additional and/or alternate structures to those shown and described above. For example, one or more clock circuits, one or more power supplies, one or more power gates, one or more cache home agents (CHAs), one or more converged/common mesh stops (CMSs), one or more shifters (e.g., barrel shifter(s)), and/or other circuitry may be present. The microprocessor 1500 is a semiconductor device fabricated to include many transistors interconnected to implement the structures described above in one or more integrated circuits (ICs) contained in one or more packages. The processor circuitry may include and/or cooperate with one or more accelerators. In some examples, accelerators are implemented by logic circuitry to perform certain tasks more quickly and/or efficiently than can be done by a general purpose processor. Examples of accelerators include ASICs and FPGAs such as those discussed herein. A GPU or other programmable device can also be an accelerator. Accelerators may be on-board the processor circuitry, in the same chip package as the processor circuitry, and/or in one or more separate packages from the processor circuitry.

FIG. 16 is a block diagram of another example implementation of the processor circuitry 1352, 1452 of FIGS. 13 and/or 14. In this example, the processor circuitry 1352, 1452 is implemented by FPGA circuitry 1600. The FPGA circuitry 1600 can be used, for example, to perform operations that could otherwise be performed by the example microprocessor 1500 of FIG. 15 executing corresponding machine readable instructions. However, once configured, the FPGA circuitry 1600 instantiates the machine readable instructions in hardware and, thus, can often execute the operations faster than they could be performed by a general purpose microprocessor executing the corresponding software.

More specifically, in contrast to the microprocessor 1500 of FIG. 15 described above (which is a general purpose device that may be programmed to execute some or all of the machine readable instructions represented by the flowcharts of FIGS. 10-11C but whose interconnections and logic circuitry are fixed once fabricated), the FPGA circuitry 1600 of the example of FIG. 16 includes interconnections and logic circuitry that may be configured and/or interconnected in different ways after fabrication to instantiate, for example, some or all of the machine readable instructions represented by the flowcharts of FIGS. 10-11C. In particular, the FPGA circuitry 1600 may be thought of as an array of logic gates, interconnections, and switches. The switches can be programmed to change how the logic gates are interconnected by the interconnections, effectively forming one or more dedicated logic circuits (unless and until the FPGA circuitry 1600 is reprogrammed). The configured logic circuits enable the logic gates to cooperate in different ways to perform different operations on data received by input circuitry. Those operations may correspond to some or all of the software represented by the flowcharts of FIGS. 10-11C. As such, the FPGA circuitry 1600 may be structured to effectively instantiate some or all of the machine readable instructions of the flowcharts of FIGS. 10-11C as dedicated logic circuits to perform the operations corresponding to those software instructions in a dedicated manner analogous to an ASIC. Therefore, the FPGA circuitry 1600 may perform the operations corresponding to some or all of the machine readable instructions of FIGS. 10-11C faster than a general purpose microprocessor can execute the same.
In the example of FIG. 16, the FPGA circuitry 1600 is structured to be programmed (and/or reprogrammed one or more times) by an end user using a hardware description language (HDL) such as Verilog. The FPGA circuitry 1600 of FIG. 16 includes example input/output (I/O) circuitry 1602 to obtain and/or output data to/from example configuration circuitry 1604 and/or external hardware (e.g., external hardware circuitry) 1606. For example, the configuration circuitry 1604 may implement interface circuitry that may obtain machine readable instructions to configure the FPGA circuitry 1600, or portion(s) thereof. In some such examples, the configuration circuitry 1604 may obtain the machine readable instructions from a user, a machine (e.g., hardware circuitry (e.g., programmed or dedicated circuitry) that may implement an Artificial Intelligence/Machine Learning (AI/ML) model to generate the instructions), etc. In some examples, the external hardware 1606 may implement the microprocessor 1500 of FIG. 15. The FPGA circuitry 1600 also includes an array of example logic gate circuitry 1608, a plurality of example configurable interconnections 1610, and example storage circuitry 1612. The logic gate circuitry 1608 and the interconnections 1610 are configurable to instantiate one or more operations that may correspond to at least some of the machine readable instructions of FIGS. 10-11C and/or other desired operations. The logic gate circuitry 1608 shown in FIG. 16 is fabricated in groups or blocks. Each block includes semiconductor-based electrical structures that may be configured into logic circuits. In some examples, the electrical structures include logic gates (e.g., AND gates, OR gates, NOR gates, etc.) that provide basic building blocks for logic circuits. Electrically controllable switches (e.g., transistors) are present within each of the logic gate circuitry 1608 to enable configuration of the electrical structures and/or the logic gates to form circuits to perform desired operations. The logic gate circuitry 1608 may include other electrical structures such as look-up tables (LUTs), registers (e.g., flip-flops or latches), multiplexers, etc.

The interconnections 1610 of the illustrated example are conductive pathways, traces, vias, or the like that may include electrically controllable switches (e.g., transistors) whose state can be changed by programming (e.g., using an HDL) to activate or deactivate one or more connections between one or more of the logic gate circuitry 1608 to program desired logic circuits.

The storage circuitry 1612 of the illustrated example is structured to store result(s) of the one or more of the operations performed by corresponding logic gates. The storage circuitry 1612 may be implemented by registers or the like. In the illustrated example, the storage circuitry 1612 is distributed amongst the logic gate circuitry 1608 to facilitate access and increase execution speed.

The example FPGA circuitry 1600 of FIG. 16 also includes example Dedicated Operations Circuitry 1614. In this example, the Dedicated Operations Circuitry 1614 includes special purpose circuitry 1616 that may be invoked to implement commonly used functions to avoid the need to program those functions in the field. Examples of such special purpose circuitry 1616 include memory (e.g., DRAM) controller circuitry, PCIe controller circuitry, clock circuitry, transceiver circuitry, memory, and multiplier-accumulator circuitry. Other types of special purpose circuitry may be present.
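As an illustration of the look-up tables and programmable interconnections described above, the following Python sketch models a LUT as a stored truth table and a routing function as the "interconnect"; reconfiguring either changes the instantiated logic. The two-input LUTs and the routing are invented for the example:

```python
# Illustrative sketch: an FPGA-style LUT as a truth table, plus a routing
# function standing in for the configurable interconnections.
from typing import Callable, Sequence

def make_lut(truth_table: Sequence[int]) -> Callable[..., int]:
    def lut(*inputs: int) -> int:
        index = 0
        for bit in inputs:            # pack input bits into a table index
            index = (index << 1) | bit
        return truth_table[index]
    return lut

and2 = make_lut([0, 0, 0, 1])         # configured as a 2-input AND
xor2 = make_lut([0, 1, 1, 0])         # same structure, different config

# "Interconnections": route the AND output into the XOR's first input.
def routed(a: int, b: int, c: int) -> int:
    return xor2(and2(a, b), c)

print(routed(1, 1, 0))  # -> 1
```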
In some examples, the FPGA circuitry 1600 may also include example general purpose programmable circuitry 1618, such as an example CPU 1620 and/or an example DSP 1622. Other general purpose programmable circuitry 1618 may additionally or alternatively be present, such as a GPU, an XPU, etc., that can be programmed to perform other operations.

Although FIGS. 15 and 16 illustrate two example implementations of the processor circuitry 1352, 1452 of FIGS. 13 and/or 14, many other approaches are contemplated. For example, as mentioned above, modern FPGA circuitry may include an on-board CPU, such as one or more of the example CPU 1620 of FIG. 16. Therefore, the processor circuitry 1352, 1452 of FIGS. 13 and/or 14 may additionally be implemented by combining the example microprocessor 1500 of FIG. 15 and the example FPGA circuitry 1600 of FIG. 16. In some such hybrid examples, a first portion of the machine readable instructions represented by the flowcharts of FIGS. 10-11C may be executed by one or more of the cores 1502 of FIG. 15 and a second portion of the machine readable instructions represented by the flowcharts of FIGS. 10-11C may be executed by the FPGA circuitry 1600 of FIG. 16.

In some examples, the processor circuitry 1352, 1452 of FIGS. 13 and/or 14 may be in one or more packages. For example, the microprocessor 1500 of FIG. 15 and/or the FPGA circuitry 1600 of FIG. 16 may be in one or more packages. In some examples, an XPU may be implemented by the processor circuitry 1352, 1452 of FIGS. 13 and/or 14, which may be in one or more packages. For example, the XPU may include a CPU in one package, a DSP in another package, a GPU in yet another package, and an FPGA in still yet another package.

A block diagram illustrating an example software distribution platform 1705 to distribute software such as the example machine readable instructions 1382, 1482 of FIGS. 13 and/or 14 to hardware devices owned and/or operated by third parties is illustrated in FIG. 17. The example software distribution platform 1705 may be implemented by any computer server, data facility, cloud service, etc., capable of storing and transmitting software to other computing devices. The third parties may be customers of the entity owning and/or operating the software distribution platform 1705. For example, the entity that owns and/or operates the software distribution platform 1705 may be a developer, a seller, and/or a licensor of software such as the example machine readable instructions 1382, 1482 of FIGS. 13 and/or 14. The third parties may be consumers, users, retailers, OEMs, etc., who purchase and/or license the software for use and/or re-sale and/or sub-licensing. In the illustrated example, the software distribution platform 1705 includes one or more servers and one or more storage devices. The storage devices store the machine readable instructions 1382, 1482, which may correspond to the example machine readable instructions 1382, 1482 of FIGS. 13 and/or 14 as described above. The one or more servers of the example software distribution platform 1705 are in communication with a network 1710, which may correspond to any one or more of the Internet and/or any of the example networks described above. In some examples, the one or more servers are responsive to requests to transmit the software to a requesting party as part of a commercial transaction. Payment for the delivery, sale, and/or license of the software may be handled by the one or more servers of the software distribution platform and/or by a third party payment entity.
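A minimal Python sketch of the request/response role of such a platform is shown below: an HTTP server that returns a stored instruction package on request. The path, port, and payload are hypothetical, and payment handling is omitted:

```python
# Illustrative sketch: a stand-in for the software distribution platform 1705
# serving a stored instruction package over HTTP.
from http.server import BaseHTTPRequestHandler, HTTPServer

PACKAGES = {"/instructions/1482": b"print('machine readable instructions')"}

class DistributionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = PACKAGES.get(self.path)
        if payload is None:
            self.send_error(404, "unknown package")
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DistributionHandler).serve_forever()
```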
The servers enable purchasers and/or licensees to download the machine readable instructions 1382, 1482 from the software distribution platform 1705. For example, the software, which may correspond to the example machine readable instructions 1382, 1482 of FIGS. 13 and/or 14, may be downloaded to the example processor platform 400, which is to execute the machine readable instructions 1382, 1482 to implement the example IPU sidecar circuitry 714 and/or the infrastructure sidecar circuitry 720 of FIGS. 8 and/or 9. In some examples, one or more servers of the software distribution platform 1705 periodically offer, transmit, and/or force updates to the software (e.g., the example machine readable instructions 1382, 1482 of FIGS. 13 and/or 14) to ensure improvements, patches, updates, etc., are distributed and applied to the software at the end user devices.

Example methods, apparatus, systems, and articles of manufacture to facilitate service proxying are disclosed herein. Further examples and combinations thereof include the following:

Example 1 includes an apparatus to process service requests, the apparatus including interface circuitry to access a service request intercepted by an infrastructure processing unit, the service request corresponding to a first node, instructions in the apparatus, and infrastructure sidecar circuitry to execute the instructions to identify an active service instance corresponding to the service request, compare first telemetry data corresponding to the active service instance to a service quality metric, select a second node to service the service request based on the comparison and further telemetry data, and cause transmission of the service request to the second node.

Example 2 includes the apparatus of example 1, wherein the infrastructure sidecar circuitry is to determine that the active service instance corresponds to the intercepted service request based on a topology mapping of service instances across a plurality of platforms.

Example 3 includes the apparatus of example 1, wherein the infrastructure sidecar circuitry is to update a topology mapping based on second telemetry data from the active service instance.

Example 4 includes the apparatus of example 1, wherein the infrastructure sidecar circuitry is to select the second node by processing a topology mapping of service instances to identify a first group of service instances capable of servicing the service request, and generating a second group of service instances by filtering out service instances in the first group that do not have capacity to service the service request.

Example 5 includes the apparatus of example 4, wherein the infrastructure sidecar circuitry is to determine the first group based on at least one of capacity information or response time information from the service instances.

Example 6 includes the apparatus of example 4, wherein the infrastructure sidecar circuitry is to, when the second group of service instances is empty, initiate a new service instance to service the service request.

Example 7 includes the apparatus of example 1, wherein the second node is at least one of an edge appliance, an edge device, a virtual machine, or an infrastructure device.

Example 8 includes the apparatus of example 1, wherein the infrastructure sidecar circuitry is to, when the service request corresponds to a non-default load balancing protocol, select the second node based on the non-default load balancing protocol.

Example 9 includes the apparatus of example 8, wherein the infrastructure
sidecar circuitry is to validate that the second node selected based on the non-default load balancing protocol will not result in an error.

Example 10 includes the apparatus of example 1, wherein the further telemetry data includes at least one of first telemetry data corresponding to the first node, second telemetry data corresponding to the second node, or third telemetry data corresponding to infrastructure.

Example 11 includes a non-transitory computer readable medium comprising instructions which, when executed, cause sidecar circuitry to at least access a service request intercepted by an infrastructure processing unit, the service request corresponding to a first node, identify an active service instance corresponding to the service request, compare first telemetry data corresponding to the service instance to a service quality metric, select a second node to service the service request based on the comparison and further telemetry data, and cause transmission of the service request to the second node.

Example 12 includes the computer readable medium of example 11, wherein the instructions cause the sidecar circuitry to determine that the service instance corresponds to the intercepted service request based on a topology mapping of service instances across a plurality of platforms.

Example 13 includes the computer readable medium of example 11, wherein the instructions cause the sidecar circuitry to update a topology mapping based on second telemetry data from the service instance.

Example 14 includes the computer readable medium of example 11, wherein the instructions cause the sidecar circuitry to select the second node by processing a topology mapping of service instances to identify a first group of service instances capable of servicing the service request, and generating a second group of service instances by filtering out service instances in the first group that do not have capacity to service the service request.

Example 15 includes the computer readable medium of example 14, wherein the instructions cause the sidecar circuitry to determine the first group based on at least one of capacity information or response time information from the service instances.

Example 16 includes the computer readable medium of example 14, wherein the instructions cause the sidecar circuitry to, when the second group of service instances is empty, initiate a new service instance to service the service request.

Example 17 includes the computer readable medium of example 11, wherein the second node is at least one of an edge appliance, an edge device, a virtual machine, or an infrastructure device.

Example 18 includes the computer readable medium of example 11, wherein the instructions cause the sidecar circuitry to, when the service request corresponds to a non-default load balancing protocol, select the second node based on the non-default load balancing protocol.

Example 19 includes the computer readable medium of example 18, wherein the instructions cause the sidecar circuitry to validate that the second node selected based on the non-default load balancing protocol will not result in an error.

Example 20 includes the computer readable medium of example 11, wherein the further telemetry data includes at least one of first telemetry data corresponding to the first node, second telemetry data corresponding to the second node, or third telemetry data corresponding to infrastructure.

Example 21 includes a system to adjust scheduling of service requests, the system comprising an infrastructure
processing unit including infrastructure processing unit sidecar circuitry to obtain telemetry information corresponding to service instances at a plurality of edge devices, cause transmission of the telemetry information to an infrastructure, intercept a service request from one of the plurality of edge devices, and cause transmission of the service request to the infrastructure, and the infrastructure including infrastructure sidecar circuitry to obtain the intercepted service request, and determine whether to adjust scheduling of the service request.

Example 22 includes the system of example 21, wherein the processing unit sidecar circuitry is to compare the telemetry information to historic telemetry information, and adjust a frequency of the transmission of the telemetry information to the infrastructure based on the comparison.

Example 23 includes the system of example 21, wherein the infrastructure sidecar circuitry adjusts the scheduling of the service request by at least one of performing load-balancing or auto-scaling.

Example 24 includes the system of example 21, further including generating a service topology mapping based on the telemetry information, the determination of whether to adjust the scheduling of the service request being based on the service topology mapping.

Example 25 includes the system of example 24, wherein the service topology mapping maps service instances and the edge devices across a plurality of platforms.

Example 26 includes an apparatus to facilitate service proxying, the apparatus including interface circuitry to obtain an intercepted service request, and processor circuitry including one or more of at least one of a central processing unit, a graphic processing unit or a digital signal processor, the at least one of the central processing unit, the graphic processing unit or the digital signal processor having control circuitry to control data movement within the processor circuitry, arithmetic and logic circuitry to perform one or more first operations corresponding to instructions, and one or more registers to store a result of the one or more first operations, the instructions in the apparatus, a Field Programmable Gate Array (FPGA), the FPGA including logic gate circuitry, a plurality of configurable interconnections, and storage circuitry, the logic gate circuitry and interconnections to perform one or more second operations, the storage circuitry to store a result of the one or more second operations, or Application Specific Integrated Circuitry (ASIC) including logic gate circuitry to perform one or more third operations, the processor circuitry to perform at least one of the first operations, the second operations or the third operations to instantiate target endpoint circuitry to identify an active service instance corresponding to the intercepted service request, telemetry monitoring circuitry to compare first telemetry data corresponding to the service instance to a service quality metric, and the target endpoint circuitry to select a node to service the service request based on the comparison and further telemetry data, and cause transmission of the service request to the node.

Example 27 includes the apparatus of example 26, wherein the processor circuitry is to determine that the service instance corresponds to the intercepted service request based on a topology mapping of service instances across a plurality of platforms.

Example 28 includes the apparatus of example 26, wherein the processor circuitry is to update a topology mapping based on second
telemetry data from the service instance.

Example 29 includes the apparatus of example 26, wherein the processor circuitry is to select the node by processing a topology mapping of service instances to identify a first group of service instances capable of servicing the service request, and generating a second group of service instances by filtering out service instances in the first group that do not have capacity to service the service request.

Example 30 includes the apparatus of example 29, wherein the processor circuitry is to determine the first group based on at least one of capacity information or response time information from the service instances.

Example 31 includes the apparatus of example 29, wherein the processor circuitry is to, when the second group of service instances is empty, initiate a new service instance to service the service request.

Example 32 includes the apparatus of example 26, wherein the node is at least one of an edge appliance, an edge device, a virtual machine, or an infrastructure device.

Example 33 includes the apparatus of example 26, wherein the processor circuitry is to, when the service request corresponds to a non-default load balancing protocol, select the node based on the non-default load balancing protocol.

Example 34 includes the apparatus of example 33, wherein the processor circuitry is to validate that the node selected based on the non-default load balancing protocol will not result in an error.

Example 35 includes the apparatus of example 26, wherein the further telemetry data includes at least one of second telemetry data corresponding to the node or third telemetry data corresponding to infrastructure.

From the foregoing, it will be appreciated that example methods, apparatus, and articles of manufacture have been disclosed herein to facilitate service proxying. Disclosed examples provide an infrastructure processing unit (IPU) to utilize infrastructure and appliance telemetry to guide smart scheduling across the edge and edge-to-cloud migration of services. Using examples disclosed herein, the role of the IPU and the infrastructure is extended through the use of IPU sidecar circuitry (e.g., logic implemented in a container) and infrastructure sidecar circuitry to achieve low latency service management with guidance from software, while making service mesh execution efficient and extensible. By including the IPU sidecar circuitry in the IPU, as opposed to having networking sidecar circuitry in each node of a platform, the resources of the nodes in a platform are freed up for other tasks while the IPU manages network communications. For this integration of dynamic load balancing, auto-scaling, and local orchestration, examples disclosed herein create an "IPU-mesh" in which one or more IPUs that host application sidecars also communicate with an infrastructure extension (e.g., infrastructure sidecar circuitry). Accordingly, disclosed methods, apparatus, and articles of manufacture are directed to one or more improvement(s) in the functioning of a computer.

Although certain example methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.
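For illustration, the selection flow recited in Examples 1, 4, 5, and 6 might be sketched as follows in Python; the `ServiceInstance` fields, the thresholds, and the tie-breaking rule are assumptions of the sketch rather than details of the disclosed circuitry:

```python
# Illustrative sketch: identify capable instances, filter those without
# capacity or acceptable latency, and fall back to a new instance when the
# filtered group is empty.
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    node: str
    service: str
    capacity: float          # fraction of headroom remaining (hypothetical)
    response_time_ms: float

def select_node(topology, request_service, quality_metric_ms):
    # First group: instances capable of servicing the request (Example 4).
    capable = [i for i in topology if i.service == request_service]
    # Second group: drop instances lacking capacity or acceptable response
    # time (Examples 4 and 5).
    eligible = [i for i in capable
                if i.capacity > 0.1 and i.response_time_ms <= quality_metric_ms]
    if not eligible:
        return "initiate-new-instance"   # empty group (Example 6)
    return min(eligible, key=lambda i: i.response_time_ms).node

topology = [ServiceInstance("edge-a", "detect", 0.05, 12.0),
            ServiceInstance("edge-b", "detect", 0.60, 20.0)]
print(select_node(topology, "detect", quality_metric_ms=25.0))  # -> edge-b
```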
Apparatuses for providing external terminals of a semiconductor device are described. An example apparatus includes: a pad formation area including a plurality of pads disposed at an edge of the apparatus; a peripheral circuit area including a plurality of circuit blocks coupled to a memory cell array, each circuit block of the plurality of circuit blocks including a via disposed at a side opposite to the pad formation area with respect to each circuit block; and a plurality of conductors, each conductor coupling the via to the corresponding pad, and crossing over, at least in part, an area in the peripheral circuit area that is outside the circuit block comprising the via.
1. An apparatus comprising a semiconductor chip, wherein the semiconductor chip comprises: an edge defining a terminal of the semiconductor chip; a pad formation region along the edge, the pad formation region comprising a plurality of pads disposed along the edge; a circuit block including a transistor, a via coupled to the transistor, and a first circuit associated with the transistor; and a distribution conductor that couples the via to a corresponding one of the plurality of pads, wherein the first circuit is disposed between the pad formation region and the via.

2. The apparatus of claim 1, wherein said transistor is disposed between said first circuit and said via.

3. The apparatus of claim 1, wherein the circuit block further includes a second circuit associated with the first circuit, and wherein the second circuit is formed in the pad formation region under at least one of the plurality of pads.

4. The apparatus of claim 3, wherein the at least one of the plurality of pads is different from the corresponding one of the plurality of pads, and the corresponding one of the plurality of pads is positioned away from the circuit block.

5. The apparatus of claim 1, wherein the semiconductor chip further comprises a multilayer wiring structure, wherein the multilayer wiring structure comprises at least a first wiring layer and a second wiring layer, the first wiring layer comprising one or more first conductors and a first interlayer insulating film covering the one or more first conductors, and the second wiring layer comprising one or more second conductors and a second interlayer insulating film covering the one or more second conductors, and wherein the distribution conductor comprises a distributed conductive layer having a thickness greater than a thickness of each of the one or more first and second conductors.

6. The apparatus of claim 5, wherein said distributed conductive layer has a thickness that is at least five times the thickness of each of said one or more first and second conductors.

7. The apparatus of claim 5, wherein said distributed conductive layer has a thickness that is more than five times the thickness of each of said one or more first and second conductors.

8. The apparatus of claim 1, wherein said pads and said distribution conductor are made of a distributed conductive layer.

9. The apparatus of claim 8, wherein said distributed conductive layer is made of an intermediate-conductivity material.

10. A semiconductor chip comprising: a pad included in a pad formation region, the pad configured to be coupled to an external circuit; and a first circuit including a via coupled to the pad, wherein the via is disposed along a first side of the first circuit, the first side being opposite to a second side of the first circuit, and wherein the pad formation region extends along the second side of the first circuit.

11. The semiconductor chip of claim 10, further comprising: a distributed conductive layer comprising the pad and a conductor coupling the pad and the via; a first wiring layer including a first metal layer; and a second wiring layer including a second metal layer between the first wiring layer and the distributed conductive layer, wherein the via is made of the second metal layer.

12. The semiconductor chip of claim 11, wherein said second metal layer is made of an intermediate-conductivity material.

13. The semiconductor chip according to claim 11, wherein at least a portion of said first circuit is disposed on said first wiring layer.

14. The semiconductor chip according to claim 13, further comprising a semiconductor substrate located at an opposite side of the second wiring layer with respect to the first wiring layer, wherein the first circuit comprises at least one transistor at least partially made up of the semiconductor substrate.
15. The semiconductor chip according to claim 11, wherein said first metal layer is made of a low-conductivity material, and wherein the first circuit includes at least one resistor made of the first metal layer.

16. The semiconductor chip of claim 12, wherein said first circuit comprises: a read path including a first conversion circuit configured to receive parallel first read data from a memory cell array, convert the parallel first read data into serial second read data, and further configured to provide the second read data to the via; and a write path comprising a second conversion circuit configured to receive serial first write data from the via, convert the first write data into parallel second write data, and further configured to provide the second write data to the memory cell array.

17. An apparatus comprising: a pad formation region comprising a plurality of pads disposed at an edge of the apparatus; a peripheral circuit region including a plurality of circuit blocks coupled to a memory cell array, each circuit block of the plurality of circuit blocks including a via disposed at a side opposite to the pad formation region with respect to each circuit block; and a plurality of conductors, each conductor coupling the via to the corresponding pad and at least partially spanning an area in the peripheral circuit region outside of the circuit block including the via.

18. The apparatus of claim 17, wherein each of the conductors coupling the via to the corresponding pad spans an adjacent circuit block of the circuit block including the via.

19. The apparatus of claim 17, wherein a total width of said plurality of circuit blocks along said edge is greater than a total width of said plurality of pads along said edge.

20. The apparatus of claim 17, further comprising a clock line coupled to each circuit block of the plurality of circuit blocks and configured to provide a clock signal.
Wiring with external terminals

Background

High data reliability, high-speed memory access, reduced chip size, and reduced power consumption are features required of semiconductor memories. In conventional peripheral circuits for semiconductor devices, for example, pads and data queue circuits (or data input/output circuits) are arranged across layers in a corresponding manner. The data queue circuit or data input/output circuit is hereinafter collectively referred to as a "DQ circuit." FIG. 1 is a schematic view of a peripheral circuit surrounding an external terminal in a semiconductor device. Each pad configured to be coupled to an external circuit outside the semiconductor device is located adjacent to (e.g., immediately above) its respective DQ circuit so that the wires between the pads and the DQ circuits have the same length, said length being sufficiently short to have the same low impedance. In recent years, efforts have been made to reduce the area of the peripheral circuit region occupied by the peripheral circuits contained on the semiconductor dies of memory devices. For example, the size of each DQ circuit has become smaller, which improves the driving capability for faster operation through shorter wiring (for example, the clock signal line CLK that supplies a clock signal to the DQ circuit).

Summary of the invention

An example device in accordance with an embodiment of the present invention can include a semiconductor chip. The semiconductor chip may include: an edge defining a terminal of the semiconductor chip; a pad formation region along the edge; a circuit block; and a distribution conductor. The pad formation region can include a plurality of pads disposed along the edge. The circuit block can include a transistor, a via coupled to the transistor, and a first circuit associated with the transistor. The distribution conductor can couple the via to a corresponding one of the plurality of pads. The first circuit may be disposed between the pad formation region and the via.

An exemplary semiconductor chip in accordance with an embodiment of the present invention can include a pad that can be included in a pad formation region and that can be coupled to an external circuit, and a first circuit that can include a via coupled to the pad. The via may be disposed along a first side of the first circuit, the first side being opposite the second side of the first circuit.
The pad formation region may extend along the second side of the first circuit.

Another example apparatus in accordance with an embodiment of the present invention can include: a pad formation region that can include a plurality of pads disposed at an edge of the apparatus; a peripheral circuit region that can include a plurality of circuit blocks coupled to a memory cell array, wherein each of the plurality of circuit blocks may include a via disposed at a side opposite to the pad formation region with respect to each circuit block; and a plurality of conductors, wherein each conductor may couple the via to the corresponding pad and may at least partially span an area of the peripheral circuit region that is outside the circuit block including the via.

Brief description of the drawings

FIG. 1 is a schematic diagram of a prior-art peripheral circuit around an external terminal in a semiconductor device.
FIG. 2 is a block diagram of a semiconductor device in accordance with the present invention.
FIG. 3 is a layout view of a semiconductor device in accordance with an embodiment of the present invention.
FIG. 4 is a schematic diagram of circuitry around an external terminal in a semiconductor device in accordance with the present invention.
FIG. 5A is a block diagram of a DQ circuit in a semiconductor device in accordance with the present invention.
FIG. 5B is a layout diagram of a DQ circuit and pad in a semiconductor device in accordance with one embodiment of the present invention.
FIG. 6 is a layout diagram of a plurality of DQ circuits, DQS circuits, and a plurality of pads over the plurality of DQ circuits and DQS circuits in a semiconductor device in accordance with an embodiment of the present invention.
FIG. 7 is a circuit diagram of a unit circuit in an output buffer in a DQ circuit in a semiconductor device in accordance with the present invention.
FIG. 8 is a schematic illustration of circuitry around an external terminal in a semiconductor device in accordance with the present invention.

Detailed description

Various embodiments of the present invention will be described in detail below with reference to the drawings. The detailed description refers to the accompanying drawings, which show, by way of illustration, specific embodiments. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized, and structural, logical, and electrical changes may be made without departing from the scope of the invention. The various embodiments disclosed herein are not necessarily mutually exclusive, as some disclosed embodiments may be combined with one or more other disclosed embodiments to form new embodiments.

As previously described, the size of each DQ circuit has become smaller; however, reducing the size of each pad remains challenging. Due to the difference in size between the pads and the DQ circuits, the wiring between the DQ circuits and between the DQ circuits and the pads becomes longer. Since the inherent impedance of each wiring depends on its length, longer wiring tends to result in higher power consumption.

FIG. 2 is a block diagram of a semiconductor device 10 in accordance with one embodiment of the present invention. For example, the semiconductor device 10 can be a DDR4 SDRAM integrated into a single semiconductor chip. The semiconductor device 10 can be mounted on an external substrate 2, which is a memory module substrate, a motherboard, or the like.
The external substrate 2 employs an external resistor RZQ, which is connected to the calibration terminal ZQ 27 of the semiconductor device 10. The external resistor RZQ is the reference impedance of the ZQ calibration circuit 38. In this embodiment, the external resistor RZQ is coupled to a ground potential.

As shown in FIG. 2, the semiconductor device 10 includes a memory cell array 11. The memory cell array 11 includes a plurality of memory banks, each including a plurality of word lines WL, a plurality of bit lines BL, and a plurality of memory cells MC disposed at intersections of the plurality of word lines WL and the plurality of bit lines BL. The selection of a word line WL is performed by a row decoder 12, and the selection of a bit line BL is performed by a column decoder 13. Sense amplifiers 18 are coupled to corresponding bit lines BL and to local I/O line pairs LIOT/B. The local I/O line pairs LIOT/B are connected to main I/O line pairs MIOT/B via transfer gates TG 19 serving as switches.

Turning to the plurality of external terminals included in the semiconductor device 10, the plurality of external terminals include an address terminal 21, a command terminal 22, a clock terminal 23, a data terminal 24, power supply terminals 25 and 26, and a calibration terminal ZQ 27. According to an embodiment, an input signal block 41 may include the address terminal 21, the command terminal 22, and the clock terminal 23, which may include input buffers to be described later. A data interface block 42 includes the data terminal 24. The data terminal 24 can be coupled to an output buffer for a read operation of the memory. Alternatively, the data terminal 24 can be coupled to an input buffer for read/write access to the memory. FIG. 2 shows an example of a dynamic random access memory (DRAM); however, any device having an external terminal for signal input/output may include an external terminal of an embodiment of the present invention.

The address terminal 21 is supplied with an address signal ADD and a bank address signal BADD. The address signal ADD and the bank address signal BADD supplied to the address terminal 21 are transferred to an address decoder 32 via an address input circuit 31. The address decoder 32 receives the address signal ADD and supplies a decoded row address signal XADD to the row decoder 12 and a decoded column address signal YADD to the column decoder 13. The address decoder 32 also receives the bank address signal BADD and supplies the bank address signal BADD to the row decoder 12, the column decoder 13, and a switch control circuit 14.

The command signal COM is supplied to the command terminal 22. The command signal COM can include one or more separate signals. The command signal COM input to the command terminal 22 is input to a command decoder 34 via a command input circuit 33. The command decoder 34 decodes the command signal COM to generate various internal command signals. For example, the internal commands may include a row command signal for selecting a word line, a column command signal (e.g., a read command or a write command) for selecting a bit line, and a calibration signal ZQC provided to the ZQ calibration circuit 38.

Accordingly, when a read command is issued and a row address and a column address are supplied in a timely manner with the read command, read data is read from the memory cells MC in the memory cell array 11 designated by the row address and the column address.
The read data DQ is output to the outside from the data terminal 24 via the read/write amplifier 15 and the input/output circuit 17. Similarly, when a write command is issued and a row address and a column address are supplied in a timely manner with the write command, and write data DQ is then supplied to the data terminal 24, the write data DQ is supplied to the memory cell array 11 via the input/output circuit 17 and the read/write amplifier 15 and is written into the memory cells MC designated by the row address and the column address.

The external clock signals CK and /CK are supplied to the clock terminal 23. These external clock signals CK and /CK are complementary to each other and are supplied to a clock input circuit 35. The clock input circuit 35 receives the external clock signals CK and /CK and generates an internal clock signal ICLK. The internal clock signal ICLK is supplied to an internal clock generator 36, and a phase-controlled internal clock signal LCLK is generated based on the received internal clock signal ICLK and a clock enable signal CKE from the command input circuit 33. Although not limited thereto, a DLL circuit can be used as the internal clock generator 36. The phase-controlled internal clock signal LCLK is supplied to the input/output circuit 17 and serves as a timing signal for determining the output timing of the read data DQ. The internal clock signal ICLK is also supplied to a timing generator 37, from which various internal clock signals can be generated.

The power supply potentials VDD and VSS are supplied to the power supply terminal 25. These power supply potentials VDD and VSS are supplied to an internal power supply circuit 39. The internal power supply circuit 39 generates various internal potentials VPP, VOD, VARY, VPERI, and the like, as well as a reference potential ZQVREF, based on the power supply potentials VDD and VSS. The internal potential VPP is mainly used in the row decoder 12, the internal potentials VOD and VARY are mainly used in the sense amplifiers 18 included in the memory cell array 11, and the internal potential VPERI is used in many other circuit blocks. The reference potential ZQVREF is used in the ZQ calibration circuit 38.

The power supply potentials VDDQ and VSSQ are supplied to the power supply terminal 26. These power supply potentials VDDQ and VSSQ are supplied to the input/output circuit 17. The power supply potentials VDDQ and VSSQ may be the same as the power supply potentials VDD and VSS supplied to the power supply terminal 25, respectively. However, the power supply potentials VDDQ and VSSQ can be dedicated to the input/output circuit 17, so that power supply noise generated by the input/output circuit 17 does not propagate to other circuit blocks.

The calibration terminal ZQ 27 is connected to the ZQ calibration circuit 38. When activated by the calibration signal ZQC, the ZQ calibration circuit 38 performs a calibration operation with reference to the impedance of the external resistor RZQ and the reference potential ZQVREF. The impedance code ZQCODE obtained by the calibration operation is supplied to the input/output circuit 17, and thus the impedance of an output buffer (not shown) included in the input/output circuit 17 is specified.
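For illustration only, the following Python sketch models the calibration idea in simplified form: an impedance code is stepped until the voltage divider formed by an adjustable driver leg and the external resistor RZQ crosses the reference ZQVREF. The leg model and step rule are invented and do not reflect the actual ZQ calibration circuit 38:

```python
# Illustrative sketch: successive stepping of an impedance code against a
# reference. All values are invented for the example.
RZQ = 240.0      # external reference resistor (ohms)
ZQVREF = 0.5     # reference level as a fraction of VDDQ

def leg_resistance(code: int) -> float:
    """Hypothetical driver leg model: a higher code lowers the resistance."""
    return 480.0 / (code + 1)

def calibrate(max_code: int = 31) -> int:
    for code in range(max_code + 1):
        # Voltage at the ZQ node of the divider formed with RZQ.
        node_voltage = RZQ / (RZQ + leg_resistance(code))
        if node_voltage >= ZQVREF:   # comparator flips at the reference
            return code              # resulting impedance code (ZQCODE-like)
    return max_code

print(calibrate())  # code at which the leg resistance first reaches ~RZQ
```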
FIG. 3 is a layout view of the semiconductor device 10 in accordance with an embodiment of the present invention. The semiconductor device 10 can have edges 50a, 50b, 50c, and 50d that define the ends of the semiconductor device 10. The edges 50b and 50d can extend along a first direction 57a, and the edges 50a and 50c can extend along a second direction 57b, which is substantially perpendicular to the first direction 57a. For example, FIG. 3 can be a plan view of a layout of the semiconductor device 10, including circuitry and array regions, viewed from a third direction (not shown) perpendicular to the first direction 57a and the second direction 57b. The semiconductor device 10 may include a pad formation region 51, a peripheral circuit region 52, and a memory cell array region 53 aligned in this order in the first direction 57a. The data interface block 42 of FIG. 2 can be disposed in the pad formation region 51 along the edge 50a. The peripheral circuit region 52 may be disposed between the pad formation region 51 and the memory cell array region 53. The pad formation region 51 can include a plurality of pads 54 disposed along the edge 50a. For example, the plurality of pads 54 may include the data terminals 24 and the power supply terminals 26 of FIG. 2. The memory cell array region 53 may include, for example, the memory cell array 11 of FIG. 2.

FIG. 4 is a schematic diagram of circuitry around external terminals in the semiconductor device 10, in accordance with an embodiment of the present invention. For example, FIG. 4 may be a plan view of the circuitry around the external terminals in the semiconductor device 10 viewed from the third direction (not shown). The semiconductor device 10 may include the pad formation region 51, the peripheral circuit region 52, and the memory cell array region 53 in the first direction 57a. The plurality of pads 54 in the pad formation region 51 can be disposed along the edge 50a, which extends along the second direction 57b, the second direction being substantially perpendicular to the first direction 57a. The plurality of pads 54 may include DQ0 to DQ7 pads for reading or writing 8-bit data queues (DQ0 to DQ7), a plurality of VDDQ pads for receiving a first power supply voltage (VDDQ), a plurality of VSS pads for receiving a second power supply voltage (VSS, such as a ground voltage), DQS_t and DQS_c pads for receiving true and complementary data strobe signals (DQS_t and DQS_c), and a DM pad for receiving a data mask (DM) signal. A plurality of circuit blocks, including data queue (DQ) circuits 60 for the data queues (DQ0 to DQ7) for reading or writing 8-bit data aligned in the second direction 57b, a data strobe (DQS) circuit 60', and a data mask (DM) circuit 60", may be disposed across the pad formation region 51 and the peripheral circuit region 52. The total width of the plurality of pads 54 along the edge 50a in the second direction 57b may be substantially greater than the total width of the plurality of DQ circuits 60, 60', and 60" along the edge 50a in the second direction 57b. Each of the plurality of DQ circuits 60 can include a plurality of vias disposed along a first side of each DQ circuit 60, the first side being opposite the second side of each DQ circuit 60. The pad formation region 51 extends along the second side of the plurality of DQ circuits 60, 60', and 60". Each of the plurality of vias may be coupled to a corresponding one of the plurality of pads 54 by a corresponding wiring 56. For example, the wiring 56 may be a conductor made of a distributed conductive layer (for example, an embedded redistribution layer [iRDL]). For example, the DQ1 circuit 60 for DQ1 may include a via 55a and a via 55b. The via 55a may be coupled to the pad 54 for DQ1 (the DQ1 pad), and the via 55b may be coupled to a pad 54 for VSS (a VSS pad).
A cross section 100 may be defined by the width of a line through the DQ1 pad 54 and the via 55a and lines extending in the third direction, the third direction being perpendicular to the first direction 57a and the second direction 57b. The cross section 100 will be referred to later in the present disclosure. Some of the wirings 56 may cross areas outside the corresponding circuit blocks. For example, the wiring 56 coupling the DQ1 pad 54 and the via 55a for DQ1 may span the DQ0 circuit 60 and an area outside the circuit blocks. For example, the DQ5 pad 54 may be disposed in an area above the DQ4 circuit 60, a circuit block adjacent to the DQ5 circuit 60, and across the DM circuit 60", a circuit block adjacent to the DQ4 circuit 60. Accordingly, the wiring 56 that couples the DQ5 pad and the via 55 for DQ5 in the DQ5 circuit 60 can straddle the adjacent DQ4 circuit 60. The DQ6 pad 54 can be disposed in a region above the DQ5 circuit 60, and the wiring 56 coupling the DQ6 pad 54 and the DQ6 circuit 60 can span at least a portion of the DQ6 circuit 60 and the adjacent DQ5 circuit 60.

FIG. 5A is a block diagram of a DQ circuit 60 in the semiconductor device 10 in accordance with an embodiment of the present invention. For example, the DQ circuit 60 can perform a read operation of data from a plurality of memory cells in the memory cell array region 53 to the via 1 55a via a read data path 60a. The DQ circuit 60 can perform a write operation of data from the via 1 55a to the plurality of memory cells in the memory cell array region 53 via a write data path 60b. The via 2 55b can be positioned proximate to the via 1 55a; however, the via 2 55b can be coupled to a VDDQ pad or a VSS pad to receive a supply voltage, and is thus external to the DQ circuit 60.

For example, the read data path 60a may include a read data storage circuit (RDSC) 61, a read clock synchronization circuit (RCSC) 62, a driver circuit (DC) 63, an output buffer (OB) 68, and an output ESD (electrostatic discharge) protection circuit (OEP) 68'. The read data storage circuit (RDSC) 61 can receive data read from the plurality of memory cells in the memory cell array region 53 and store the data. For example, the read clock synchronization circuit (RCSC) 62 can receive a clock signal for read operations (read CLK) and data from the read data storage circuit (RDSC) 61. The read clock synchronization circuit (RCSC) 62 can convert parallel multi-bit data into data in chronological order (in a serial format) and provide the chronological data using the read CLK signal. The driver circuit (DC) 63 may adjust the slew rate of the output buffer (OB) 68 based at least in part on the calibration signal ZQ (eg, via the calibration terminal ZQ 27 in FIG. 2). The output buffer (OB) 68 can provide the data in the serial format to the via 1 55a. The output ESD protection circuit (OEP) 68' protects data transmitted from the output buffer (OB) 68 to the via 1 55a from electrostatic charge.

For example, the write data path 60b may include a write data driver circuit (WDDC) 64, a write clock synchronization circuit (WCSC) 65, a timing adjustment circuit (TAC) 66, an input buffer (IB) 67, and an input ESD protection circuit (IEP) 67'. The input ESD protection circuit (IEP) 67' protects data transmitted from the via 1 55a to the input buffer (IB) 67 from electrostatic charge. The input buffer (IB) 67 can receive data from the via 1 55a, a reference voltage (VREF), and a data strobe clock signal (DQS CLK).
In response to the data strobe clock signal (DQS CLK), the input buffer (IB) 67 can use the reference voltage (VREF) to latch data from the via 1 55a to determine the value of the data (eg, a logic high or logic low level). In view of a data setup time tDS and a data hold time tDH, the timing adjustment circuit (TAC) 66 can adjust the timing at which data is provided from the input buffer (IB) 67 to the subsequent stage of the write data path 60b. For example, the data setup time tDS can describe the setup time of input data at the pad 54 relative to the rising and falling edges of the data strobe signal DQS. The data hold time tDH can describe the hold time of input data at the pad 54 relative to the rising and falling edges of the data strobe signal DQS. For example, the write clock synchronization circuit (WCSC) 65 can receive a clock signal for write operations (write CLK) and data from the timing adjustment circuit (TAC) 66. The write clock synchronization circuit (WCSC) 65 can convert data in the serial format into parallel multi-bit data, and supply the parallel multi-bit data to the write data driver circuit (WDDC) 64 in response to the write CLK signal. The write data driver circuit (WDDC) 64 can include a plurality of drivers that can provide the parallel data to memory cells in the memory cell array region 53.

FIG. 5B is a layout diagram of a DQ circuit 60 and a pad 54 included in the semiconductor device 10, in accordance with an embodiment of the present invention. For example, FIG. 5B may be a plan view of the layout of the DQ circuit 60, the pad 54, the via 1 55a, and the via 2 55b in the semiconductor device 10, viewed from the third direction (not shown) perpendicular to the first direction 57a and the second direction 57b. For example, the DQ circuit 60 can be the DQ circuit 60 in FIG. 5A. For example, the read data storage circuit (RDSC) 61 and the write data driver circuit (WDDC) 64 may be disposed under the pad 54, which is coupled to one of the plurality of DQ circuits 60, the DQS circuit 60', and the DM circuit 60" in FIG. 4. For example, one of the plurality of DQ circuits 60, the DQS circuit 60', and the DM circuit 60" may be disposed across the pad formation region 51 and the peripheral circuit region 52. At least a portion of the write data driver circuit (WDDC) 64 and the read data storage circuit (RDSC) 61 may be formed in the pad formation region 51.

For example, the read clock synchronization circuit (RCSC) 62 can be placed adjacent to the read data storage circuit (RDSC) 61 in the first direction 57a. The driver circuit (DC) 63 can be placed adjacent to the read clock synchronization circuit (RCSC) 62 in the first direction 57a. The output buffer (OB) 68 can be disposed in the first direction 57a between the driver circuit (DC) 63 and the output ESD protection circuit (OEP) 68' below the via 1 55a. Therefore, the circuit components of the read data path 60a, including the read data storage circuit (RDSC) 61, the read clock synchronization circuit (RCSC) 62, the driver circuit (DC) 63, the output buffer (OB) 68, and the output ESD protection circuit (OEP) 68', can be placed in the region between the pad 54 and the via 1 55a as viewed from the third direction.

For example, the write clock synchronization circuit (WCSC) 65 can be placed adjacent to the read data storage circuit (RDSC) 61 in the first direction 57a.
The timing adjustment circuit (TAC) 66 can be placed adjacent to the write clock synchronization circuit (WCSC) 65 in the first direction 57a. The input buffer (IB) 67 can be placed in the first direction 57a between the timing adjustment circuit (TAC) 66 and the input ESD protection circuit (IEP) 67', which is located in the first direction 57a from the VDDQ/VSS ESD protection circuit (VVEP) 69 below the via 2 55b; the VDDQ/VSS ESD protection circuit (VVEP) 69 can protect the voltage signal from the via 2 55b (having the power supply potential VDDQ or VSS) from electrostatic-charge-induced failure. For example, the via 1 55a can be located in the second direction 57b from the via 2 55b, and the output ESD protection circuit (OEP) 68' can be located in the second direction 57b from the VDDQ/VSS ESD protection circuit (VVEP) 69. Therefore, the circuit components of the write data path 60b, including the write data driver circuit (WDDC) 64, the write clock synchronization circuit (WCSC) 65, the timing adjustment circuit (TAC) 66, the input buffer (IB) 67, and the input ESD protection circuit (IEP) 67', can be placed in the region between the pad 54 and the via 2 55b as viewed from the third direction.

FIG. 6 is a layout diagram of a plurality of DQ circuits 60, a DQS circuit 60', and a plurality of pads 54 over the plurality of DQ circuits 60 and the DQS circuit 60' in the semiconductor device 10, in accordance with an embodiment of the present invention. For example, FIG. 6 may be a plan view of the plurality of pads 54 over the plurality of DQ circuits 60 and the DQS circuit 60' in the semiconductor device 10 viewed from the third direction (not shown). The plurality of pads 54 may include a DQS_t pad 54a and a DQS_c pad 54b, which may be located above the read data storage circuit (RDSC) and the write data driver circuit (WDDC) of the DQ circuit 60 for DQ0. The plurality of pads 54 may include the DQS_c pad and a VSS pad, which may be located above the read data storage circuit (RDSC) and the write data driver circuit (WDDC) of the DQ circuit 60 for DQ1. The plurality of pads 54 can include a VSS pad, a DQ4 pad, and a VDDQ pad, which can be placed over the DQS circuit 60'. The plurality of pads 54 may include a VDDQ pad and a DQ7 pad, which may be located above the read data storage circuit (RDSC) and the write data driver circuit (WDDC) of the DQ circuit 60 for DQ7. As previously shown in FIG. 4, the DQS_t pad 54a on the DQ circuit 60 for DQ0 can be coupled to a via 55c in the DQS circuit 60', and the DQS_c pad 54b on the DQ circuits 60 for DQ0 and DQ1 can be coupled to a via 55c in the DQS circuit 60'. A write clock line (write CLK) can be coupled to the write clock synchronization circuit (WCSC) in each DQ circuit 60 to provide the clock signal for write operations (write CLK). A read clock line (read CLK) can be coupled to the read clock synchronization circuit (RCSC) in each DQ circuit 60 to provide the clock signal for read operations (read CLK). The data strobe clock signal (DQS CLK) may be provided from the DQS circuit 60' to the input buffer (IB) in each DQ circuit 60 via a data strobe clock line (DQS CLK).

A DQ circuit 60 can be located, as viewed from the third direction, between a pad and a via of another DQ circuit 60, wherein the pad can be coupled to a DQ circuit 60 that is not under the pad.
In other words, the pad coupled to a DQ circuit 60 can be located outside that DQ circuit 60 as viewed from the third direction.

FIG. 7 is a circuit diagram of a unit circuit 70 in the output buffer 68 in a DQ circuit 60 in the semiconductor device 10 in accordance with an embodiment of the present invention. For example, the output buffer 68 in FIGS. 5A and 5B can include a plurality of unit circuits 70 (not shown). Each unit circuit 70 can include a plurality of transistor circuits to provide a desired output impedance based on the ZQ calibration and a desired slew rate based on a slew rate calibration, as adjusted by the driver circuit (DC) 63. For example, the unit circuit 70 of the output buffer 68 may include, sequentially coupled in series between the power supply potentials VDDQ and VSS, a transistor T1 that receives an adjustment signal (adj-sig), a transistor T2 that receives a pull-up control signal (ctrl-sig), a pull-up resistor R1, a pull-down resistor R2, and a transistor T3 that receives a pull-down control signal (PullDown ctrl-sig). For example, each of the transistors T1, T2, and T3 may be of an N-channel type. A node coupling the pull-up resistor R1 and the pull-down resistor R2 can be coupled to a via 55, which can be further coupled to a pad 54 for a DQ (eg, DQ0, DQ1, ..., DQ7) to output read data.

FIG. 8 is a schematic illustration of circuitry around an external terminal in a semiconductor device in accordance with an embodiment of the present invention. For example, FIG. 8 may be a cross-sectional view of the circuitry around an external terminal in the semiconductor device 10 along the cross section 100 in FIG. 4. The semiconductor device 10 may include a semiconductor substrate 89, an insulating material 87 that electrically insulates the semiconductor substrate 89 from a plurality of wiring layers (including a first wiring layer 81 to a fourth wiring layer 84) in a multilayer wiring structure, a conductor 85, and a passivation layer 86. Each of the first to fourth wiring layers 81 to 84 may include a metal layer for forming conductive wirings and an interlayer insulating film serving as an insulator to isolate the metal layer from the metal layers of the other wiring layers. Circuit elements in the metal layer of one wiring layer and circuit elements in the metal layers of the other wiring layers may be coupled through contact plugs and/or conductive vias. The DQ circuit 60, the DQS circuit 60', and the DM circuit 60" may be provided through the first wiring layer 81 to the fourth wiring layer 84.

Table 1 shows an example of the materials and thicknesses of the wiring layers.

Table 1

For example, a gate 91a of a transistor in the DQ circuit 60 may be disposed in the insulating material 87, and source/drain diffusion regions (source or drain regions) 91b of the transistor may be disposed in the semiconductor substrate 89. One of the source/drain diffusion regions 91b may be coupled, via a contact plug 880, a low-conductivity metal layer (metal 0; a low-conductivity material such as tungsten) 81a, and another contact plug 881, to a circuit element made of a high-conductivity metal layer (metal 1; a high-conductivity material such as copper) 82a in the second wiring layer 82. The circuit element in the metal layer (metal 1) 82a may also be coupled, via another contact plug 881, to a conductor made of the metal layer (metal 0) 81a formed in the first wiring layer 81; the metal layer (metal 0) 81a, such as tungsten, is usually very thin and has a high impedance. The conductor can be, for example, the pull-up resistor R1 or the pull-down resistor R2.
The first interlayer insulating film 81b may cover the conductor made of the metal layer (metal 0) 81a, including the pull-up resistor R1 or the pull-down resistor R2. The resistor may be coupled to another conductor made of the metal layer (metal 1) 82a in the second wiring layer 82. The second interlayer insulating film 82b may cover the other conductor made of the metal layer (metal 1) 82a. The other conductor made of the metal layer (metal 1) 82a may be coupled through a conductive via 882 to a circuit element made of a high-conductivity metal layer (metal 2; a high-conductivity material such as copper) 83a in the third wiring layer 83. The third interlayer insulating film 83b may cover the circuit element made of the metal layer (metal 2) 83a. The circuit element may be coupled to the via 1 55 made of an intermediate-conductivity metal layer (metal 3; an intermediate-conductivity material such as aluminum) 84a in the fourth wiring layer 84. The fourth interlayer insulating film 84b may cover the via 1 55 made of the metal layer (metal 3) 84a. In this manner, the source or drain region 91b of the transistor in the DQ circuit 60 in the semiconductor substrate 89 can be coupled to the via 1 55 in the fourth wiring layer 84 through the first wiring layer 81 to the fourth wiring layer 84 via the contact plugs 880 and 881 and the conductive via 882. Similarly, the vias 55 made of the metal layer (metal 3) 84a in the fourth wiring layer 84 in the DQ circuits 60 and the DQS circuit 60' can be coupled to transistors in the semiconductor substrate 89 via contact plugs and conductive vias.

The fourth wiring layer 84 may include the fourth interlayer insulating film 84b, which is usually very thick, covering the metal layer (metal 3) 84a. The fourth interlayer insulating film 84b may have a hole, and the via 1 55 made of the metal layer (metal 3) 84a may be in contact with the conductor 85 at the hole. The conductor 85 may be made of a distributed conductive layer (for example, an embedded redistribution layer [iRDL]) formed on the interlayer insulating film of the fourth wiring layer. For example, the distributed conductive layer can be made of an intermediate-conductivity material, such as aluminum, having a thickness of about 4.5 um. For example, the conductor 85 can have a width of approximately 8 um in order to reduce the impedance of the conductor 85. A DQ pad 54 (eg, the DQ1 pad 54) can be disposed over the conductor 85, surrounded by a passivation layer 86 made of polyimide (PI). Therefore, the impedance of the longest conductor 85, such as the wiring 56 between the DQ0 pad 54 of FIG. 4 and the via 55 of the DQ circuit 60 for DQ0, can be reduced, and the impedance of the shortest conductor 85, such as the wiring 56 between the DQ7 pad 54 and the via 55 of the DQ circuit 60 for DQ7, can be reduced even further. Therefore, the impedance difference between the read data paths 60a or the write data paths 60b of DQ0 to DQ7 (as shown in FIG. 5A) can be kept within an acceptable range.

Although the impedance of the conductor 85 depends on its thickness, width, and/or material, when the metal layer (metal 3) 84a and the conductor 85 are formed close to each other, it is possible to control the thickness of the conductor 85 instead of the width of the conductor 85. For example, the thickness of the conductor 85 may be a multiple (eg, at least 5 times) of the thickness of the metal layer (metal 3) 84a.
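For a rough sense of the scale involved, the resistance of a wiring such as the conductor 85 follows the usual relation for a rectangular conductor. In the worked example below, the 8 um width and 4.5 um thickness come from the description above, while the aluminum resistivity and the 1 mm length are illustrative assumptions only:

$$R = \rho\,\frac{L}{w\,t} \approx \frac{(2.8\times 10^{-8}\ \Omega\cdot\mathrm{m})(1\times 10^{-3}\ \mathrm{m})}{(8\times 10^{-6}\ \mathrm{m})(4.5\times 10^{-6}\ \mathrm{m})} \approx 0.78\ \Omega$$

Under the same assumptions, a layer one fifth as thick (comparable to the metal layer (metal 3) 84a) would have roughly five times this resistance at the same width, which is why increasing the thickness of the distributed conductive layer, rather than only its width, is an effective way to reduce the impedance of the conductor 85.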
As discussed above, since the conductor 85 can be formed to have an increased thickness, it may be desirable to form the conductor 85 of the distributed conductive layer using a fabrication machine dedicated to the iRDL forming process, independently of the machines used in the process of forming the plurality of wiring layers (including the first wiring layer 81 to the fourth wiring layer 84).

Although the present invention has been disclosed in the context of certain preferred embodiments and examples, those skilled in the art will appreciate that the invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. In addition, other modifications within the scope of the invention will be readily apparent to those skilled in the art based on this disclosure. It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the embodiments may be made and still fall within the scope of the invention. It should be understood that various features and aspects of the disclosed embodiments can be combined with, or substituted for, one another in order to form varying modes of the disclosed invention. Therefore, it is intended that the scope of the invention disclosed herein should not be limited by the particular disclosed embodiments described above, but should be determined only by a fair reading of the claims.
The invention relates to reducing the latency of a hardware trusted execution environment. Example methods and systems are directed to reducing latency in providing a trusted execution environment (TEE). Initializing the TEE includes a plurality of steps before the TEE starts execution. In addition to workload-dependent initialization, workload-independent initialization, such as adding memory to the TEE, must be performed. In a function-as-a-service (FaaS) environment, a substantial portion of TEE initialization is workload-independent and so can be executed before the workload is received. Certain steps performed during TEE initialization are the same for certain classes of workloads. Thus, a common portion of the TEE initialization sequence may be executed before the TEE is requested. When a TEE is requested for a workload in the class, a final step of initializing the TEE is performed, specializing the known portion of the TEE for its particular purpose.
1. A system for providing a Trusted Execution Environment (TEE), the system comprising:
a processor; and
a storage device coupled to the processor to store instructions that, when executed by the processor, cause the processor to:
pre-initialize a pool of TEEs, wherein the pre-initialization of each TEE in the pool of TEEs includes allocating memory of the storage device for the TEE;
receive a request for a TEE after the pre-initialization of the pool of TEEs;
select the TEE from the pool of pre-initialized TEEs; and
provide access to the selected TEE in response to the request.
2. The system of claim 1, wherein the instructions further cause the processor to:
modify the selected TEE based on information in the request before providing access to the selected TEE.
3. The system of claim 2, wherein modifying the selected TEE includes enabling the selected TEE.
4. The system of claim 2, wherein modifying the selected TEE includes copying data or code to the memory allocated for the TEE.
5. The system of claim 2, wherein modifying the selected TEE comprises:
assigning an encryption key to the selected TEE; and
encrypting the memory allocated to the TEE with the encryption key.
6. The system of claim 2, wherein modifying the selected TEE comprises:
assigning an encryption key identifier to the selected TEE; and
encrypting the memory allocated to the TEE with the encryption key corresponding to the encryption key identifier.
7. The system of claim 2, wherein modifying the selected TEE comprises:
creating a Secure Extended Page Table (EPT) branch for the selected TEE, which derives the code mapping from a template TEE.
8. The system of any of claims 1-7, wherein:
the pre-initialization of the pool of TEEs includes copying the state of a template TEE to each TEE in the pool of TEEs.
9. The system of claim 8, wherein copying the state of the template TEE to each TEE in the pool of TEEs comprises encrypting the copied state with a different ephemeral key pair for each TEE in the pool of TEEs.
10. The system of any of claims 1-7, wherein the instructions further cause the processor to:
restore the selected TEE to the state of a template TEE based on a determination that execution of the selected TEE is complete.
11. The system of any of claims 1-7, wherein the instructions further cause the processor to:
receive a request to release the selected TEE; and
return the selected TEE to the pool of TEEs in response to the request to release the selected TEE.
12. The system of any of claims 1-7, wherein the instructions further cause the processor to:
receive a precomputed hash value;
determine a hash value of a binary memory state; and
copy the binary memory state from unprotected memory to the selected TEE based on the determined hash value and the precomputed hash value.
13. The system of claim 12, wherein the instructions further cause the processor to:
assign an access-controlled key identifier to the selected TEE; and
ensure that the access-controlled key identifier is not assigned to any other TEE during the lifetime of the selected TEE.
14. A system for providing a Trusted Execution Environment (TEE), the system comprising:
a processor; and
a storage device coupled to the processor to store instructions that, when executed by the processor, cause the processor to:
pre-initialize a pool of TEEs;
create a template TEE that is stored in the storage device and marked read-only;
receive a request; and
in response to the request:
copy the template TEE to create a TEE; and
provide access to the created TEE.
15. The system of claim 14, wherein the template TEE includes initial memory content and layout for a function as a service (FaaS).
16. The system of claim 14 or 15, wherein the processor prevents execution of the template TEE.
17. A method of providing a Trusted Execution Environment (TEE), the method comprising:
encrypting, by a processor, data and code with a first encryption key;
storing the encrypted data and code in a storage device;
receiving, by the processor, a request; and
in response to the request:
assigning a second encryption key to the TEE;
decrypting the encrypted data and code using the first encryption key;
encrypting the decrypted data and code with the second encryption key; and
providing access to the TEE.
18. A method of providing a Trusted Execution Environment (TEE), the method comprising:
pre-initializing, by a processor, a pool of TEEs, the pre-initialization of each TEE in the pool of TEEs including allocating memory of a storage device for the TEE;
receiving, by the processor, a request after the pre-initialization of the pool of TEEs; and
in response to the request:
selecting, by the processor, a TEE from the pool of pre-initialized TEEs; and
providing, by the processor, access to the selected TEE.
19. The method of claim 18, further comprising:
modifying the selected TEE based on information in the request before providing access to the selected TEE.
20. The method of claim 19, wherein modifying the selected TEE includes enabling the selected TEE.
21. The method of claim 19, wherein modifying the selected TEE includes copying data or code to the memory allocated for the TEE.
22. The method of any of claims 19-21, wherein modifying the selected TEE comprises:
assigning an encryption key to the selected TEE; and
encrypting the memory allocated to the TEE with the encryption key.
23. The method of any of claims 18-21, wherein:
the pre-initialization of the pool of TEEs includes copying the state of a template TEE to each TEE in the pool of TEEs.
24. The method of any of claims 18-21, further comprising:
restoring the selected TEE to the state of a template TEE based on a determination that execution of the selected TEE is complete.
25. The method of any of claims 18-21, further comprising:
receiving a request to release the selected TEE; and
returning the selected TEE to the pool of TEEs in response to the request to release the selected TEE.
26. The method of any of claims 18-21, further comprising:
receiving a precomputed hash value;
determining a hash value of a binary memory state; and
copying the binary memory state from unprotected memory to the selected TEE based on the determined hash value and the precomputed hash value.
27. A method of providing a Trusted Execution Environment (TEE), the method comprising:
creating, by a processor, a template TEE that is stored in a storage device and marked read-only;
receiving, by the processor, a request; and
in response to the request:
copying the template TEE to create a TEE; and
providing access to the created TEE.
28. The method of claim 27, wherein the template TEE includes initial memory contents and layout for a function as a service (FaaS).
Reducing the Latency of a Hardware Trusted Execution Environment

Technical Field

The subject matter disclosed herein relates generally to hardware trusted execution environments (TEEs). In particular, the present disclosure relates to systems and methods for reducing the latency of hardware TEEs.

Background

Hardware privilege levels may be used by a processor to restrict memory access by applications running on a device. The operating system runs at a higher privilege level, has access to all of the device's memory, and defines memory ranges for other applications. Applications running at lower privilege levels are restricted to accessing memory within the ranges defined by the operating system, and cannot access the memory of other applications or of the operating system. However, applications are not protected against malicious or compromised operating systems.

A TEE is enabled by processor protections, which guarantee that code and data loaded inside the TEE are protected from being accessed by code executing outside the TEE. Thus, the TEE provides an isolated execution environment that prevents the data and code contained in the TEE from being accessed, at the hardware level, by malicious software, including the operating system.

Summary of the Invention

According to one aspect of the present disclosure, there is provided a system for providing a Trusted Execution Environment (TEE), the system comprising: a processor; and a storage device coupled with the processor to store instructions that, when executed by the processor, cause the processor to: pre-initialize a pool of TEEs, the pre-initialization of each TEE in the pool of TEEs including allocating memory of the storage device for the TEE; receive a request for a TEE after the pre-initialization of the pool of TEEs; select the TEE from the pool of pre-initialized TEEs; and provide access to the selected TEE in response to the request.

According to one aspect of the present disclosure, there is provided a system for providing a Trusted Execution Environment (TEE), the system comprising: a processor; and a storage device coupled with the processor to store instructions that, when executed by the processor, cause the processor to: pre-initialize a pool of TEEs; create a template TEE that is stored in the storage device and marked read-only; receive a request; and, in response to the request: copy the template TEE to create a TEE; and provide access to the created TEE.

According to an aspect of the present disclosure, there is provided a method of providing a Trusted Execution Environment (TEE), the method comprising: encrypting, by a processor, data and code using a first encryption key; storing the encrypted data and code in a storage device; receiving, by the processor, a request; and, in response to the request: assigning a second encryption key to the TEE; decrypting the encrypted data and code using the first encryption key; encrypting the decrypted data and code with the second encryption key; and providing access to the TEE.

According to an aspect of the present disclosure, there is provided a method of providing a Trusted Execution Environment (TEE), the method comprising: pre-initializing, by a processor, a pool of TEEs, the pre-initialization of each TEE in the pool of TEEs comprising allocating memory of a storage device for the TEE; receiving, by the processor, a request after the pre-initialization of the pool of TEEs; and, in response to the
request: selecting, by the processor, a TEE from the pool of pre-initialized TEEs; and providing, by the processor, access to the selected TEE.

According to an aspect of the present disclosure, there is provided a method of providing a Trusted Execution Environment (TEE), the method comprising: creating, by a processor, a template TEE that is stored in a storage device and marked read-only; receiving, by the processor, a request; and, in response to the request: copying the template TEE to create a TEE; and providing access to the created TEE.

Brief Description of the Drawings

Some embodiments are illustrated by way of example and not limitation in the accompanying drawings.

FIG. 1 is a network diagram illustrating a network environment suitable for providing a function-as-a-service server using a TEE, according to some example embodiments.

FIG. 2 is a block diagram of a function-as-a-service server suitable for reducing the latency of a TEE, according to some example embodiments.

FIG. 3 is a block diagram of prior art ring-based memory protection.

FIG. 4 is a block diagram of enclave-based memory protection suitable for reducing the latency of TEEs, according to some example embodiments.

FIG. 5 is a block diagram of a database schema suitable for reducing the latency of a TEE, according to some example embodiments.

FIG. 6 is a block diagram of a sequence of operations performed in building a TEE, in accordance with some example embodiments.

FIG. 7 is a flowchart illustrating operations of a method suitable for initializing a TEE and providing access to the TEE, according to some example embodiments.

FIG. 8 is a flowchart illustrating operations of a method suitable for initializing a TEE and providing access to the TEE, in accordance with some example embodiments.

FIG. 9 is a flowchart illustrating operations of a method suitable for initializing and providing access to a TEE, according to some example embodiments.

FIG. 10 is a block diagram illustrating one example of a software architecture of a computing device.

FIG. 11 is a block diagram of a machine in the example form of a computer system within which instructions may be executed to cause the machine to perform any one or more of the methods discussed herein.

Detailed Description

Example methods and systems are directed to reducing latency in providing a TEE. In the most general sense, a TEE is any trusted execution environment, regardless of how that trust is obtained. However, as used herein, a TEE is provided by executing code within a portion of memory that is protected from access by processes outside the TEE, even if those processes are running at elevated privilege levels. Example TEEs include enclaves created by Software Guard Extensions (SGX) and trust domains created by Trust Domain Extensions (TDX).

A TEE can be used to enable secure processing of confidential information by protecting it from all software other than the TEE. TEEs can also be used for modular programming, where each module contains everything necessary for its own functionality without being exposed to vulnerabilities created by other modules. For example, a successful code injection attack against one TEE cannot affect the code of another TEE.

Total memory encryption (TME) protects data in memory from being accessed by bypassing the processor. The encryption key is generated within the processor at system startup and is never stored outside the processor. The TME encryption key is an ephemeral key because it does not persist across reboots and is never stored outside the processor.
All data written to memory by the processor is encrypted using this encryption key and decrypted when it is read back from memory. Thus, hardware-based attacks that attempt to read data directly from memory without processor intermediation will fail.

Multi-key TME (MKTME) extends TME to utilize multiple keys. Individual memory pages can be encrypted using the TME's ephemeral key or using a software-provided key. Against software-based attacks, this may provide greater security than TME, since an attacker would need to identify the specific key being used by the target software, rather than having the processor automatically decrypt any memory that the attacking software has gained access to.

Initializing a TEE requires multiple steps before it can start executing, which can cause delays in applications that repeatedly create and destroy TEEs. In addition to workload-dependent initialization, workload-independent initialization, such as adding memory to an enclave, must be performed. In a function-as-a-service (FaaS) environment, a large portion of TEE initialization is workload-independent, and thus can be executed before the workload is received.

FaaS platforms provide cloud computing services that execute application logic but do not store data. In contrast to platform-as-a-service (PaaS) hosting providers, FaaS platforms do not have continuously running server processes. Therefore, an initial request to a FaaS platform may take longer to process than an equivalent request to a PaaS host, but the benefits are reduced idle time and higher scalability. As described herein, reducing the latency of processing initial requests increases the attractiveness of FaaS solutions.

Certain steps performed during enclave initialization are the same for certain classes of workloads. For example, each enclave in a class may use heap memory. Thus, common parts of the enclave initialization sequence (eg, adding heap memory) can be performed before the enclave is requested. When an enclave is requested for a workload in that class, the final steps of initializing the enclave, the parts that specialize the enclave for its specific purpose, are performed. This reduces latency compared to performing all initialization steps in response to a request for an enclave.

A TEE can be initialized ahead of time for a specific workload. This TEE is treated as a template TEE. When a TEE for this workload is requested, the template TEE is forked, and the new copy is provided as the requested TEE. Since forking an existing TEE is faster than creating a new TEE from scratch, latency is reduced.

A TEE can also be initialized in advance for a specific workload and marked read-only. This TEE is likewise treated as a template TEE. When a TEE for this workload is requested, a new TEE is created with read-only access to the template TEE. Multiple TEEs may safely access the template TEE as long as the template TEE is read-only. Latency is reduced since creating a new TEE with access to the template TEE is faster than creating, from scratch, a new TEE with all of the code and data of the template TEE.

In some example embodiments, as described herein, a FaaS image is used to create a TEE using an ephemeral key. When a TEE for the FaaS is requested, the ephemeral key is assigned an access-controlled key identifier, allowing the TEE to be provisioned quickly in response.

The methods and systems discussed herein reduce latency compared to existing methods of initializing an enclave.
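As a minimal sketch of the workload-independent/workload-dependent split described above (plain Python used purely as a model; the class and function names, the page size, and the pool size are illustrative assumptions, not the SGX API):

```python
from dataclasses import dataclass, field

PAGE_SIZE = 4096   # assumed page size
HEAP_PAGES = 64    # assumed pool-wide heap allocation

@dataclass
class Enclave:
    """Toy stand-in for a TEE; models state only, not hardware protection."""
    enclave_id: int
    heap: bytearray = field(default_factory=bytearray)  # workload-independent
    code: bytes = b""                                   # workload-dependent
    initialized: bool = False

def pre_initialize(enclave_id: int) -> Enclave:
    """Workload-independent steps (analogous to ECREATE plus EADD/EEXTEND of
    heap memory); can run before any request arrives."""
    enclave = Enclave(enclave_id)
    enclave.heap = bytearray(HEAP_PAGES * PAGE_SIZE)
    return enclave

def finalize(enclave: Enclave, payload: bytes) -> Enclave:
    """Workload-dependent steps (loading code/data plus an EINIT analogue);
    runs only once a request identifies the workload."""
    enclave.code = payload
    enclave.initialized = True
    return enclave

# The pool is built ahead of time, so only finalize() remains on the
# latency-critical request path.
pool = [pre_initialize(i) for i in range(16)]
ready = finalize(pool.pop(), payload=b"function image bytes")
```

The point of the division is visible in the last two lines: the loop that builds the pool can run at any time before a request, leaving only the short finalize step between receiving a request and providing access.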
The reduction in latency may allow additional functionality to be protected in the TEE, the use of a more fine-grained TEE, or both, increasing system security. When these effects are taken into account, one or more of the methods described herein may avoid the need for certain efforts or resources that would otherwise be involved in initializing a TEE. Computing resources used by one or more machines, databases, or networks can be similarly reduced. Examples of such computing resources include processor cycles, network traffic, memory usage, data storage capacity, power consumption, and cooling capacity.

FIG. 1 is a network diagram illustrating a network environment 100 suitable for providing a function-as-a-service server using a TEE, according to some example embodiments. Network environment 100 includes FaaS servers 110A and 110B, client devices 120A and 120B, and a network 130. The FaaS servers 110A-110B provide functions via the network 130 to the client devices 120A-120B. The client devices 120A and 120B may be devices of different tenants, such that each tenant wishes to ensure that their tenant-specific data and code are not accessible by other tenants. Thus, the FaaS servers 110A-110B can use an enclave for each FaaS provided.

To reduce the latency of providing functions, the systems and methods described herein for reducing the latency of TEE creation can be used. For example, before a client device 120 requests a function, a TEE for that function may be created in part or in whole by the FaaS server 110.

The FaaS servers 110A-110B and the client devices 120A and 120B may each be implemented in whole or in part in a computer system, as described below with respect to FIG. 11. The FaaS servers 110A and 110B may be referred to individually as a FaaS server 110, or collectively as FaaS servers 110. The client devices 120A and 120B may be referred to individually as a client device 120, or collectively as client devices 120.

Any of the machines, databases, or devices shown in FIG. 1 can be implemented in a general-purpose computer modified (eg, configured or programmed) by software as a special-purpose computer to perform the functions described herein for that machine, database, or device. For example, a computer system able to implement any one or more of the methods described herein is discussed below with respect to FIG. 11. As used herein, a "database" is a data storage resource and can store data structured as a text file, a table, a spreadsheet, a relational database (eg, an object-relational database), a triple store, a hierarchical data store, a document-oriented NoSQL database, a file store, or any suitable combination of these. The database may be an in-memory database. Furthermore, any two or more of the machines, databases, or devices shown in FIG. 1 may be combined into a single machine, database, or device, and the functions described herein for any single machine, database, or device may be subdivided and distributed across multiple machines, databases, or devices.

The FaaS servers 110 and the client devices 120 are connected by the network 130. The network 130 may be any network that enables communication between machines, databases, and devices. Thus, the network 130 may be a wired network, a wireless network (eg, a mobile or cellular network), or any suitable combination thereof.
The network 130 may include one or more portions that constitute a private network, a public network (eg, the Internet), or any suitable combination thereof.

FIG. 2 is a block diagram of the FaaS server 110A suitable for reducing the latency of TEEs, according to some example embodiments. The FaaS server 110A is shown as including a communication module 210, an application untrusted component 220, an application trusted component 230, a trust domain module 240, a reference enclave 250, a shared memory 260, and a private memory 270, all configured to communicate with each other (eg, via a bus, shared memory, or a switch). Any one or more of the modules described herein may be implemented using hardware (eg, a processor of a machine). For example, any module described herein may be implemented by a processor configured to perform the operations described herein for that module. Furthermore, any two or more of these modules may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.

The communication module 210 receives data sent to the FaaS server 110A and sends data from the FaaS server 110A. For example, the communication module 210 may receive a request from the client device 120A to perform a function. After the function is executed, the result of the function is provided by the communication module 210 to the client device 120A. Communications sent and received by the communication module 210 may be mediated by the network 130.

The untrusted component 220 executes outside of an enclave. Thus, if the operating system or other untrusted components are compromised, the untrusted component 220 is vulnerable to attack. The trusted component 230 executes within an enclave. Thus, even if the operating system or the untrusted component 220 is compromised, the data and code of the trusted component 230 remain safe.

The trust domain module 240 creates and secures enclaves and is responsible for transitioning execution between the untrusted component 220 and the trusted component 230. Signed code can be provided to the trust domain module 240, which verifies that the code has not been modified since it was signed. The signed code is loaded into a portion of physical memory marked as part of the enclave. Thereafter, hardware safeguards prevent untrusted software from accessing, modifying, or executing the enclave memory, or any suitable combination of these. The code may be encrypted using a key that is available only to the trust domain module 240.

Once the trusted component 230 is initialized, the untrusted component 220 can use special processor instructions of the trust domain module 240 to transition from untrusted mode to trusted mode and invoke functions of the trusted component 230. The trusted component 230 performs parameter validation, executes the requested function if the parameters are valid, and returns control to the untrusted component 220 via the trust domain module 240.

The trust domain module 240 may be implemented as one or more components of a hardware processor that provides SGX, TDX, Secure Encrypted Virtualization (SEV), TrustZone, or any suitable combination of these.
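The integrity check described above, verifying that signed code has not been modified before it is loaded, can be illustrated with a short sketch using Python's standard hmac and hashlib modules. This models the check with a MAC rather than the processor's actual signing scheme, and load_into_enclave is a hypothetical placeholder for the hardware load step:

```python
import hashlib
import hmac

def load_into_enclave(code_image: bytes) -> None:
    """Stand-in for copying verified code into enclave-protected memory."""
    pass

def verify_and_load(code_image: bytes, expected_mac: bytes, signing_key: bytes) -> bool:
    """Recompute a MAC over the code image and compare it, in constant time,
    against the MAC supplied with the signed code; load only on a match."""
    mac = hmac.new(signing_key, code_image, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected_mac):
        return False  # code was modified since signing; refuse to load it
    load_into_enclave(code_image)
    return True
```

The constant-time comparison matters even in this toy form: comparing digests with == can leak, through timing, how many leading bytes of a forged tag are correct.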
In SGX, attestation is the mechanism by which a third-party entity determines that a software entity is running on an SGX-enabled platform, protected within an enclave, before provisioning that software with secrets and protected data. Attestation relies on the platform's ability to generate a credential that accurately reflects the enclave's signature, which includes information about the enclave's security attributes. The SGX architecture provides mechanisms to support two forms of attestation. One mechanism creates a basic assertion between enclaves running on the same platform, supporting local or intra-platform attestation; another mechanism provides the basis for attestation between an enclave and a remote third party.

The reference enclave 250 generates attestations for an enclave (eg, the trusted component 230). The attestation is an evidence structure that uniquely identifies the attested enclave and host (eg, the FaaS server 110A), uses asymmetric encryption, and is supported by built-in processor capabilities. This attestation may be provided to the client device 120 via the communication module 210, allowing the client device 120 to confirm that the trusted component 230 has not been compromised. For example, a processor can be manufactured with a built-in private key, using hardware that prevents access to the key. Using the private key, the attestation structure can be signed by the processor, and the signature can be confirmed by the client device 120 using the corresponding public key published by the hardware manufacturer. This allows the client device 120 to ensure that the enclave on the remote device (eg, the FaaS server 110A) was actually created and has not been tampered with.

Both the untrusted component 220 and the trusted component 230 can access and modify the shared memory 260, but only the trusted component 230 can access and modify the private memory 270. Although only one untrusted component 220, one trusted component 230, and one private memory 270 are shown in FIG. 2, each application may have multiple trusted components 230, each with a corresponding private memory 270, and any number of untrusted components 220 that cannot access any private memory 270. Additionally, multiple applications can run with separate memory spaces and thus with separate shared memories 260. In this context, "shared" means that the memory is accessible by all software and hardware that has access to the memory space (eg, an application and its operating system), and not necessarily by all applications running on the system.

FIG. 3 is a block diagram 300 of prior art ring-based memory protection. Block diagram 300 includes applications 310 and 320 and an operating system 330. The operating system 330 executes processor commands in ring 0 (on x86 processors), exception level 1 (on ARM processors), or an equivalent privilege level. The applications 310-320 execute processor commands in ring 3 (on x86 processors), exception level 0 (on ARM processors), or an equivalent privilege level.

The hardware processor prevents code executing at a lower privilege level from accessing memory outside the memory ranges defined by the operating system. Thus, the code of the application 310 cannot directly access the memory of the operating system 330 or of the application 320 (as indicated by the "X" marks in FIG. 3). The operating system 330 exposes some functionality to the applications 310-320.

Since the operating system 330 has access to all memory, the applications 310 and 320 have no protection against a malicious operating system.
For example, a competitor may modify the operating system prior to running the application 310 to gain access to the code and data of the application 310, enabling reverse engineering.

Additionally, if an application were able to exploit a vulnerability in the operating system 330 and elevate itself to the operating system's privilege level, the application would be able to access all memory. For example, the application 310, which normally cannot access the memory of the application 320 (as shown by the "X" between the applications 310 and 320 in FIG. 3), would be able to access the memory of the application 320 after elevating itself to ring 0 or exception level 1. Thus, if a user is tricked into running a malicious program (eg, application 310), private data of the user or of an application provider may be accessed directly from memory (eg, bank passwords used by the application 320).

FIG. 4 is a block diagram 400 of enclave-based memory protection suitable for reducing the latency of TEEs, according to some example embodiments. Block diagram 400 includes an application 410, an enclave 420, and an operating system 430. The operating system 430 executes processor commands in ring 0 (on x86 processors), exception level 1 (on ARM processors), or an equivalent privilege level. The application 410 and the enclave 420 execute processor commands in ring 3 (on x86 processors), exception level 0 (on ARM processors), or an equivalent privilege level.

The operating system 430 allocates the memory of the enclave 420 and indicates to the processor the code and data to be loaded into the enclave 420. However, once the enclave is instantiated, the operating system 430 cannot access the memory of the enclave 420. Thus, even if the operating system 430 is malicious or compromised, the code and data of the enclave 420 remain safe.

The enclave 420 may provide functions to the application 410. The operating system 430 may control whether the application 410 is allowed to call functions of the enclave 420 (eg, by using the ECALL instruction). Thus, a malicious application might gain the ability to call functions of the enclave 420 by corrupting the operating system 430. Nonetheless, the hardware processor will prevent the malicious application from directly accessing the memory or code of the enclave 420. Thus, while the code in the enclave 420 cannot assume that a function is called correctly or by a non-attacker, the code in the enclave 420 has complete control over parameter checking and other internal security measures and is affected only by its own internal security vulnerabilities.

FIG. 5 is a block diagram of a database schema 500 suitable for reducing the latency of a TEE, according to some example embodiments. Database schema 500 includes an enclave table 510. The enclave table 510 includes rows 530A, 530B, 530C, and 530D of a format 520.

The format 520 of the enclave table 510 includes an enclave identifier field, a status field, a read-only field, and a template identifier field. Each of the rows 530A-530D stores data for a single enclave. The enclave identifier is a unique identifier for an enclave. For example, when an enclave is created, the trust domain module 240 may assign the next unused identifier to the created enclave. The status field indicates the state of the enclave, such as initializing (created but not ready for use), initialized (ready for use but not yet in use), and allocated (in use). The read-only field indicates whether the enclave is read-only. The template identifier field contains the enclave identifier of another enclave to which this enclave has read-only access.

Thus, in the example of FIG. 5, four enclaves are shown in the enclave table 510.
One of the enclaves is initializing, two are initialized, and one is allocated. Enclave 0 of row 530A is a read-only enclave and is used as a template for enclave 1 of row 530B. Thus, the processor prevents enclave 0 from being executed, but enclave 1 is able to access the data and code of enclave 0. Additional enclaves can be created that also use enclave 0 as a template, allowing multiple enclaves to access the data and code of enclave 0 without increasing the amount of memory consumed. Enclaves 1-3 of rows 530B-530D are not read-only and thus can be executed.

FIG. 6 is a block diagram 600 of a sequence of operations performed by the trust domain module 240 in building a TEE, in accordance with some example embodiments. As shown in FIG. 6, the sequence of operations includes ECREATE, EADD/EEXTEND, EINIT, EENTER, and FUNCTION START. The ECREATE operation creates the enclave. The EADD operation adds initial heap memory to the enclave. Additional memory can be added using the EEXTEND operation. The EINIT operation initializes the TEE for execution. Thereafter, the untrusted component 220 transfers execution to the TEE by requesting that the trust domain module 240 perform an EENTER operation. The trusted function of the TEE is then executed by executing code within the TEE in response to the FUNCTION START call.

As shown in FIG. 6, these operations can be divided in at least two ways. One division observes that the ECREATE, EADD/EEXTEND, and EINIT operations are performed by the host application (eg, the untrusted component 220), and that the EENTER operation transfers control to the TEE, which performs the function. Another division observes that the creation of the TEE and the allocation of heap memory for the TEE ("workload-independent operations") can be performed regardless of the specific code and data to be added to the TEE, while the initialization of the TEE and subsequent calls to TEE functions depend on the specific code and data loaded ("workload-dependent operations").

A pool of TEEs can be pre-initialized by performing workload-independent operations before a TEE is requested. As used herein, a pre-initialized TEE is a TEE for which at least one operation is initiated before an application requests the TEE. For example, a TEE can be created by an ECREATE operation before the TEE is requested. In response to receiving a request for the TEE, the workload-dependent operations are performed. Latency is reduced compared to solutions that do not perform the workload-independent operations until a request is received. In some example embodiments, the operations for pre-initialization of the TEE are performed in parallel with receiving the request for the TEE. For example, an ECREATE operation for a TEE may begin, and a request for a TEE may be received before the ECREATE operation completes. Thus, pre-initialization is not defined by completing workload-independent operations within a specific amount of time before a request for a TEE is received, but by starting workload-independent operations before a request for a TEE is received.

For a FaaS environment, each function can share a common runtime environment that is workload-independent and initialized before the workload-dependent operations are performed. The startup time of a FaaS function is an important metric in FaaS services, as a shorter startup time can make the service more resilient.

FIG. 7 is a flowchart illustrating the operations of a method 700 suitable for initializing a TEE and providing access to the TEE, according to some example embodiments.
Method 700 includes operations 710, 720, 730, and 740. By way of example and not limitation, the method 700 may be performed by the FaaS server 110A of FIG. 1, using the modules, databases, and structures shown in FIGS. 2-4.

In operation 710, the trust domain module 240 pre-initializes a pool of enclaves. For example, creating an enclave and allocating heap memory for the enclave can be performed for each enclave in the enclave pool. In some example embodiments, the enclave pool includes 16-512 enclaves (eg, 16 enclaves, 32 enclaves, or 128 enclaves).

In various example embodiments, the enclaves in the enclave pool are partially pre-initialized or fully pre-initialized. A fully pre-initialized enclave has at least one workload-dependent operation performed before the enclave is requested. A partially pre-initialized enclave has only workload-independent operations performed before the enclave is requested. Pre-initialized enclaves reduce the response time of any enclave, but they are especially valuable for short-lived ephemeral enclaves (eg, FaaS workloads), where initialization overhead dominates the overall execution time.

Pre-initialized enclaves can be created by forking or copying a template enclave. The template enclave is first created with the desired state of the pre-initialized enclaves. The template enclave is then forked or copied for each pre-initialized enclave in the pool. In some example embodiments, the template enclave itself is part of the pool. In other example embodiments, the template enclave is read-only, non-executable, and reserved for later use as a template. A template enclave may include the memory content and layout of the FaaS.

The memory of each enclave in the enclave pool may be encrypted with a key stored in the processor. The key may be an ephemeral key (eg, a TME ephemeral key) or a key with a key identifier accessible outside the processor. The trust domain module 240 or an MKTME module can generate this key and assign it to the enclave. Thus, the key itself is never exposed outside the processor. An enclave is assigned a portion of physical memory. Memory access requests originating from the enclave's physical memory are associated with the enclave's key identifier and thus with the enclave's key. The processor will not apply the enclave's key to memory accesses originating outside the enclave's physical memory. Thus, a memory access by an untrusted application or component (eg, the untrusted component 220) may receive only encrypted data or code of the enclave.

In some example embodiments, each enclave in the pool of enclaves is encrypted using a different ephemeral key (eg, an MKTME key) without a key identifier. Thereafter, when an enclave for a FaaS is requested, an access-controlled key identifier is assigned to the ephemeral key by the trust domain module 240, allowing the enclave to be provided promptly in response.

The trust domain module 240 receives a request for an enclave in operation 720. For example, the untrusted component 220 of an application may provide data identifying the enclave to the trust domain module 240 as part of the request. The data identifying the enclave may include pointers to addresses in the shared memory 260 that may be accessed by the untrusted component 220.

The request may include a precomputed hash value for the enclave and indicate a portion of the shared memory 260 (eg, the portion identified by an address and size included in the request) that contains the code and data for the enclave.
The request may include a precomputed hash value for the enclave and indicate a portion of shared memory 260 (eg, the portion identified by the address and size included in the request) that contains the code and data for the enclave. Trust domain module 240 may perform a hash function on the binary memory state (eg, the portion of shared memory 260 indicated in the request) to confirm that the hash value provided in the request matches the calculated hash value. If the hash values match, trust domain module 240 has confirmed that the indicated memory actually contains the code and data for the requested enclave, and method 700 may continue. If the hash values do not match, the trust domain module 240 may return an error value, preventing the modified memory from being loaded into the enclave.
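The hash check that guards the load of code and data into the enclave might look like the following sketch, assuming SHA-256 as the hash function (the embodiments do not fix a particular function) and using OpenSSL's digest routine for brevity (compile with -lcrypto).

    #include <openssl/sha.h>
    #include <stddef.h>
    #include <string.h>

    /* Returns 0 when the shared-memory region matches the precomputed hash
     * supplied in the request, nonzero otherwise. */
    static int verify_enclave_image(const unsigned char *shared_mem,
                                    size_t len,
                                    const unsigned char expected[SHA256_DIGEST_LENGTH]) {
        unsigned char actual[SHA256_DIGEST_LENGTH];
        SHA256(shared_mem, len, actual);   /* hash the indicated region */
        /* A constant-time comparison would be preferable in production;
         * memcmp keeps the sketch short. */
        return memcmp(actual, expected, SHA256_DIGEST_LENGTH);
    }

A nonzero return corresponds to the error path described above: the memory is not loaded into the enclave.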
In some example embodiments, the request includes an identifier of a template enclave. Trust domain module 240 creates the requested enclave with read-only permission to access the template enclave. This allows the requested enclave to read the template enclave's data and execute the template enclave's functions without modifying the template enclave's data or code. Thus, multiple enclaves can access the template enclave without conflict, and the data and code of the template enclave are stored only once (rather than once for each of the multiple enclaves). Therefore, less memory is copied during the creation of the accessing enclave, reducing latency.

In operation 730, in response to the received request, the trust domain module 240 selects an enclave from the pool of pre-initialized enclaves. Trust domain module 240 may modify the selected enclave by performing additional operations on the selected enclave based on the data identifying the enclave received with the request, such as the workload-specific operations shown in FIG. 4. The additional operations may include copying data or code from the addresses in shared memory 260 indicated in the request to the private memory 270 allocated to the enclave.

In some example embodiments, the additional operations include re-encrypting the physical memory assigned to the enclave with a new key. For example, the pre-initialization step may have encrypted the enclave's physical memory using an ephemeral key, and the enclave may be re-encrypted using a unique key for the enclave, which has a corresponding unique key identifier. In systems where key identifiers are a limited resource (eg, a fixed number of key identifiers are available), the use of ephemeral keys for pre-initialized enclaves can increase the maximum size of the enclave pool (eg, to exceed the fixed number of available key identifiers). The additional operations may also include creating a secure extended page table (EPT) branch for the selected TEE that derives a code map from the template TEE.

The trust domain module 240 provides access to the selected enclave in response to the request (operation 740). For example, a unique identifier of the initialized enclave can be returned, which can be used as a parameter of a subsequent request to the trust domain module 240 to execute a function within the enclave (eg, an EENTER command).

Thereafter, the trust domain module 240 may determine that execution of the selected enclave is complete (eg, in response to receiving an enclave exit instruction). Memory assigned to the completed enclave can be freed. Alternatively, the state of the completed enclave can be restored to a pre-initialized state, and the enclave can be returned to the pool. For example, a template enclave can be copied over the enclave, an operation that reverses a workload-dependent operation can be performed, a checkpoint of the enclave taken before the workload-dependent operations were performed can be restored after execution completes, or any suitable combination of these can be used.

Compared to prior art implementations that do not perform pre-initialization of the enclave (operation 710) prior to receiving the request for the enclave (operation 720), the delay between receiving the request and providing access (operation 740) is reduced. The reduction in latency may allow for the protection of additional functions in the enclave, the use of more fine-grained enclaves, or both, increasing system security. Furthermore, when the enclave is invoked by client device 120 over network 130, the processor cycles of client device 120 consumed while waiting for a response from FaaS server 110 are reduced, improving responsiveness and reducing power consumption.

FIG. 8 is a flow diagram illustrating the operations of a method 800 suitable for initializing a TEE and providing access to the TEE, according to some example embodiments. Method 800 includes operations 810, 820, 830, and 840. By way of example and not limitation, method 800 may be performed by FaaS server 110A of FIG. 1, using the modules, databases, and structures shown in FIGS. 2-4.

In operation 810, the trust domain module 240 creates a template enclave that is marked read-only. For example, the enclave may be completely created, with its code and data loaded into private memory 270. However, since the template enclave is read-only, the functions of the template enclave cannot be called directly from untrusted components 220. Referring to the enclave table 310 of FIG. 3, row 330A shows a read-only template enclave.

Trust domain module 240 receives a request for an enclave in operation 820. For example, the untrusted component 220 of the application may provide data identifying the enclave to the trust domain module 240 as part of the request. The data identifying the enclave may include pointers to addresses in shared memory 260 that may be accessed by untrusted components 220.

In operation 830, in response to the received request, the trust domain module 240 copies the template enclave to create the requested enclave. For example, the trust domain module 240 may determine that the data identifying the enclave indicates that the requested enclave is for the same code and data as the template enclave. This determination may be based on a signature of the enclave code and data, a message authentication code (MAC) of the enclave code and data, asymmetric encryption, or any suitable combination of these.

The trust domain module 240 provides access to the selected enclave in response to the request (operation 840). For example, a unique identifier of the initialized enclave can be returned, which can be used as a parameter of a subsequent request to the trust domain module 240 to execute a function within the enclave (eg, an EENTER command).
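A minimal sketch of operation 830 appears below, assuming the template's state can be duplicated with a plain copy into newly allocated memory; in a real system the copy would be performed by the trust domain module on protected memory, and the copy, unlike the read-only template, is marked executable.

    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        unsigned char *image;   /* code and data */
        size_t         len;
        int            read_only;
    } enclave_t;

    /* Copy the read-only template to create the requested enclave. */
    static enclave_t *copy_template(const enclave_t *template_enclave) {
        enclave_t *e = malloc(sizeof *e);
        if (!e) return NULL;
        e->image = malloc(template_enclave->len);
        if (!e->image) { free(e); return NULL; }
        memcpy(e->image, template_enclave->image, template_enclave->len);
        e->len = template_enclave->len;
        e->read_only = 0;       /* the copy, unlike the template, may execute */
        return e;
    }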
In contrast to prior art implementations that create an enclave in response to receiving the request for the enclave (operation 820), rather than copying a template enclave to create the requested enclave (operation 830), the delay between receiving the request (operation 820) and providing access (operation 840) is reduced. The reduction in latency may allow for the protection of additional functions in the enclave, the use of more fine-grained enclaves, or both, increasing system security. Furthermore, when the enclave is invoked by client device 120 over network 130, the processor cycles of client device 120 consumed while waiting for a response from FaaS server 110 are reduced, improving responsiveness and reducing power consumption.

FIG. 9 is a flowchart illustrating the operations of a method 900 suitable for initializing a TEE and providing access to the TEE, according to some example embodiments. Method 900 includes operations 910, 920, 930, and 940. By way of example and not limitation, method 900 may be performed by FaaS server 110A of FIG. 1, using the modules, databases, and structures shown in FIGS. 2-4.

In operation 910, the trust domain module 240 pre-initializes a first pool of enclaves of a first class and a second pool of enclaves of a second class. Pre-initialization is complete or partial. For full initialization, the enclaves in the pool are fully ready for use and have been loaded with the enclave's code and data. Thus, all members of the class are the same. For partial initialization, the enclaves in the pool share one or more characteristics, such as the amount of heap memory used. The enclave is initialized for the shared characteristics, but the actual code and data for the enclave are not loaded during pre-initialization. Additionally, further customizations can be performed in later steps. Therefore, different enclaves can be members of the same class when partial initialization is performed.

Trust domain module 240 receives a request for an enclave of the first class in operation 920. For example, the untrusted component 220 of the application may provide data identifying the enclave to the trust domain module 240 as part of the request. The data identifying the enclave may include pointers to addresses in shared memory 260 that may be accessed by untrusted components 220. Data identifying the class of the enclave may be included in the enclave or in the request.

In operation 930, in response to the received request, the trust domain module 240 selects an enclave from the pre-initialized pool of enclaves of the first class. Trust domain module 240 may modify the selected enclave by performing additional operations on the selected enclave based on the data identifying the enclave received with the request, such as the workload-specific operations shown in FIG. 4. The additional operations may include copying data or code from the addresses in shared memory 260 indicated in the request to the private memory 270 allocated to the enclave.

The trust domain module 240 provides access to the selected enclave in response to the request (operation 940). For example, a unique identifier of the initialized enclave can be returned, which can be used as a parameter of a subsequent request to the trust domain module 240 to execute a function within the enclave (eg, an EENTER command).

By using pools of different classes, similar but relatively low-demand enclaves can be placed into a common class for partial pre-initialization, reducing latency while only consuming resources proportional to their needs. At the same time, high-demand enclaves can be fully pre-initialized, further reducing latency.
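The two-pool selection of method 900 can be sketched as follows; the class names, pool sizes, and in-use flag are illustrative assumptions.

    #include <stddef.h>

    enum tee_class { CLASS_FIRST, CLASS_SECOND };

    typedef struct { int in_use; } class_tee_t;

    #define POOL_LEN 16
    static class_tee_t first_pool[POOL_LEN];   /* first class  (operation 910) */
    static class_tee_t second_pool[POOL_LEN];  /* second class (operation 910) */

    /* Operation 930: pick a free enclave from the pool matching the class
     * named in the request. */
    static class_tee_t *select_from_pool(enum tee_class requested) {
        class_tee_t *pool = (requested == CLASS_FIRST) ? first_pool : second_pool;
        for (int i = 0; i < POOL_LEN; i++)
            if (!pool[i].in_use) { pool[i].in_use = 1; return &pool[i]; }
        return NULL;                           /* pool of that class exhausted */
    }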
In view of the above-described implementations of the subject matter, the present application discloses the following list of examples, wherein one feature of an example in isolation, or more than one feature of an example taken in combination and, optionally, in combination with one or more features of one or more further examples, are further examples also falling within the scope of the disclosure of this application.

Example 1 is a system that provides a Trusted Execution Environment (TEE), the system comprising: a processor; and a storage device coupled to the processor to store instructions that, when executed by the processor, cause the processor to: pre-initialize a pool of TEEs, wherein the pre-initialization of each TEE in the pool of TEEs includes allocating memory of the storage device for the TEE; after the pre-initialization of the pool of TEEs, receive a request for a TEE; select the TEE from the pool of pre-initialized TEEs; and provide access to the selected TEE in response to the request.

In Example 2, the subject matter of Example 1 includes, wherein the instructions further cause the processor to: prior to providing access to the selected TEE, modify the selected TEE based on information in the request.

In Example 3, the subject matter of Example 2 includes, wherein modifying the selected TEE includes launching the selected TEE.

In Example 4, the subject matter of Examples 2-3 includes, wherein modifying the selected TEE includes copying data or code to the memory allocated for the TEE.

In Example 5, the subject matter of Examples 2-4 includes, wherein modifying the selected TEE comprises: assigning an encryption key to the selected TEE; and encrypting the memory allocated to the TEE using the encryption key.

In Example 6, the subject matter of Examples 1-5 includes, wherein the pre-initialization of the pool of TEEs includes copying a state of a template TEE to each TEE in the pool of TEEs.

In Example 7, the subject matter of Examples 1-6 includes, wherein the instructions further cause the processor to: based on a determination that execution of the selected TEE is complete, restore the selected TEE to a state of a template TEE.

In Example 8, the subject matter of Examples 1-7 includes, wherein the instructions further cause the processor to: receive a request to release the selected TEE; and in response to the request to release the selected TEE, return the selected TEE to the pool of TEEs.

In Example 9, the subject matter of Examples 1-8 includes, wherein the instructions further cause the processor to: receive a precomputed hash value; determine a hash value of a binary memory state; and based on the determined hash value and the precomputed hash value, copy the binary memory state from unprotected memory to the selected TEE.

Example 10 is a system for providing a TEE, the system comprising: a processor; and a storage device coupled to the processor to store instructions that, when executed by the processor, cause the processor to: pre-initialize a pool of TEEs; create a template TEE that is stored in the storage device and marked read-only; receive a request; and in response to the request: copy the template TEE to create a TEE; and provide access to the created TEE.

In Example 11, the subject matter of Example 10 includes, wherein the template TEE includes initial memory content and layout for a function as a service (FaaS).

In Example 12, the subject matter of Examples 10-11 includes, wherein the processor prevents execution of the template TEE.
Example 13 is a system for providing a TEE, the system comprising: a processor; and a storage device coupled to the processor to store instructions that, when executed by the processor, cause the processor to: encrypt data and code using a first encryption key; store the encrypted data and code in the storage device; receive a request; and in response to the request: assign a second encryption key to the TEE; decrypt the encrypted data and code using the first encryption key; encrypt the decrypted data and code using the second encryption key; and provide access to the TEE.

Example 14 is a method of providing a TEE, the method comprising: pre-initializing, by a processor, a pool of TEEs, the pre-initialization of each TEE in the pool of TEEs including allocating memory of a storage device for the TEE; after the pre-initialization of the pool of TEEs, receiving, by the processor, a request; and in response to the request: selecting, by the processor, a TEE from the pool of pre-initialized TEEs; and providing, by the processor, access to the selected TEE.

In Example 15, the subject matter of Example 14 includes, prior to providing access to the selected TEE, modifying the selected TEE based on information in the request.

In Example 16, the subject matter of Example 15 includes, wherein modifying the selected TEE includes launching the selected TEE.

In Example 17, the subject matter of Examples 15-16 includes, wherein modifying the selected TEE includes copying data or code to the memory allocated for the TEE.

In Example 18, the subject matter of Examples 15-17 includes, wherein modifying the selected TEE comprises: assigning an encryption key to the selected TEE; and encrypting the memory allocated to the TEE using the encryption key.

In Example 19, the subject matter of Examples 14-18 includes, wherein the pre-initialization of the pool of TEEs includes copying a state of a template TEE to each TEE in the pool of TEEs.

In Example 20, the subject matter of Examples 14-19 includes, restoring the selected TEE to a state of a template TEE based on a determination that execution of the selected TEE is complete.

In Example 21, the subject matter of Examples 14-20 includes, receiving a request to release the selected TEE; and in response to the request to release the selected TEE, returning the selected TEE to the pool of TEEs.

In Example 22, the subject matter of Examples 14-21 includes, receiving a precomputed hash value; determining a hash value of a binary memory state; and based on the determined hash value and the precomputed hash value, copying the binary memory state from unprotected memory to the selected TEE.

Example 23 is a method of providing a Trusted Execution Environment (TEE), the method comprising: creating, by a processor, a template TEE stored in a storage device and marked read-only; receiving, by the processor, a request; and in response to the request: copying the template TEE to create a TEE; and providing access to the created TEE.

In Example 24, the subject matter of Example 23 includes, wherein the template TEE includes initial memory content and layout for a function as a service (FaaS).

In Example 25, the subject matter of Examples 23-24 includes, wherein the processor prevents execution of the template TEE.

Example 26 is a method of providing a TEE, the method comprising: encrypting, by a processor, data and code using a first encryption key; storing the encrypted data and code in a storage device; receiving, by the processor, a request; and in response to the request: assigning a second encryption key to the TEE; decrypting the encrypted data and code using the first encryption key; encrypting the decrypted data and code using the second encryption key; and providing access to the TEE.
Example 27 is a non-transitory computer-readable medium having instructions for causing a processor to provide a TEE by performing operations comprising: pre-initializing a pool of TEEs, the pre-initialization of each TEE in the pool of TEEs including allocating memory of a storage device for the TEE; receiving a request for a TEE after the pre-initialization of the pool of TEEs; selecting the TEE from the pool of pre-initialized TEEs; and providing access to the selected TEE in response to the request.

In Example 28, the subject matter of Example 27 includes, wherein the operations further comprise modifying the selected TEE based on information in the request before providing access to the selected TEE.

In Example 29, the subject matter of Example 28 includes, wherein modifying the selected TEE includes launching the selected TEE.

In Example 30, the subject matter of Examples 28-29 includes, wherein modifying the selected TEE includes copying data or code to the memory allocated for the TEE.

In Example 31, the subject matter of Examples 28-30 includes, wherein modifying the selected TEE comprises: assigning an encryption key to the selected TEE; and encrypting the memory allocated to the TEE using the encryption key.

In Example 32, the subject matter of Examples 27-31 includes, wherein the pre-initialization of the pool of TEEs includes copying a state of a template TEE to each TEE in the pool of TEEs.

In Example 33, the subject matter of Examples 27-32 includes, wherein the operations further comprise restoring the selected TEE to a state of a template TEE based on a determination that execution of the selected TEE is complete.

In Example 34, the subject matter of Examples 27-33 includes, wherein the operations further comprise: receiving a request to release the selected TEE; and in response to the request to release the selected TEE, returning the selected TEE to the pool of TEEs.

In Example 35, the subject matter of Examples 27-34 includes, wherein the operations further comprise: receiving a precomputed hash value; determining a hash value of a binary memory state; and based on the determined hash value and the precomputed hash value, copying the binary memory state from unprotected memory to the selected TEE.

Example 36 is a non-transitory computer-readable medium having instructions for causing a processor to provide a TEE by performing operations comprising: creating a template TEE stored in a storage device and marked read-only; receiving a request; and in response to the request: copying the template TEE to create a TEE; and providing access to the created TEE.

In Example 37, the subject matter of Example 36 includes, wherein the template TEE includes initial memory content and layout for a function as a service (FaaS).

In Example 38, the subject matter of Examples 36-37 includes, wherein the operations further comprise preventing execution of the template TEE.

Example 39 is a non-transitory computer-readable medium having instructions for causing a processor to provide a TEE by performing operations comprising: encrypting data and code using a first encryption key; storing the encrypted data and code in a storage device; receiving a request; and in response to the request: assigning a second encryption key to the TEE; decrypting the encrypted data and code using the first encryption key; encrypting the decrypted data and code using the second encryption key; and providing access to the TEE.
Example 40 is a system for providing a TEE, the system comprising: storage means; and processing means for: pre-initializing a pool of TEEs, the pre-initialization of each TEE in the pool of TEEs including allocating memory of the storage means for the TEE; receiving a request for a TEE; selecting the TEE from the pool of pre-initialized TEEs; and providing access to the selected TEE in response to the request.

In Example 41, the subject matter of Example 40 includes, wherein the processing means is further for modifying the selected TEE based on information in the request before providing access to the selected TEE.

In Example 42, the subject matter of Example 41 includes, wherein modifying the selected TEE includes launching the selected TEE.

In Example 43, the subject matter of Examples 41-42 includes, wherein modifying the selected TEE includes copying data or code to the memory allocated for the TEE.

In Example 44, the subject matter of Examples 41-43 includes, wherein modifying the selected TEE comprises: assigning an encryption key to the selected TEE; and encrypting the memory allocated to the TEE using the encryption key.

In Example 45, the subject matter of Examples 40-44 includes, wherein the pre-initialization of the pool of TEEs includes copying a state of a template TEE to each TEE in the pool of TEEs.

In Example 46, the subject matter of Examples 40-45 includes, wherein the processing means is further configured to restore the selected TEE to a state of a template TEE based on a determination that execution of the selected TEE is complete.

In Example 47, the subject matter of Examples 40-46 includes, wherein the processing means is further configured to: receive a request to release the selected TEE; and in response to the request to release the selected TEE, return the selected TEE to the pool of TEEs.

In Example 48, the subject matter of Examples 40-47 includes, wherein the processing means is further for: receiving a precomputed hash value; determining a hash value of a binary memory state; and based on the determined hash value and the precomputed hash value, copying the binary memory state from unprotected memory to the selected TEE.

Example 49 is a system for providing a TEE, the system comprising: storage means; and processing means for: creating a template TEE stored in the storage means and marked read-only; receiving a request; and in response to the request: copying the template TEE to create a TEE; and providing access to the TEE.

In Example 50, the subject matter of Example 49 includes, wherein the template TEE includes initial memory content and layout for a function as a service (FaaS).

In Example 51, the subject matter of Examples 49-50 includes, wherein the processing means prevents execution of the template TEE.

Example 52 is a system for providing a TEE, the system comprising: a storage device; and a processing device for: encrypting data and code using a first encryption key; storing the encrypted data and code in the storage device; receiving a request; and in response to the request: assigning a second encryption key to the TEE; decrypting the encrypted data and code using the first encryption key; encrypting the decrypted data and code using the second encryption key; and providing access to the TEE.

Example 53 is at least one machine-readable medium comprising instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-52.

Example 54 is an apparatus comprising means for implementing any of Examples 1-52.

Example 55 is a system for implementing any of Examples 1-52.

Example 56 is a method for implementing any of Examples 1-52.

FIG. 10 is a block diagram 1000 illustrating one example of a software architecture 1002 of a computing device.
Architecture 1002 may be used in conjunction with various hardware architectures such as those described herein. FIG. 10 is only a non-limiting example of a software architecture, and many other architectures can be implemented to facilitate the functionality described herein. A representative hardware layer 1004 is illustrated and may represent, for example, any of the computing devices mentioned above. In some examples, the hardware layer 1004 may be implemented according to the architecture of the computer system of FIG. 11.

The representative hardware layer 1004 includes one or more processing units 1006 with associated executable instructions 1008. The executable instructions 1008 represent the executable instructions of the software architecture 1002, including implementations of the methods, modules, subsystems, and components described herein. The hardware layer 1004 may also include memory and/or storage modules 1010, which also have the executable instructions 1008. The hardware layer 1004 may additionally include other hardware, as indicated by other hardware 1012, which may represent any other hardware of the hardware layer 1004, such as the other hardware illustrated as part of the software architecture 1002.

In the example architecture of FIG. 10, the software architecture 1002 may be conceptualized as a stack of layers, where each layer provides specific functionality. For example, the software architecture 1002 may include layers such as an operating system 1014, libraries 1016, frameworks/middleware 1018, applications 1020, and a presentation layer 1044. Operationally, the applications 1020 and/or other components within the layers may invoke application programming interface (API) calls 1024 through the software stack and receive a response, returned values, and so forth, illustrated as messages 1026, in response to the API calls 1024. The layers illustrated are representative in nature, and not all software architectures have all layers. For example, some mobile or special-purpose operating systems may not provide a frameworks/middleware layer 1018, while others may provide such a layer. Other software architectures may include additional or different layers.

The operating system 1014 may manage hardware resources and provide common services. The operating system 1014 may include, for example, a kernel 1028, services 1030, and drivers 1032. The kernel 1028 may act as an abstraction layer between the hardware and the other software layers. For example, the kernel 1028 may be responsible for memory management, processor management (eg, scheduling), component management, networking, security settings, and so on. The services 1030 may provide other common services for the other software layers. In some examples, the services 1030 include interrupt services. The interrupt services may detect the receipt of an interrupt and, in response, cause the architecture 1002 to pause its current processing and execute an interrupt service routine (ISR) when the interrupt is received.

The drivers 1032 may be responsible for controlling or interfacing with the underlying hardware. For example, depending on the hardware configuration, the drivers 1032 may include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (eg, Universal Serial Bus (USB) drivers), Wi-Fi® drivers, NFC drivers, audio drivers, power management drivers, and so forth.

The libraries 1016 may provide a common infrastructure that may be utilized by the applications 1020 and/or other components and/or layers.
The libraries 1016 typically provide functionality that allows other software modules to perform tasks in an easier fashion than by interfacing directly with the underlying operating system 1014 functionality (eg, kernel 1028, services 1030, and/or drivers 1032). The libraries 1016 may include system libraries 1034 (eg, the C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 1016 may include API libraries 1036, such as media libraries (eg, libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG), graphics libraries (eg, an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (eg, SQLite, which may provide various relational database functions), web libraries (eg, WebKit, which may provide web browsing functionality), and the like. The libraries 1016 may also include a wide variety of other libraries 1038 to provide many other APIs to the applications 1020 and other software components/modules.

The frameworks/middleware 1018 may provide a higher-level common infrastructure that may be utilized by the applications 1020 and/or other software components/modules. For example, the frameworks/middleware 1018 may provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware 1018 may provide a broad spectrum of other APIs that may be utilized by the applications 1020 and/or other software components/modules, some of which may be specific to a particular operating system or platform.

The applications 1020 include built-in applications 1040 and/or third-party applications 1042. Examples of representative built-in applications 1040 may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. The third-party applications 1042 may include any of the built-in applications 1040 as well as a broad assortment of other applications. In a specific example, a third-party application 1042 (eg, an application developed using the Android™ or iOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as iOS™, Android™, Windows® Phone, or another mobile operating system. In this example, the third-party application 1042 may invoke the API calls 1024 provided by the mobile operating system, such as the operating system 1014, to facilitate the functionality described herein.

The applications 1020 may utilize built-in operating system functions (eg, kernel 1028, services 1030, and/or drivers 1032), libraries (eg, system libraries 1034, API libraries 1036, and other libraries 1038), and frameworks/middleware 1018 to create user interfaces to interact with users of the system. Alternatively, or in addition, in some systems, interactions with a user may occur through a presentation layer (eg, presentation layer 1044). In these systems, the application/module "logic" may be separated from the aspects of the application/module that interact with the user.

Some software architectures utilize virtual machines. In the example of FIG. 10, this is illustrated by a virtual machine 1048.
A virtual machine creates a software environment in which applications/modules can execute as if they were executing on a hardware computing device. The virtual machine is hosted by a host operating system (operating system 1014) and typically, although not always, has a virtual machine monitor 1046, which manages the operation of the virtual machine 1048 as well as its interaction with the host operating system (ie, operating system 1014). A software architecture executes within the virtual machine 1048, such as an operating system 1050, libraries 1052, frameworks/middleware 1054, applications 1056, and/or a presentation layer 1058. These layers of the software architecture executing within the virtual machine 1048 may be the same as the corresponding layers previously described or may be different.

Modules, Components and Logic

Certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. A module may constitute either a software module (eg, (1) code embodied on a non-transitory machine-readable medium or (2) in a transmission signal) or a hardware-implemented module. A hardware-implemented module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (eg, a standalone, client, or server computer system) or one or more processors may be configured by software (eg, an application or application portion) as a hardware-implemented module that operates to perform certain operations as described herein.

In various embodiments, a hardware-implemented module may be implemented mechanically or electronically. For example, a hardware-implemented module may comprise dedicated circuitry or logic that is permanently configured to perform certain operations (eg, as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)). A hardware-implemented module may also comprise programmable logic or circuitry (eg, as encompassed within a general-purpose processor or another programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware-implemented module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (eg, configured by software) may be driven by cost and time considerations.

Accordingly, the term "hardware-implemented module" should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (eg, hardwired), or temporarily or transitorily configured (eg, programmed) to operate in a certain manner and/or to perform certain operations described herein. Considering embodiments in which hardware-implemented modules are temporarily configured (eg, programmed), each of the hardware-implemented modules need not be configured or instantiated at any one instance in time. For example, where the hardware-implemented modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware-implemented modules at different times.
Software may accordingly configure the processor to constitute a particular hardware-implemented module at one instance of time and to constitute a different hardware-implemented module at a different instance of time.

Hardware-implemented modules can provide information to, and receive information from, other hardware-implemented modules. Accordingly, the described hardware-implemented modules may be regarded as being communicatively coupled. Where multiple such hardware-implemented modules exist contemporaneously, communications may be achieved through signal transmission (eg, over appropriate circuits and buses that connect the hardware-implemented modules). In embodiments in which multiple hardware-implemented modules are configured or instantiated at different times, communications between such hardware-implemented modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware-implemented modules have access. For example, one hardware-implemented module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware-implemented module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware-implemented modules may also initiate communications with input or output devices and may operate on a resource (eg, a collection of information).

The various operations of the example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (eg, by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.

Similarly, the methods described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (eg, within a home environment, an office environment, or a server farm), while in other embodiments the processors may be distributed across a number of locations.

The one or more processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as "software as a service" (SaaS). For example, at least some of the operations may be performed by a group of computers (as an example of machines including processors), with these operations being accessible via a network (eg, the Internet) and via one or more appropriate interfaces (eg, APIs).

Electronic Devices and Systems

Example embodiments may be implemented in digital electronic circuitry, or in computer hardware, firmware, or software, or in combinations of them.
Example embodiments may be implemented using a computer program product, eg, a computer program tangibly embodied in an information carrier (eg, in a machine-readable medium), for execution by, or to control the operation of, data processing apparatus, eg, a programmable processor, a computer, or multiple computers.

A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a module, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site, or distributed across multiple sites and interconnected by a communication network.

In example embodiments, operations may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method operations can also be performed by, and apparatus of example embodiments can be implemented as, special purpose logic circuitry (eg, an FPGA or an ASIC).

A computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In embodiments deploying a programmable computing system, it will be appreciated that both hardware and software architectures merit consideration. Specifically, it will be appreciated that the choice of whether to implement certain functionality in permanently configured hardware (eg, an ASIC), in temporarily configured hardware (eg, a combination of software and a programmable processor), or in a combination of permanently and temporarily configured hardware may be a design choice. Below are set out hardware (eg, machine) and software architectures that may be deployed in various example embodiments.

Example Machine Architecture and Machine-Readable Media

FIG. 11 is a block diagram of a machine in the example form of a computer system 1100 within which instructions 1124 may be executed for causing the machine to perform any one or more of the methodologies discussed herein. In alternative embodiments, the machine operates as a standalone device or may be connected (eg, networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a network router, switch, or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 1100 includes a processor 1102 (eg, a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 1104, and a static memory 1106, which communicate with each other via a bus 1108.
The computer system 1100 may further include a video display unit 1110 (eg, a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 1100 also includes an alphanumeric input device 1112 (eg, a keyboard or a touch-sensitive display screen), a user interface (UI) navigation (or cursor control) device 1114 (eg, a mouse), a storage unit 1116, a signal generation device 1118 (eg, a speaker), and a network interface device 1120.

Machine-Readable Medium

The storage unit 1116 includes a machine-readable medium 1122 on which is stored one or more sets of data structures and instructions 1124 (eg, software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 1124 may also reside, completely or at least partially, within the main memory 1104 and/or within the processor 1102 during execution thereof by the computer system 1100, with the main memory 1104 and the processor 1102 also constituting machine-readable media 1122.

While the machine-readable medium 1122 is shown in an example embodiment to be a single medium, the term "machine-readable medium" may include a single medium or multiple media (eg, a centralized or distributed database, and/or associated caches and servers). The term "machine-readable medium" shall be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions 1124 for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions 1124. The term "machine-readable medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media 1122 include non-volatile memory, including by way of example semiconductor memory devices, eg, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and compact disc read-only memory (CD-ROM) and digital versatile disc read-only memory (DVD-ROM) disks. A machine-readable medium is not a transmission medium.

Transmission Medium

The instructions 1124 may further be transmitted or received over a communications network 1126 using a transmission medium. The instructions 1124 may be transmitted using the network interface device 1120 and any one of a number of well-known transfer protocols (eg, hypertext transport protocol (HTTP)). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (eg, WiFi and WiMax networks). The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions 1124 for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.

Although specific example embodiments have been described herein, it will be understood that various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the present disclosure.
Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof show, by way of illustration and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This "Detailed Description" section, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.

Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term "invention" merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.

Some portions of the subject matter discussed herein may be presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a machine memory (eg, a computer memory). Such algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an "algorithm" is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as "data," "content," "bits," "values," "elements," "symbols," "characters," "terms," "numbers," "numerals," or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.

Unless specifically stated otherwise, discussions herein using words such as "processing," "computing," "calculating," "determining," "presenting," "displaying," or the like may refer to actions or processes of a machine (eg, a computer) that manipulates or transforms data represented as physical (eg, electronic, magnetic, or optical) quantities within one or more memories (eg, volatile memory, non-volatile memory, or any suitable combination thereof), registers, or other machine components that receive, store, transmit, or display information.
In addition, unless specifically stated otherwise, the terms "a" or "an" are used herein, as is common in patent documents, to include one or more than one instance. Finally, as used herein, the conjunction "or" refers to a non-exclusive "or," unless specifically stated otherwise.
A method, apparatus, system, and medium for reading data from a tiled memory. In some embodiments a method may include, for one tiled-X cache read request, requesting two cache lines from the tiled memory without fragmenting the tiled-X cache read request, and returning data associated with the two requested cache lines.
WHAT IS CLAIMED IS:

1. A method for reading data from a tiled memory, comprising: for one tiled-X cache read request, requesting two cache lines from the tiled memory without fragmenting the tiled-X cache read request; and returning data associated with the two requested cache lines.

2. The method of claim 1, further comprising: allocating the two cache lines in parallel; maintaining coherency of the two cache lines in parallel; and reading data associated with the two cache lines from a data cache in parallel.

3. The method of claim 2, wherein the allocating comprises: checking, in parallel, an address for the tiled-X read request against two cache tag Random Access Memories (RAMs) to determine whether the tiled-X read request address is a hit or a miss relative to each of the two cache tag RAMs; in an instance the tiled-X read request address is a miss for one of the two cache tag RAMs, writing a tag for the tiled-X read request address in the missed one of the two cache tag RAMs; and in an instance the tiled-X read request address is a miss in both of the two cache tag RAMs, writing a tag for the tiled-X read request address in both of the two cache tag RAMs.

4. The method of claim 3, wherein the two cache tag RAMs comprise a first bank and a second bank of memory, respectively.

5. The method of claim 1, further comprising providing an indication of a full hit, full miss, and a partial hit regarding the two requested cache lines.

6. The method of claim 2, wherein the maintaining coherency comprises an in-use check of pending cache read requests.

7. The method of claim 2, wherein the maintaining coherency is performed by two read latency first in, first out buffers (FIFOs).

8. The method of claim 2, wherein the reading data comprises reading cache entries for each of the two cache lines from the data cache at the same time, wherein the data cache includes two banks of memory associated therewith.

9. An apparatus for reading data from a tiled memory comprising: a host processor; a graphics engine coupled with the host processor; and a tiled memory coupled to the graphics engine, wherein the graphics engine is operative to: request two cache lines from the tiled memory for one tiled-X cache read request, without fragmenting the tiled-X cache read request; and return data associated with the two requested cache lines.

10. The apparatus of claim 9, further comprising: a cache tag random access memory (RAM) having a first bank and a second bank to allocate the two cache lines in parallel; a pair of first in, first out buffers (FIFOs) to maintain coherency of the two cache lines in parallel; and a cache data RAM having a first bank and a second bank to read data associated with the two cache lines therefrom in parallel.

11. The apparatus of claim 9, wherein an indication of a full hit, full miss, and a partial hit regarding the two requested cache lines is provided.
12. The apparatus of claim 10, wherein the graphics engine is operative to: check, in parallel, the tiled-X read request address against the two cache tag RAMs to determine whether the tiled-X read request address is a hit or a miss relative to each of the two cache tag RAMs; in an instance the tiled-X read request address is a miss for one of the two cache tag RAMs, write a tag for the tiled-X read request address in the missed one of the two cache tag RAMs; and in an instance the tiled-X read request address is a miss in both of the two cache tag RAMs, write a tag for the tiled-X read request address in both of the two cache tag RAMs.

13. A storage medium having executable programming instructions stored thereon, the stored program instructions comprising: instructions to request two cache lines from the tiled memory for one tiled-X cache read request, without fragmenting the tiled-X cache read request; and instructions to return data associated with the two requested cache lines.

14. The medium of claim 13, further comprising: instructions to allocate the two cache lines in parallel; instructions to maintain coherency of the two cache lines in parallel; and instructions to read data associated with the two cache lines from a data cache in parallel.

15. The medium of claim 14, wherein the instructions to allocate comprise: instructions to check, in parallel, the tiled-X read request address against two cache tag Random Access Memories (RAMs) to determine whether the tiled-X read request address is a hit or a miss relative to each of the two cache tag RAMs; in an instance the tiled-X read request address is a miss for one of the two cache tag RAMs, instructions to write a tag for the tiled-X read request address in the missed one of the two cache tag RAMs; and in an instance the tiled-X read request address is a miss in both of the two cache tag RAMs, instructions to write a tag for the tiled-X read request address in both of the two cache tag RAMs.

16. The medium of claim 13, further comprising instructions to provide an indication of a full hit, full miss, and a partial hit regarding the two requested cache lines.

17. The medium of claim 14, wherein the instructions to maintain coherency comprise an in-use check of pending cache read requests.

18. The medium of claim 14, wherein the instructions to read data comprise instructions to read cache entries for each of the two cache lines from the data cache at the same time, wherein the data cache has two banks of memory associated therewith.

19. A system for reading data from a tiled memory comprising: a host processor; a graphics engine coupled with the host processor; a tiled memory coupled to the graphics engine, wherein the graphics engine is operative to: for one tiled-X cache read request, request two cache lines from the tiled memory without fragmenting the tiled-X cache read request; and return data associated with the two requested cache lines; and a double data rate memory coupled to the host processor.

20. The system of claim 19, wherein the graphics engine and the tiled memory are co-located on a common printed circuit board (PCB).
21. The system of claim 19, wherein the graphics engine is operative to: check, in parallel, the tiled-X read request address against the two cache tag RAMs to determine whether the tiled-X read request address is a hit or a miss relative to each of the two cache tag RAMs; in an instance the tiled-X read request address is a miss for one of the two cache tag RAMs, write a tag for the tiled-X read request address in the missed one of the two cache tag RAMs; and in an instance the tiled-X read request address is a miss in both of the two cache tag RAMs, write a tag for the tiled-X read request address in both of the two cache tag RAMs.
METHOD AND SYSTEM FOR SYMMETRIC ALLOCATION FOR A SHARED L2 MAPPING CACHE

BACKGROUND OF THE INVENTION

Cache memory systems may be used to generally improve memory access speeds in computer or other electronic systems. Increasing cache size and speed may tend to improve system performance. However, increased cache size and speed may be costly and/or limited by available cache technology. Additionally, there may be a desire to balance overall system performance gains with overall system costs.

Different types of mapping methods may be used in a cache memory system, such as direct mapping, fully associative mapping, and set-associative mapping. For a set-associative mapping system, the cache memory is divided into a number of "sets" where each set contains a number of "ways" or cache lines. Within each set, searching for an address is fully associative. There may be n locations or ways in each set. For example, in a 4-way, set-associative cache memory, an address at the data source may be mapped to any one of 4 ways 0, 1, 2, or 3 of a given set, depending on availability. For an 8-way, set-associative cache memory, the address at the data source may be mapped to one of 8 ways or locations within a given set.

Memory management is crucial to graphics processing and manipulating the large amounts of data encountered therewith. As processing requirements increase for graphics, including 3-D (three dimensional) texturing, various aspects of memory allocation and mapping have been considered for improvement to increase graphics processing. In some instances, a memory for graphics data may be organized in tiles. Tile organized memory may allow faster access of graphics data as compared to linearly organized memory.

In some instances, the systems and methods for tile organized memory mapping may be directed to optimizing processing of "y-major" or "tiled-Y" read request operations. In a tiled-Y tiling scheme, two contiguous data structures in the Y-direction are consecutive in memory. Also, Y-major tiling may be an efficient method of organizing memory for graphics texture applications. Some graphics engines and graphics systems load texture data from a processor into cache memory in the Y-direction. Accordingly, a graphics engine or system may be configured or optimized to read graphics data (e.g., texture data) in the Y-direction. Such optimization may allow tiled-Y read requests to be processed in one clock cycle. However, optimization for one type of operation may have undesired results regarding other operations.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example in the following drawings in which like references indicate similar elements. The following drawings disclose various embodiments of the present invention for purposes of illustration only and are not intended to limit the scope of the invention.

FIG. 1 is an illustrative depiction of a computer system, in accordance with some embodiments herein;

FIG. 2 is an exemplary flow diagram, in accordance with some embodiments herein;

FIG. 3 is an exemplary flow diagram, in accordance with some embodiments herein;

FIG. 4 illustrates an exemplary functional implementation, according to some embodiments herein; and

FIG. 5 is an exemplary illustration of some aspects of tiled cache memory management, in accordance with some embodiments herein.

DETAILED DESCRIPTION

The following detailed description sets forth numerous specific details to provide a thorough understanding of the invention. However, those of ordinary skill in the art will appreciate that the invention may be practiced without these specific details. In other instances, well-known methods, procedures, protocols, components, and circuits have not been described in detail so as not to obscure the invention.
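As a point of reference for the set-associative organization described in the background above, the following sketch computes the set index and tag for an address; the 4-way, 128-set, 64B-line geometry is an illustrative assumption chosen to match the 64B-line example used later in this description.

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_BYTES 64
    #define NUM_SETS   128
    #define NUM_WAYS   4   /* the tag is matched against all NUM_WAYS
                              ways of the selected set in parallel */

    /* The set index selects one set of the cache. */
    static unsigned set_index(uint32_t addr) {
        return (addr / LINE_BYTES) % NUM_SETS;
    }

    /* The remaining address bits form the tag stored in the tag RAM. */
    static uint32_t tag_bits(uint32_t addr) {
        return (addr / LINE_BYTES) / NUM_SETS;
    }

    int main(void) {
        uint32_t addr = 0x12345;
        printf("addr 0x%x -> set %u, tag 0x%x\n",
               addr, set_index(addr), tag_bits(addr));
        return 0;
    }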
However, those of ordinary skill in the art will appreciate that the invention may be practiced without these specific details. In other instances, well-known methods, procedures, protocols, components, and circuits have not been described in detail so as not to obscure the invention.

Embodiments herein may be implemented in hardware or software, or a combination of both. Some embodiments may be implemented as a computer program executing on programmable systems comprising at least one processor, a data storage system (including volatile and nonvolatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input data to perform the functions described herein and generate output information. The output information may be applied to one or more output devices. For purposes herein, a processing system may include any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The instructions and programs may be stored on a storage media or device (e.g., hard disk drive, floppy disk drive, read only memory (ROM), CD-ROM device, flash memory device, digital versatile disk (DVD), or other storage device) readable by a general or special purpose programmable processing system, for configuring and operating the processing system when the storage media or device is read by the processing system to perform the operations described herein. Some embodiments may also be considered to be implemented as a machine-readable storage medium, configured for use with a processing system, where the storage medium so configured causes the processing system to operate in a specific and predefined manner to perform the functions described herein.

In some embodiments, a graphics processing system may include memory organized in tiles. Such a memory organization may offer fast and efficient memory accesses since, for example, a display screen may be efficiently divided into rectangular regions or tiles. Tiling may allow a graphics engine to access memory faster without causing an excessive number of page misses in the memory subsystem (i.e., it improves hits). Graphics memory may be tiled using a number of different formats, such as X-major tiling and Y-major tiling. In X-major tiling, or tiled-X format, two contiguous data structures (e.g., quadwords) in the X-direction are consecutive in physical memory. In Y-major tiling, two contiguous data structures (e.g., quadwords) in the Y-direction are consecutive in memory. Those skilled in the art will appreciate that a graphics engine, processor, system, or subsystem may operate on a square region of graphics texture, a texel.

For a cache read request to a tile-organized memory in the X-direction (i.e., a tiled-X format read request), some memory systems may fragment the cache read request into two read requests because the requested data may encompass cache lines that are not contiguous in the memory. For example, a cache read request for 32B (bytes) may correspond to two cache lines in the cache, since the cache may be accessed only in whole cache lines. In the event a cache line is 64B, the two cache lines together span 128B. (A brief sketch of the two tiling layouts appears below.)
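To make the distinction between the two tiling formats concrete, the following Python sketch computes the linear byte offset of a location within a single tile for X-major and Y-major layouts. The tile dimensions and the 8-byte quadword column width for tiled-Y are illustrative assumptions, not values specified by this document.

# Illustrative sketch: byte offsets within one tile for X-major and Y-major
# tiling. All tile dimensions below are assumed values for illustration.

TILE_X_WIDTH_B = 512   # assumed tiled-X row width in bytes
QUAD_B = 8             # assumed quadword (column) width for tiled-Y
TILE_Y_HEIGHT = 32     # assumed tiled-Y column height in rows

def tiled_x_offset(x_byte: int, y: int) -> int:
    """X-major: structures adjacent in the X-direction are consecutive."""
    return y * TILE_X_WIDTH_B + x_byte

def tiled_y_offset(x_byte: int, y: int) -> int:
    """Y-major: structures adjacent in the Y-direction are consecutive."""
    column, within = divmod(x_byte, QUAD_B)
    return column * QUAD_B * TILE_Y_HEIGHT + y * QUAD_B + within

# Vertically adjacent quadwords: 8B apart in tiled-Y, a full row apart in tiled-X.
assert tiled_y_offset(0, 1) - tiled_y_offset(0, 0) == QUAD_B
assert tiled_x_offset(0, 1) - tiled_x_offset(0, 0) == TILE_X_WIDTH_B
# A 32B span along X is contiguous in tiled-X but widely spaced in tiled-Y.
print([tiled_y_offset(x, 0) for x in range(0, 32, 8)])  # [0, 256, 512, 768]

Under these assumptions, a read that walks in the X-direction through Y-major-organized data touches widely spaced offsets, illustrating how a single small request can straddle more than one cache line.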
Fragmenting the cache read request into two cache read requests to read the two cache lines, however, introduces additional latency, since the two cache read requests are processed sequentially.

In some environments, back-to-back (i.e., consecutive) tiled-Y read requests may be symmetrically pipelined through a cache memory at 32 bytes per clock cycle, whereas back-to-back tiled-X read requests may each be fragmented into two read cache requests. An additional clock of latency may thus be introduced into the cache's pipeline for each tiled-X read request. As noted above, a 32B cache read request may correspond to two 64B cache lines, or 128B in total, since the cache may be accessed only in whole cache lines. A tiled-X read request that is fragmented into two cache read requests may result in 32B of data being returned every two clock cycles, instead of 32B of data being returned every clock cycle as provided for a (non-fragmented) tiled-Y read request. There may be a desire to process tiled-X read requests as efficiently as tiled-Y read requests since, for example, graphics overlay and display use the tiled-X format.

Embodiments herein may provide for selectively reading data from a tiled memory with one cache read request for two cache lines of data, without fragmenting the cache read request. For example, one cache read request that requests two cache lines from memory may read the two cache lines in parallel. In this manner, the two cache lines may be read at the same time. This may allow a reduction or elimination of the latency associated with the cache read request since, in accordance herewith, the one cache read request is not fragmented into two read requests, one for each desired cache line.

FIG. 1 illustrates a functional block diagram of an embodiment 100 of an exemplary computer system including a graphics processor, system, subsystem, or, generally, graphics engine 120 embodying some of the embodiments herein. System 100 may generally include a processing unit such as CPU (central processing unit) 105, a main memory controller 110, a main memory 115, an input/output controller 135, a Level 2 (L2) texture cache 120, a 3-D graphics engine 125, a 2-D graphics engine 130, a display 145, and a variety of input devices 140. 2-D graphics engine 130 may determine the graphical information that is to be sent to display 145, based on inputs from CPU 105 and data in main memory 115 and L2 texture cache 120. CPU 105 may access data stored on disk, networks, or CD-ROM, etc., programs booted at start-up, and user inputs from the input devices. CPU 105 may determine a data stream (e.g., a read request) sent to graphics engine 120.

Memory 115 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or another memory device. Memory 115 may store instructions and/or data represented by data signals that may be executed by CPU 105, 3-D graphics engine 125, 2-D graphics engine 130, or some other device. The instructions and/or data may comprise code for performing any and/or all of the techniques of the present invention. Memory 115 may also contain software and/or data. L2 texture cache 120 may be used to speed up memory accesses by the 3-D graphics engine 125 by taking advantage of its locality of access.
Those skilled in the art will appreciate that L2 cache memory 120 may reside internal or external to CPU 105 or graphics engines 125, 130. FIG. 1 is for illustrative purposes only. For example, various embodiments of the methods and systems herein may be used in integrated, discrete, or other graphics configurations.

FIG. 2 is an exemplary flow diagram of a method or process 200 in accordance with some embodiments herein. In some embodiments herein, a cache read request may request two cache lines from the memory. The request for two cache lines may result from the organization of the memory. For example, it may be the case that the memory may be accessed only in cache-line-sized units. As an example, a cache read request for 32B may invoke read requests of two 64B cache lines, where a desired 16B of the cache read request is located in each cache line. Since the memory system uses cache lines, a request for 128B may be made to memory.

At operation 205, for one cache read request to a tiled cache memory, two cache lines are requested from the memory. It is noted that, in accordance with embodiments herein, the one cache read request is not fragmented into two discrete cache read requests, one for each cache line requested.

At operation 210, data associated with the two cache lines is returned to the requestor. The requestor may be a CPU, another processor, or another device.

Process 200, by not fragmenting the one cache read request for two cache lines into two cache read requests, may be used to avoid, reduce, or eliminate the need to fragment cache memory read requests. The avoidance of fragmentation of the cache read request may improve the efficiency and performance of the memory system by not introducing the additional latencies attributable to a fragmentation operation. In some embodiments, the cache read request is a tiled-X format read request.

FIG. 3 is an exemplary flow diagram of a method or process 300, in accordance with some embodiments herein. Operations 305 and 325 of process 300 may be substantially similar to operations 205 and 210 of FIG. 2, respectively. Accordingly, an understanding of operations 305 and 325 may be readily had by referring to the discussion of operations 205 and 210 hereinabove.

In accordance with some embodiments herein, operations 310, 315, and 320 of process 300 illustrate some mechanisms that facilitate the reading of data from a tiled memory wherein one cache read request may be used to request two cache lines from the memory.

At operation 310, the two cache lines corresponding to or associated with the one cache read request may be allocated in parallel. That is, a determination of whether the cache read request hits or misses either of the two cache lines associated therewith, and of which cache lines to replace or use, may be done for both cache lines at the same time (i.e., in parallel). Gains in efficiency may be realized due, at least in part, to operation 310 since the cache lines may be allocated in parallel, as opposed to sequentially.

At operation 315, the coherency of the two requested cache line entries may be maintained in parallel. In this manner, the checking of the two cache line entries to protect against or avoid over-writing a cache line due to, for example, a miss may be accomplished on both cache lines at the same time (i.e., in parallel), rather than sequentially.
For example, a determination of a miss on either of the cache line entries may cause a stall until the matching in-flight cache entry is complete.

At operation 320, data associated with the two cache lines may be read to fulfill the cache read request in parallel. The parallel data read may be accomplished using multiple cache data memory devices, or a cache data device organized to accommodate parallel reads therefrom.

Operations 310, 315, and 320 may combine to effectuate an effective and efficient read of two cache lines for the one cache read request. The aspects of parallel allocation, coherency maintenance, and data reads for the two cache lines, together with the avoidance of fragmentation of the cache read request, may allow, provide, or otherwise contribute to a reduction in latency in a graphics system.

FIG. 4 is an exemplary implementation of a pipeline, device, or system for some embodiments herein, generally referenced by numeral 400. A tiled-X read request address may be received, provided, or otherwise obtained by system 400. In an instance a read request address is provided as a linear memory address or otherwise not provided as a tiled memory address, module 405 may provide a mechanism to translate the source memory address into a tiled-X format memory address. Functionally, module 405 may operate to fence, fragment, tile, and perform other functions on the source memory address to obtain a tiled-X memory address that may be further processed and mapped by the tile-organized memory system herein.

At module 410, the tiled-X read request address is checked against two cache tag random access memories (RAMs) in parallel to determine whether the tiled-X read request address is located in the cache or is to be retrieved from a system or main memory. For the desired read request, a check is done to determine whether or not the memory location is in the fast cache (e.g., Level 2, L2, cache). A comparison of the tiled-X read request address to all of the tags in the cache that might contain the address is performed. The tag of the tiled-X read request address is read by device 415 and compared against each of the two banks (Bank0 and Bank1) of cache tag RAM 435 by tag compare device 425. In an instance the memory location is in the tag cache, a cache hit is said to have occurred. In the instance of the cache hit, the processor may read the data indicated by the tiled-X read request address.

In the instance the memory location is not in the tag cache, a cache miss is said to have occurred. In this case, module 410 may allocate a new tag entry for the memory address just missed. Device 430 may be used to write the tag to cache tag RAM 435. The allocation may include a tag for the just-missed memory address and a copy of the data from the just-missed memory address.

In some embodiments, a 2-bit hit indicator may be used to indicate whether a hit or miss occurs for the tiled-X read request address regarding cache tag RAM 435. Table 1 below is an exemplary listing of the results that may be produced by the comparison of module 410. As shown, the 2-bit hit indicator is sufficient to encode all possible outcomes of the cache tag RAM comparison (the bit-to-bank assignment shown is illustrative).

TABLE 1 — RESULTS FROM BOTH TAG CAMs
HIT[1:0] = 11: hit to both 64-byte cache entries
HIT[1:0] = 10: hit to the Bank1 cache entry only
HIT[1:0] = 01: hit to the Bank0 cache entry only
HIT[1:0] = 00: miss to both 64-byte cache entries

In the event there is a miss in either one of the two banks of 64B cache tag RAM, the tiled-X read request address is written to one of the 8 ways for a given set in the missed bank of cache tag RAM 435. (A sketch of this parallel tag check and allocation follows.)
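The following Python sketch models the parallel check of one tiled-X address against two tag banks and the tag allocation on a miss, per module 410 and Table 1. The set count, the HIT[1:0] bit-to-bank assignment, and the use of random replacement as a stand-in for the LRU policy described below are assumptions for illustration only.

# Illustrative sketch (not the patented implementation): checking one
# tiled-X request against two cache tag banks in parallel and allocating
# tags into whichever banks missed.
import random

WAYS = 8  # 8-way set-associative tag banks, per the text

class TagBank:
    def __init__(self, num_sets: int):
        self.sets = [[None] * WAYS for _ in range(num_sets)]

    def lookup(self, set_idx: int, tag: int) -> bool:
        return tag in self.sets[set_idx]

    def allocate(self, set_idx: int, tag: int) -> None:
        # Random stand-in for the LRU replacement policy of the text.
        self.sets[set_idx][random.randrange(WAYS)] = tag

def check_and_allocate(banks, set_idx, tag):
    """Return HIT[1:0]; write the tag into every bank that missed."""
    hits = [b.lookup(set_idx, tag) for b in banks]  # done in parallel in hardware
    for bank, hit in zip(banks, hits):
        if not hit:
            bank.allocate(set_idx, tag)
    return (hits[1] << 1) | hits[0]  # assumed bit assignment: bit 0 = Bank0

banks = [TagBank(num_sets=64), TagBank(num_sets=64)]
print(check_and_allocate(banks, set_idx=3, tag=0xABC))  # 0: miss to both banks
print(check_and_allocate(banks, set_idx=3, tag=0xABC))  # 3: hit to both banks

Note how a miss to both banks leaves the same tag resident in both, so the subsequent lookup hits in both banks in a single pass.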
This assumes each bank of cache tag RAM is organized as an 8-way set-associative tag; other configurations of the banks of the cache tag RAM may be used. In this manner, both banks of cache tag RAM, for a given set, will have the same tiled-X read request address (e.g., tag) in one of their ways for the given set.

In the event there is a miss in both of the two banks of 64B cache tag RAM 435, the tiled-X read request address is written to one of the 8 ways for the same set in both banks of cache tag RAM 435. In this manner, both banks of cache tag RAM will have the same tiled-X read request address (e.g., tag) in the same one of their ways for the same set in both banks.

LRU RAM device 420 may be used to select the way, within a given set, to which the tags are written in the banks of cache tag RAM 435. The replacement policy for determining where to write the tags may be governed by a least recently used (LRU) algorithm. It should be appreciated that other replacement policies may be used (e.g., random, etc.).

The tag RAM comparison of module 410, using the two banks of cache tag RAM 435, may be performed in parallel. That is, the two banks of cache tag RAM may be checked in parallel to gain the increases herein, including an improvement in hit performance. This aspect of pipeline 400 may correspond to operation 310 of process 300, FIG. 3.

Pipeline 400 proceeds to forward the tiled-X read request address to an in-use check device or mechanism 440. In-use check device 440 may be provided to protect against using, or attempting to use, a cache line that is currently in use by other processes.

At 445, it is seen that two FIFOs may be written to simultaneously. In this manner, the two requested cache lines need not be processed sequentially, which would add latency to pipeline 400. The two memory read latency FIFOs 445 are checked regarding a missed tiled-X cache entry request and an adjacent entry (i.e., two cache lines). That is, the missed tiled-X cache entry (e.g., 64B) request and a second, adjacent entry (e.g., 64B) are checked using the latency FIFOs. In an instance it is determined that there is a cache entry match in either of the two latency FIFOs, the missed tiled-X cache entry (i.e., the first cache entry) is stalled until the matched cache entry in either of the latency FIFOs is complete. In this manner, data coherency may be maintained; the missed tiled-X cache entry is stalled until the matched cache entry in either of the latency FIFOs can completely obtain the needed data. This aspect of pipeline 400 may correspond to operation 315 of process 300, FIG. 3.

Referring to the MISS path of pipeline 400, in the event the tiled-X cache read request is a miss to both of the 64B cache entries, the tiled-X cache read request is divided into two memory miss requests by X-tile fragment device or mechanism 450, where the address of the second of the two memory miss requests is the first incremented by 64B. In the event there is only a miss to the second of the 64B cache entries, the single miss request address may be incremented by 64B. (A sketch of this miss-address split follows.)
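A minimal sketch of the miss-address split just described, assuming the 64B cache line size used in the examples above; the align-down step is an illustrative assumption.

# Illustrative sketch: deriving the 64B-aligned miss addresses for a
# 32B tiled-X request that spans two adjacent cache lines.
LINE_B = 64  # cache line size assumed from the examples in the text

def miss_addresses(request_addr: int, miss_first: bool, miss_second: bool):
    """Return the list of 64B-aligned line addresses to fetch from memory."""
    first_line = request_addr & ~(LINE_B - 1)   # align down to a line boundary
    second_line = first_line + LINE_B           # adjacent line, +64B
    addrs = []
    if miss_first:
        addrs.append(first_line)
    if miss_second:
        addrs.append(second_line)
    return addrs

# A miss to both entries yields two miss requests, the second incremented by 64B:
print([hex(a) for a in miss_addresses(0x1010, True, True)])   # ['0x1000', '0x1040']
# A miss to only the second entry yields one request at +64B:
print([hex(a) for a in miss_addresses(0x1010, False, True)])  # ['0x1040']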
Using the two memory miss cache entry requests, module 455 operates to convert the virtual addresses of the miss requests to physical addresses mapped to system or main memory (MM). Module 455 provides an exemplary implementation, along the miss path, of how the missed tiled-X cache entries may be fragmented and retrieved from physical memory using translation lookaside buffers (TLBs) and other address translation mechanisms. The translation from the virtual tiled-X cache entry request address to the physical address in main memory may be accomplished by module 455 using a number of virtual-to-physical address translation mechanisms, including TLBs of cached mappings of the system's main memory, a TLB request FIFO, a TLB data buffer, and a page gathering FIFO (PGF). Module 455 may retrieve the requested address locations, MM req., as shown.

Module 470 of pipeline 400 provides a mechanism to write the requested address data obtained by module 455 from main memory to a cache data RAM 475. The MM write of module 470 corresponds to the MM req. of module 455. Cache data RAM 475 is organized, partitioned, or otherwise configured and managed to have data read therefrom in parallel. For example, cache data RAM 475 may have two banks of RAM, a first Bank0 and a second Bank1. Such a configuration and management of cache data RAM 475 facilitates or provides a mechanism for the data associated with the two cache lines of the cache read request to be read in parallel. Accordingly, read device 480 effectively reads the data associated with both requested cache lines in parallel. This aspect of pipeline 400 may relate to operation 320 of FIG. 3.

Pipeline 400 proceeds to assemble the requested data at 485 and further decompress the requested read for use by the CPU, display, or other devices or requestors.

FIG. 5 is an exemplary illustration of some aspects of tiled cache memory management, in accordance with some embodiments herein. For example, FIG. 5 provides a table 500 of various aspects of a cache read request in accordance with various aspects and embodiments herein. Aspects of cache line type and allocation are shown in column 510, the cache line request from memory is illustrated in column 515, exemplary cache writing is depicted in column 520, and illustrative cache reading is shown in column 525. As illustrated in column 510, read requests for two 16B blocks (32B total) are allocated consecutively or interleaved, for X-tiled texture mode and X-tiled frame mode, respectively. At column 515, two cache lines 1a and 1b, each 64B, are requested from memory. At column 520, a depiction of Bank0 and Bank1 cache writing is illustrated; it is noted that the banks are flipped so that Bank0 and Bank1 may be written to in parallel (see the sketch following this discussion). At column 525, exemplary cache read formats are provided for illustrative purposes.

It should be appreciated by those skilled in the art that the format of the various address bits used or referenced in table 500 is provided for illustrative purposes only. That is, different sizes of tiles, cache lines, allocation bits, referenced address bits, flip bits, etc. may be used without departing from the scope of some embodiments herein.
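The "flip" mentioned above can be pictured as follows. This Python fragment is a speculative sketch, not the patented circuit: it assumes the bank for a 64B line is chosen by the line-index parity XORed with a flip bit taken from a higher address bit, so the two adjacent lines of one request always land in different banks and can be written in parallel.

# Speculative sketch of a flipped bank select; bit positions are assumptions.
LINE_B = 64

def bank_select(line_addr: int) -> int:
    line_parity = (line_addr >> 6) & 1   # which line of a 128B pair
    flip = (line_addr >> 7) & 1          # assumed flip bit from a higher address bit
    return line_parity ^ flip

first = 0x1000
second = first + LINE_B
print(bank_select(first), bank_select(second))   # 0 1 -> different banks
print(bank_select(0x1080), bank_select(0x10C0))  # 1 0 -> flipped, still different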
In some embodiments, instructions that, when executed by a machine, perform methods discussed in conjunction with some of the embodiments herein may be embodied in a medium or an article of manufacture. The article of manufacture may include a CD-ROM, fixed or removable storage mechanisms, random access memory (RAM), read only memory (ROM), flash memory, and other data storage and data delivery mechanisms.

The foregoing disclosure has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope set forth in the appended claims.
A method for estimating a state associated with a process includes receiving a state observation associated with the process. The state observation has an associated process time. A weighting factor to discount the state observation is generated based on the process time. A state estimate is generated based on the discounted state observation. A system includes a process tool, a metrology tool, and a process controller. The process tool is operable to perform a process in accordance with an operating recipe. The metrology tool is operable to generate a state observation associated with the process. The process controller is operable to receive the state observation, the state observation having an associated process time, generate a weighting factor to discount the state observation based on the process time, generate a state estimate based on the discounted state observation, and determine at least one parameter of the operating recipe based on the state estimate.
1. A method for estimating a state associated with a process, comprising the steps of:
receiving a state observation associated with the process, the state observation having an associated process time;
generating a weighting factor to discount the state observation based on the process time; and
generating a state estimate based on the discounted state observation.

2. The method of claim 1, wherein the weighting factor comprises an exponential weighting factor defined by ω = e^(−(t − tp)/τ), wherein tp is the process time, t is the current time, and τ is a predetermined time constant.

3. The method of claim 1, wherein generating the state estimate further comprises generating the state estimate using a time-discounted exponentially weighted moving average.

4. The method of claim 1, wherein generating the state estimate further comprises generating the state estimate using a recursive time-discounted exponentially weighted moving average.

5. The method of claim 4, wherein generating the state estimate further comprises:
discounting the weighting factor from a previous state update as a function of the elapsed time since the previous state update;
generating an updated state estimate as a function of the discounted state observation, the discounted weighting factor, and the previous state estimate; and
updating the weighting factor based on the discounted weighting factor and the weighting factor used to discount the state observation.

6. The method of claim 1, wherein the state observation comprises a first state observation, the process time comprises a first process time, and the method further comprises:
receiving a second state observation associated with the process, the second state observation having an associated second process time earlier than the first process time;
discounting the second state observation according to the second process time; and
updating the state estimate based on the discounted second state observation.

7. The method of claim 1, further comprising:
receiving a plurality of state observations associated with the process, each state observation having an associated process time;
discounting each state observation as a function of its associated process time; and
generating the state estimate based on the discounted state observations.

8. The method of claim 7, wherein generating the state estimate further comprises generating an updated state estimate in response to receiving each state observation, and wherein generating the updated state estimate further comprises:
generating a new weighting factor for the updated state estimate as a function of the process time associated with the state observation being incorporated into the updated state estimate;
discounting the weighting factor associated with the previously updated state estimate as a function of the elapsed time since the previously updated state estimate was generated; and
generating the updated state estimate as a function of the new weighting factor, the state observation being incorporated into the updated state estimate, the discounted weighting factor associated with the previously updated state estimate, and the previously updated state estimate.

9. The method of claim 8, further comprising generating a removal weighting factor for a selected one of the state observations as a function of its associated process time, and removing the selected state observation from the state estimate using the selected state observation and the removal weighting factor.
10. The method of claim 1, further comprising controlling the process based on the state estimate.

11. The method of claim 10, wherein the process comprises a semiconductor fabrication process, controlling the process further comprises determining at least one parameter of an operating recipe for the semiconductor fabrication process, and the method further comprises processing a semiconductor device after determining the at least one parameter of the operating recipe.

12. A system (100), comprising:
a process tool (120, 140) operable to perform a process in accordance with an operating recipe;
a metrology tool (130) operable to generate a state observation associated with the process; and
a process controller (150) operable to receive the state observation, the state observation having an associated process time, generate a weighting factor to discount the state observation based on the process time, generate a state estimate based on the discounted state observation, and determine at least one parameter of the operating recipe based on the state estimate.
TIME-WEIGHTED MOVING AVERAGE FILTER

TECHNICAL FIELD

The present invention relates generally to manufacturing and, more particularly, to the use of time-weighted moving average filters.

BACKGROUND

There is a continuing drive in the semiconductor industry to improve the quality, reliability, and yield of integrated circuit devices such as microprocessors, memory devices, and the like. Consumer demand for higher quality computers and electronic devices that operate more reliably is driving this effort. These demands have led to continuous improvements in the fabrication of semiconductor devices, such as transistors, and in the fabrication of integrated circuit devices incorporating such transistors. In addition, reducing the defects in the components used to fabricate typical transistors also lowers the overall cost per transistor and the cost of integrated circuit devices incorporating such transistors.

In general, a set of processing steps is performed on a group of wafers (sometimes referred to as a "lot") using a variety of process tools, including photolithography steppers, etch tools, deposition tools, polishing tools, rapid thermal processing tools, implantation tools, and the like. The technologies underlying semiconductor processing tools have attracted increased attention in recent years, resulting in substantial refinements. However, despite the advances made in this area, many of the process tools that are currently commercially available suffer certain deficiencies. In particular, such tools often lack advanced process data monitoring capabilities, such as the ability to provide historical parametric data in a user-friendly format, as well as event logging, real-time graphical display of both current process parameters and the process parameters over the entire run, and remote (i.e., local site and worldwide) monitoring. These deficiencies can engender non-optimal control of critical processing parameters, such as yield, accuracy, stability and repeatability, processing temperatures, machine tool parameters, and the like. This variability manifests itself as within-run disparities, run-to-run disparities, and tool-to-tool disparities that can propagate into deviations in product quality and performance, whereas an ideal monitoring and diagnostics system for such tools would provide a means of monitoring this variability, as well as optimal control of critical parameters.

One technique for improving the operation of a semiconductor processing line includes using a factory-wide control system to automatically control the operation of the various process tools. The manufacturing tools communicate with a manufacturing framework or a network of processing modules. Each manufacturing tool is generally connected to an equipment interface. The equipment interface is connected to a machine interface that facilitates communication between the manufacturing tool and the manufacturing framework. The machine interface can generally be part of an advanced process control (APC) system. The APC system initiates a control script based upon a manufacturing model, which can be a software program that automatically retrieves the data needed to execute a manufacturing process.
Frequently, semiconductor devices are staged through a plurality of manufacturing tools for a plurality of processes, generating data relating to the quality of the processed semiconductor devices.

During the fabrication process, various events may take place that affect the performance of the devices being fabricated. That is, variations in the fabrication process steps result in device performance variations. Factors such as feature critical dimensions, doping levels, contact resistance, particle contamination, etc., all potentially affect the end performance of the device. Various tools in the processing line are controlled in accordance with performance models to reduce processing variation. Commonly controlled tools include photolithography steppers, polishing tools, etching tools, and deposition tools. Pre-processing and/or post-processing metrology data is supplied to process controllers for the tools. Operating recipe parameters, such as processing time, are calculated by the process controllers based on the performance model and the metrology data in an attempt to achieve post-processing results as close to a target value as possible. Reducing variation in this manner leads to increased yield, reduced cost, higher device performance, etc., all of which equate to increased profitability. Metrology data collected before, during, or after the processing of a wafer or lot of wafers may be used to generate feedback and/or feedforward information, which may be used to determine control actions for a preceding processing tool (i.e., feedback), a subsequent processing tool (i.e., feedforward), or both.

Typically, the controller uses the feedback or feedforward metrology information to adjust the operating recipe of the controlled tool. Control actions are typically generated using a control model that tracks one or more process state variables associated with the manufacturing process. For example, a controller may adjust photolithography recipe parameters to control the critical dimension (CD) of the devices being fabricated. Likewise, a controller may control an etch tool to affect trench depth or spacer width characteristics.

To provide stability, a controller generally does not generate its control actions based solely on the most recently observed value of the process state variable. Rather, previous measurements are typically factored in using an exponentially weighted moving average (EWMA) filter, which outputs an estimate of the process state based on the current and previous values. The EWMA filter is a weighted average that is influenced more strongly by the more recent state values.

EWMA filters have been used in the semiconductor industry to estimate process states for many years. The general equation for the EWMA filter is:

x̂ = (Σi ωi·xi) / (Σi ωi),    (1)

where the weighting factor, ωi = (1 − λ)^i, discounts the older measurements, and λ is a tuning parameter affecting the degree of discounting (i.e., 0 < λ < 1).

Because, with a large number of measurements, the oldest measurements make a negligible contribution to the filtered value, a recursive EWMA filter can be used:

x̂k = λ·xk + (1 − λ)·x̂k−1.    (2)

The recursive EWMA filter tracks only the previous process state estimate and updates the estimate with newly received data in accordance with the weighting factor, λ. (For contrast with the time-weighted filters developed below, a short sketch of this conventional recursive filter follows.)
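A minimal Python sketch of the conventional recursive EWMA of Equation (2); the tuning parameter value is an arbitrary assumption.

def ewma_update(prev_estimate: float, observation: float, lam: float = 0.3) -> float:
    """Equation (2): x_hat_k = lambda * x_k + (1 - lambda) * x_hat_{k-1}."""
    return lam * observation + (1.0 - lam) * prev_estimate

estimate = 100.0  # initial state estimate
for obs in (103.0, 98.0, 101.0):
    estimate = ewma_update(estimate, obs)
print(round(estimate, 3))  # ~100.321; each sample is discounted by count, not elapsed time

Note that the filter discounts by arrival order only, which is precisely the behavior the limitations below turn on.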
The EWMA filter has several limitations when used in a semiconductor environment. In a semiconductor manufacturing environment, separate processes are performed on individual wafers or groups of wafers (i.e., lots). The metrology data used to determine the state of the process is collected for these individual, discrete events. Metrology resources are shared to collect data of different types and for wafers completing processing at different stages. Hence, the metrology data collected in association with a given process state is not received by the controller at a fixed update interval.

Moreover, because of the number of independent process tools and metrology tools, the metrology data does not necessarily arrive in sequential time order. In other words, lots are not always measured in the order in which they were processed. A recursive EWMA filter does not account for out-of-order processing, as it assumes that the wafers are processed sequentially: as samples are received, they are used to generate a new process state estimate. Moreover, once a sample has been incorporated into the recursive EWMA state estimate, it cannot be removed. The EWMA filter also does not account for significant time gaps between sequential runs; it discounts the data equally whether there is a large time gap or a relatively short gap between runs, and a tool may drift during a large time gap in which the EWMA filter receives no process state updates. Finally, the EWMA filter provides no quality measure for the process state estimate: an EWMA state estimate based on old and sparse data is treated the same as one based on new and plentiful data.

This section of this document is intended to introduce the reader to various aspects of art that may be related to various aspects of the present invention described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art. The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.

SUMMARY OF THE INVENTION

The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an exhaustive overview of the invention. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.

One aspect of the present invention is seen in a method for estimating a state associated with a process. The method includes receiving a state observation associated with the process. The state observation has an associated process time. A weighting factor to discount the state observation is generated based on the process time. A state estimate is generated based on the discounted state observation.

Another aspect of the present invention is seen in a system including a process tool, a metrology tool, and a process controller. The process tool is operable to perform a process in accordance with an operating recipe. The metrology tool is operable to generate a state observation associated with the process.
The process controller is operable to receive the state observation, the state observation having an associated process time, generate a weighting factor to discount the state observation based on the process time, generate a state estimate based on the discounted state observation, and determine at least one parameter of the operating recipe based on the state estimate.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which like reference numerals identify like elements, and in which:

FIG. 1 is a simplified block diagram of a processing line, in accordance with one illustrative embodiment of the present invention;
FIG. 2 is a diagram illustrating the weighting applied to a particular observation as a function of the age of the observation;
FIGS. 3 and 4 illustrate the weighting applied to exemplary new observations; and
FIG. 5 is a simplified flow diagram of a method for performing a recursive time-weighted EWMA, in accordance with another illustrative embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown and described in detail. It should be understood, however, that the description of specific embodiments is not intended to limit the invention to the particular forms disclosed; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

DESCRIPTION OF REFERENCE NUMERALS

100 Processing line          110 Wafer
120 First process tool       130 Metrology tool
140 Second process tool      150 Process controller
160 Control model            500, 510, 520, 530 Method steps

DETAILED DESCRIPTION

One or more specific embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification; the scope of the invention is defined by the appended claims. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. Nothing in this specification is considered critical or essential unless it is explicitly indicated as being "critical" or "essential."

The invention will now be described with reference to the attached figures. Various structures, systems, and devices are schematically depicted in the drawings for purposes of explanation and illustration. The attached drawings are nevertheless included to describe and explain illustrative examples of the present invention. The words and phrases used herein should be understood and interpreted to have a meaning consistent with the understanding of those words and phrases by those skilled in the relevant art. No special definition of a term or phrase (i.e., a definition that is different from the ordinary and customary meaning as understood by those skilled in the art) is intended to be implied by consistent usage of the term or phrase herein.
To the extent that a term or phrase is intended to have a special meaning (i.e., a meaning other than that understood by skilled artisans), such a special definition will be expressly set forth in the specification in a definitional manner that directly and unequivocally provides the special definition for the term or phrase.

Referring now to the drawings, wherein like reference numbers correspond to similar components throughout the several views, and specifically referring to FIG. 1, the present invention shall be described in the context of an illustrative processing line 100 for processing wafers 110. The processing line 100 includes a first process tool 120, a metrology tool 130, a second process tool 140, and a process controller 150. The process controller 150 receives metrology data from the metrology tool 130 and adjusts one or both of the process tools 120, 140 to reduce variation in the characteristics of the processed wafers 110. Of course, a separate process controller 150 may be provided for each tool 120, 140. The particular control actions taken by the process controller 150 depend on the particular processes performed by the process tools 120, 140 and the output characteristics measured by the metrology tool 130. In the illustrated embodiment, the process controller 150 uses a time-discounted, exponentially weighted filtering technique to estimate the process states associated with the wafers 110. In general, metrology data collected by the metrology tool 130 provides information for updating the state estimate used by the process controller 150, and the state estimate is used by the process controller 150 to adjust the operating recipe of the controlled tools 120, 140.

Although the invention is described as it may be applied to semiconductor manufacturing equipment, it is not so limited and may be applied to other manufacturing environments, and indeed to virtually any EWMA application. The techniques described herein may be applied to a variety of workpieces including, but not limited to, microprocessors, memory devices, digital signal processors, application specific integrated circuits (ASICs), or other similar devices. The techniques may also be applied to workpieces other than semiconductor devices.

The process controller 150 may use a control model 160 of the controlled process tool 120, 140 to generate its control actions. The control model 160 may be developed empirically using commonly known linear or non-linear techniques. The control model 160 may be a relatively simple equation-based model, or a more complex model, such as a neural network model, a principal component analysis (PCA) model, or a partial least squares/projection to latent structures (PLS) model. The specific implementation of the control model 160 may vary depending on the modeling technique selected and the process being controlled. Using the control model 160, the process controller 150 may determine operating recipe parameters to reduce variation in the characteristics of the wafers 110 being processed.

In the illustrated embodiment, the process controller 150 is a computer programmed with software to implement the functions described. However, as will be appreciated by those of ordinary skill in the art, a hardware controller designed to implement the particular functions may also be used. Moreover, the functions performed by the process controller 150, as described herein, may be performed by multiple controller devices distributed throughout a system.
Additionally, the process controller 150 may be a stand-alone controller, it may be resident on the process tools 120, 140, or it may be part of a system controlling operations in an integrated circuit manufacturing facility. Portions of the invention and the corresponding detailed description are presented in terms of software, or algorithms and symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means by which those skilled in the art effectively convey the substance of their work to others skilled in the art. An algorithm, as the term is used here and as it is used generally, is conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of optical, electrical, or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as is apparent from the discussion, terms such as "processing," "computing," "calculating," "determining," or "displaying," and the like, refer to the actions and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.

In general, the process controller 150 employs a time-weighting technique for generating process state estimates. The process controller 150 addresses out-of-order processing by discounting process state observations based on their processing time (i.e., the time at which the wafer or lot was processed in the tool 120, 140). Moreover, in a recursive application, the time-weighting technique also discounts based on the time elapsed since the last state update, thereby allowing the process controller 150 to address gaps between observations.

The general equation for the time-weighted EWMA filter (t-EWMA) is:

ŷ = (Σi e^(−ti/τ)·yi) / (Σi e^(−ti/τ)),    (3)

where τ represents a predetermined time constant associated with the process being monitored, which determines how quickly previous state observations are discounted, and ti is the time elapsed since the wafer/lot was processed in the tool 120, 140 (i.e., ti = t − tp). FIG. 2 shows the weighting applied to a particular observation as a function of the age of the observation (i.e., the time elapsed since processing).

The equations for the recursive Rt-EWMA filter are:

ŷk = (ωnew·ynew + ω′k,old·ŷk−1) / (ωnew + ω′k,old),    (4)

where

ω′k,old = ωk,old·e^(−Δt/τ)   (Δt = time since the last state update)    (5)

and

ωnew = e^(−(t − tp)/τ).    (6)

Referring now to FIG. 3, consider a metrology-delayed observation with an observation age of about 1.4. Because of the time delay, the process state observation is discounted, such that a weighting factor of approximately 0.25 results in accordance with Equation 6. FIG. 4 shows a new observation with no metrology delay; because the process state observation is current, its weighting factor is 1.0. In both cases, the previous weighting factor is discounted over time according to Equation 5 to account for the time since the last state update. (A numerical illustration of these weighting factors follows.)
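The weighting factors illustrated in FIGS. 3 and 4 can be checked numerically. The sketch below evaluates Equation 6 for the two cases just described, assuming τ = 1.0 (the time constant is not specified in the text and is an assumption here).

import math

TAU = 1.0  # assumed time constant; Equation 6 uses age = t - tp

def weight(age: float, tau: float = TAU) -> float:
    """Equation (6): omega_new = exp(-(t - tp) / tau)."""
    return math.exp(-age / tau)

print(round(weight(1.4), 2))  # delayed observation, age ~1.4 -> ~0.25 (FIG. 3)
print(round(weight(0.0), 2))  # current observation, no delay -> 1.0 (FIG. 4)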
Turning now to FIG. 5, a simplified flow diagram of a method for performing the Rt-EWMA is shown. To implement the Rt-EWMA, only the previous weight, ωk,old, the previous state estimate, ŷk−1, and the time stamp of the previous state update need to be stored. Consider an update of the stored Rt-EWMA parameters given a new observation, ynew, the time stamp of its processing operation, tp, and the current time stamp.

In method step 500, the previous weighting factor, ωk,old, is discounted based on the elapsed time since the previous state update (Equation 5). In method step 510, the weighting factor for the new observation, ωnew, is determined according to the processing time, tp (Equation 6). In method step 520, a new state estimate, ŷk, is calculated as the weighted average of the new observation and the previous state estimate (Equation 4). In method step 530, the composite weighting factor is updated by adding the new weighting factor to the discounted previous weighting factor:

ωk+1 = ω′k,old + ωnew.    (7)

A distinct advantage of the Rt-EWMA technique is that observations can be "undone" from the state estimate. For example, consider an outlier state observation arising from an error condition that does not reflect the general process state. In some cases, confirmation that the state observation is an outlier may be delayed until after the observation has already been incorporated into the state estimate. In a conventional recursive EWMA setting, an observation cannot be removed from the state estimate once it has been aggregated into it. With the Rt-EWMA technique, on the other hand, because out-of-order processing is already accounted for, an observation yx with a known processing time stamp tp may be removed as follows:

ωx = e^(−(t − tp)/τ),
ŷ′k = (ωk·ŷk − ωx·yx) / (ωk − ωx),
ω′k = ωk − ωx.

It should be noted that the weighting factor used to remove a state observation is the same weighting factor that was used to add it, as it depends only on the processing time. Likewise, the composite weighting factor is adjusted by simply subtracting the weighting factor that was added when the state observation was originally incorporated.

The following example illustrates an exemplary implementation of the t-EWMA or Rt-EWMA technique in a semiconductor manufacturing setting. In the illustrated example, process tool 120 is an etch tool, and metrology tool 130 is configured to measure the depth of the trench formed by the etch process. The feedback control equation used by process controller 150, in accordance with control model 160, to determine the etch time, TE, is:

TE = TB + k·(TDT − D̂),

where TB is a base etch time corresponding to a default etch time value, k is a tuning parameter, TDT is the target trench depth, and D̂ is the state estimate of the observed trench depth. The individual trench depth observations are filtered by the t-EWMA or Rt-EWMA process state filtering techniques in accordance with the present invention to produce the state estimate. The difference between the target trench depth and the state estimate of the trench depth reflects the error term, and the gain constant, k, represents how aggressively process controller 150 reacts to errors in the trench depth. (A sketch of the Rt-EWMA update and removal procedures, together with this etch-time calculation, appears below.)
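The following Python sketch, under assumed parameter values, implements the Rt-EWMA update of method steps 500 through 530 (Equations 4 through 7), the removal procedure just described, and the etch-time feedback calculation of the example above. The class name, the time constant, and all numeric values are illustrative assumptions.

import math

class RtEwma:
    """Recursive time-weighted EWMA (Rt-EWMA) per Equations 4-7."""

    def __init__(self, initial_estimate: float, tau: float = 1.0):
        self.estimate = initial_estimate  # previous state estimate
        self.weight = 1.0                 # composite weighting factor, omega_k
        self.last_update = 0.0            # time stamp of the previous update
        self.tau = tau                    # assumed time constant

    def _age_weight(self, now: float) -> None:
        # Step 500 / Eq. 5: discount the stored weight for the elapsed time.
        self.weight *= math.exp(-(now - self.last_update) / self.tau)
        self.last_update = now

    def update(self, y_new: float, t_p: float, now: float) -> float:
        self._age_weight(now)
        # Step 510 / Eq. 6: weight the new observation by its processing time.
        w_new = math.exp(-(now - t_p) / self.tau)
        # Step 520 / Eq. 4: weighted average of observation and old estimate.
        self.estimate = (w_new * y_new + self.weight * self.estimate) / (w_new + self.weight)
        # Step 530 / Eq. 7: accumulate the composite weighting factor.
        self.weight += w_new
        return self.estimate

    def remove(self, y_x: float, t_p: float, now: float) -> float:
        # Undo an outlier with the same weight it carried when it was added.
        self._age_weight(now)
        w_x = math.exp(-(now - t_p) / self.tau)
        self.estimate = (self.weight * self.estimate - w_x * y_x) / (self.weight - w_x)
        self.weight -= w_x
        return self.estimate

f = RtEwma(initial_estimate=100.0)
f.update(y_new=104.0, t_p=0.5, now=1.0)  # in-order observation
f.update(y_new=90.0, t_p=0.2, now=1.5)   # late-arriving outlier, discounted by its age
print(round(f.remove(y_x=90.0, t_p=0.2, now=1.5), 2))  # ~102.49, outlier undone

# Etch-time feedback from the example above: T_E = T_B + k * (T_DT - estimate).
T_B, k, T_DT = 30.0, 0.1, 105.0  # assumed recipe values
print(round(T_B + k * (T_DT - f.estimate), 3))  # ~30.251

Because both the addition and the removal of an observation use the same processing-time-based weight, the removal in this sketch restores the estimate to the value it had before the outlier arrived.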
When deployed in a semiconductor manufacturing environment, where a regular production cadence and in-order state observations are often absent, the t-EWMA and Rt-EWMA have many advantages over their conventional counterparts. Because an incoming observation is discounted according to its processing time, the order in which observations are received does not significantly affect the reliability of the state estimate, thus enhancing the performance of process controller 150. At the same time, in a recursive implementation, the effect of time gaps between state updates is mitigated by the discounting applied to the previous state weighting factor. Hence, a more recent observation is given a higher weighting, relative to an older observation, in determining the state estimate. Moreover, if an observation is determined to represent outlier data, the observation can be removed from the state estimate.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified, and all such variations are considered within the spirit and scope of the invention. Accordingly, the protection sought herein is as set forth in the claims below.
Methods, systems, and devices for a decoder are described. The memory device may include a substrate, an array of memory cells coupled with the substrate, and a decoder coupled with the substrate. The decoder may be configured to apply a voltage to an access line of the array of memory cells as part of an access operation. The decoder may include a first conductive line configured to carry the voltage applied to the access line of the array of memory cells. In some cases, the decoder may include a doped material extending between the first conductive line and the access line of the array of memory cells in a first direction (e.g., away from a surface of the substrate) and the doped material may be configured to selectively couple the first conductive line of the decoder with the access line of the array of memory cells.
CLAIMS

What is claimed is:

1. A memory device, comprising:
a substrate;
an array of memory cells coupled with the substrate; and
a decoder coupled with the substrate and configured to apply a voltage to an access line of the array of memory cells as part of an access operation, the decoder comprising:
a first conductive line configured to carry the voltage applied to the access line of the array of memory cells; and
a doped material extending between the first conductive line and the access line of the array of memory cells in a first direction away from a surface of the substrate, the doped material configured to selectively couple the first conductive line of the decoder with the access line of the array of memory cells.

2. The memory device of claim 1, further comprising:
a contact extending between the doped material and the access line of the array of memory cells, wherein the doped material selectively couples the first conductive line of the decoder with the contact.

3. The memory device of claim 2, wherein the doped material is directly coupled with the first conductive line.

4. The memory device of claim 1, wherein the decoder comprises:
a conductive material coupled with the doped material and configured to carry a second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line of the array of memory cells.

5. The memory device of claim 4, wherein the conductive material is directly coupled with a surface of the doped material.

6. The memory device of claim 4, wherein the conductive material extends in a second direction parallel to the surface of the substrate.

7. The memory device of claim 4, wherein the decoder comprises:
a second conductive line configured to carry the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line of the array of memory cells.

8. The memory device of claim 7, wherein the decoder comprises:
a contact extending between the second conductive line and the conductive material, the contact configured to carry the second voltage from the second conductive line to the conductive material as part of the access operation.

9. The memory device of claim 4, wherein the doped material and the conductive material comprise a transistor configured to selectively couple the first conductive line of the decoder and the access line of the array of memory cells.

10. The memory device of claim 1, wherein the doped material extends orthogonally from a plane defined by the surface of the substrate.

11. The memory device of claim 1, wherein the doped material has a first doped region and a second doped region, wherein the first doped region is a first distance away from the surface of the substrate and the second doped region is a second distance away from the surface of the substrate different than the first distance.

12. The memory device of claim 1, wherein the doped material is polysilicon.

13. The memory device of claim 1, wherein the array of memory cells comprises self-selecting memory cells.

14.
A memory device, comprising:
a substrate; and
a decoder coupled with the substrate and configured to select a memory cell as part of an access operation, the decoder comprising:
a first conductive line configured to carry a voltage for selecting the memory cell as part of the access operation; and
a doped material extending between the first conductive line and a contact that couples the decoder with the memory cell, the doped material configured to selectively couple the first conductive line with the contact as part of the access operation.

15. The memory device of claim 14, wherein the first conductive line is directly coupled with the doped material.

16. The memory device of claim 14, wherein the decoder comprises:
a conductive material coupled with the doped material and configured to carry a second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the memory cell.

17. The memory device of claim 16, wherein the conductive material extends parallel to a plane defined by a surface of the substrate.

18. The memory device of claim 16, wherein the decoder comprises:
a second conductive line configured to carry the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with an access line of the memory cell.

19. The memory device of claim 14, wherein the doped material is polysilicon and extends orthogonally from a plane defined by a surface of the substrate.

20. A memory device, comprising:
a substrate;
an array of memory cells coupled with the substrate and comprising a first set of access lines and a second set of access lines;
a first decoder coupled with the substrate and the array of memory cells, the first decoder configured to apply a first voltage to a first access line of the first set as part of an access operation, the first decoder comprising:
a first conductive line configured to carry the first voltage for the first access line as part of the access operation; and
a doped material extending between the first conductive line and one of the first set of access lines in a first direction perpendicular to a surface of the substrate, the doped material configured to selectively couple the first conductive line with the first access line as part of the access operation; and
a second decoder coupled with the substrate and the array of memory cells, the second decoder configured to apply a second voltage to a second access line of the second set as part of the access operation.

21. The memory device of claim 20, wherein the second decoder comprises:
a second conductive line configured to carry the second voltage for selecting a memory cell of the array of memory cells as part of the access operation; and
a second doped material extending between the second conductive line and one of the second set of access lines of the array of memory cells in the first direction perpendicular to the surface of the substrate, the second doped material configured to selectively couple the second conductive line with the second access line of the array of memory cells as part of the access operation.

22.
The memory device of claim 20, wherein the second decoder comprises:a second conductive line configured to carry the second voltage for selecting a memory cell of the array of memory cells as part of the access operation; anda second doped material extending in a second direction parallel to the surface of the substrate, the second doped material configured to selectively couple the second conductive line with the second access line of the array of memory cells as part of the access operation.23. The memory device of claim 20, wherein the first decoder is positioned between the substrate and the array of memory cells.24. The memory device of claim 20, wherein the array of memory cells is positioned between the substrate and the first decoder.25. The memory device of claim 20, wherein the first decoder comprises a plurality of nMOS transistors and the second decoder comprises a plurality of a pMOS transistors.26. The memory device of claim 20, wherein the first set of access lines comprise word lines.27. The memory device of claim 20, wherein the array of memory cells comprises a cross-point architecture, a pillar architecture, or a planar architecture.28. A method, comprising:applying a first voltage for selecting a memory cell to a first conductive line of a decoder as part of an access operation of the memory cell;coupling, based at least in part on applying the first voltage and using a doped material of the decoder extending between the first conductive line and an access line in a first direction, the first conductive line with the access line associated with the memory cell as part of the access operation; andapplying the first voltage to the memory cell as part of the access operation based at least in part on coupling the first conductive line of the decoder with the access line.29. The method of claim 28, further comprising:applying a second voltage to a second conductive line of the decoder as part of the access operation, the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line associated with the memory cell, wherein applying the first voltage to the memory cell is based at least in part on applying the second voltage to the second conductive line.30. The method of claim 29, further comprising:selecting the memory cell based at least in part on an intersection of the first voltage and the second voltage, wherein a signal applied to the memory cell as part of the access operation has a positive polarity or a negative polarity.31. The method of claim 29, further comprising:receiving a command comprising an instruction to perform the access operation on the memory cell; andidentifying an address of the memory cell based at least in part on receiving the command, wherein applying the second voltage to the second conductive line is based at least in part on identifying the address.32. The method of claim 28, wherein the access operation is a read operation, and the method further comprises: outputting a logic state stored in the memory cell based at least in part on applying the first voltage to the memory cell.33. The method of claim 28, wherein the access operation is a write operation, and the method further comprises:storing a logic state in the memory cell based at least in part on applying the first voltage to the memory cell.34. 
An apparatus comprising:a decoder configured to apply a voltage as part of an access operation of a memory cell, the decoder comprising:a first conductive line configured to carry the voltage for selecting the memory cell as part of the access operation;a doped material coupled with the first conductive line and a contact, the doped material configured to selectively couple the first conductive line with the contact; anda controller operable, as part of the access operation of the memory cell, to: select the memory cell by applying a first voltage to the first conductive line of the decoder;couple the first conductive line of the decoder with an access line associated with the memory cell based at least in part on selecting the memory cell; andapply the first voltage to the memory cell based at least in part on coupling the first conductive line of the decoder with the access line.35. The apparatus of claim 34, wherein the controller is further operable to:apply a second voltage to a second conductive line of the decoder as part of the access operation, the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line associated with the memory cell, wherein applying the first voltage to the memory cell is based at least in part on applying the second voltage to the second conductive line.
VERTICAL DECODER

CROSS REFERENCE

[0001] The present Application for Patent claims priority to U.S. Patent Application No. 16/206,006 by Redaelli et al., entitled "VERTICAL DECODER", filed November 30, 2018, assigned to the assignee hereof, and is expressly incorporated by reference in its entirety herein.

BACKGROUND

[0002] The following relates generally to operating a memory array and more specifically to a vertical decoder.

[0003] Memory devices are widely used to store information in various electronic devices such as computers, cameras, digital displays, and the like. Information is stored by programming different states of a memory device. For example, binary devices have two states, often denoted by a logic "1" or a logic "0." In other systems, more than two states may be stored. To access the stored information, a component of the electronic device may read, or sense, the stored state in the memory device. To store information, a component of the electronic device may write, or program, the state in the memory device.

[0004] Various types of memory devices exist, including magnetic hard disks, random access memory (RAM), read only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others. Memory devices may be volatile or non-volatile. Non-volatile memory cells may maintain their stored logic state for extended periods of time even in the absence of an external power source. Volatile memory cells may lose their stored state over time unless they are periodically refreshed by an external power source.

[0005] Improving memory devices, generally, may include increasing memory cell density, increasing read/write speeds, increasing reliability, increasing data retention, reducing power consumption, or reducing manufacturing costs, among other metrics. Improved solutions for saving space in the memory array, increasing the memory cell density, or decreasing overall power usage of the memory array may be desired.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 illustrates an example memory device as disclosed herein.
[0007] FIG. 2 illustrates an example of a memory array that supports a vertical decoder as disclosed herein.
[0008] FIG. 3 illustrates an example of a top-down view of a decoder as disclosed herein.
[0009] FIG. 4 illustrates an example of a cross-sectional view of a portion of a memory array that supports a vertical decoder as disclosed herein.
[0010] FIGs. 5 and 6 illustrate examples of memory arrays that support a vertical decoder as disclosed herein.
[0011] FIGs. 7A and 7B illustrate examples of memory device configurations that support a vertical decoder as disclosed herein.
[0012] FIG. 8 shows a block diagram of a device that supports a vertical decoder as disclosed herein.
[0013] FIGs. 9 and 10 show flowcharts illustrating a method or methods that support a vertical decoder as disclosed herein.

DETAILED DESCRIPTION

[0014] Some memory devices may include a decoder coupled with the memory array. In some cases, the decoder may include one or more doped materials formed in a specific orientation to reduce the area of the die used by the decoder. For example, the decoder may include doped materials that extend in a direction different from (e.g., perpendicular to) a surface of a substrate. In some cases, the decoder may also include a conductive line.
The doped material may extend from the conductive line of the decoder to an access line associated with the memory array. In accordance with teachings herein, the decoder may be coupled with the substrate and configured to apply a voltage to the access line of the memory array. In some cases, the conductive line may be configured to carry the voltage applied to the access line, and the doped material may selectively couple the first conductive line of the decoder with the access line of the memory array.

[0015] In some cases, the memory array may be an example of a self-selecting memory array. In some cases, a self-selecting memory array may be fabricated in a three-dimensional fashion and may include vertical memory cells. To save space and resources, the decoder that includes vertical doped materials may be implemented as part of or in the self-selecting memory array. In some examples, the decoders may be examples of row decoders implemented to bias one or more word lines, or examples of column decoders implemented to bias one or more bit lines, or both. The decoders may be positioned above the memory array, below the memory array, or both. In such cases, the size of the memory array may be reduced based on the placement and/or orientation of the one or more decoders. These and other techniques and advantages described herein may thus reduce the size and improve the density of the memory array.

[0016] Features of the disclosure introduced above are further described below in the context of a memory array. Specific examples are then described for operating the memory array related to a vertical decoder in some examples. These and other features of the disclosure are further illustrated by and described with reference to an apparatus diagram and flowcharts that relate to techniques for a vertical decoder.

[0017] FIG. 1 illustrates an example memory device 100 as disclosed herein. Memory device 100 may also be referred to as an electronic memory apparatus. FIG. 1 is an illustrative representation of various components and features of the memory device 100. As such, it should be appreciated that the components and features of the memory device 100 are shown to illustrate functional interrelationships, not their actual physical positions within the memory device 100. In the illustrative example of FIG. 1, the memory device 100 includes a three-dimensional (3D) memory array 102. The 3D memory array 102 includes memory cells 105 that may be programmable to store different states. In some examples, each memory cell 105 may be programmable to store two states, denoted as a logic 0 and a logic 1. In some examples, a memory cell 105 may be configured to store more than two logic states. A memory cell 105 may, in some examples, include a self-selecting memory cell. Although some elements included in FIG. 1 are labeled with a numeric indicator, other corresponding elements are not labeled, though they are the same or would be understood to be similar, in an effort to increase visibility and clarity of the depicted features.

[0018] The 3D memory array 102 may include two or more two-dimensional (2D) memory arrays 103 formed on top of one another. This may increase a number of memory cells that may be placed or created on a single die or substrate as compared with 2D arrays, which in turn may reduce production costs, or increase the performance of the memory device, or both.
The memory array 102 may include two levels of memory cells 105 and may thus be considered a 3D memory array; however, the number of levels is not limited to two. Each level may be aligned or positioned so that memory cells 105 may be aligned (exactly, overlapping, or approximately) with one another across each level, forming a memory cell stack 145. In some cases, the memory cell stack 145 may include multiple self-selecting memory cells laid on top of one another while sharing an access line for both as explained below. In some cases, the self-selecting memory cells may be multi-level self-selecting memory cells configured to store more than one bit of data using multi-level storage techniques.

[0019] In some examples, each row of memory cells 105 is connected to an access line 110, and each column of memory cells 105 is connected to a bit line 115. Access lines 110 and bit lines 115 may be substantially perpendicular to one another and may create an array of memory cells. As shown in FIG. 1, the two memory cells 105 in a memory cell stack 145 may share a common conductive line such as a bit line 115. That is, a bit line 115 may be in electronic communication with the bottom electrode of the upper memory cell 105 and the top electrode of the lower memory cell 105. Other configurations may be possible; for example, a third layer may share an access line 110 with a lower layer. In general, one memory cell 105 may be located at the intersection of two conductive lines such as an access line 110 and a bit line 115. This intersection may be referred to as a memory cell's address. A target memory cell 105 may be a memory cell 105 located at the intersection of an energized access line 110 and bit line 115; that is, access line 110 and bit line 115 may be energized to read or write a memory cell 105 at their intersection. Other memory cells 105 that are in electronic communication with (e.g., connected to) the same access line 110 or bit line 115 may be referred to as untargeted memory cells 105.

[0020] As discussed above, electrodes may be coupled to a memory cell 105 and an access line 110 or a bit line 115. The term electrode may refer to an electrical conductor, and in some cases, may be employed as an electrical contact to a memory cell 105. An electrode may include a trace, wire, conductive line, conductive layer, or the like that provides a conductive path between elements or components of memory device 100. In some examples, a memory cell 105 may include a chalcogenide material positioned between a first electrode and a second electrode. One side of the first electrode may be coupled to an access line 110 and the other side of the first electrode to the chalcogenide material. In addition, one side of the second electrode may be coupled to a bit line 115 and the other side of the second electrode to the chalcogenide material. The first electrode and the second electrode may be the same material (e.g., carbon) or different.

[0021] Operations such as reading and writing may be performed on memory cells 105 by activating or selecting access line 110 and bit line 115. In some examples, access lines 110 may also be known as word lines 110, and bit lines 115 may also be known as digit lines 115. References to access lines, word lines, and bit lines, or their analogues, are interchangeable without loss of understanding or operation. Activating or selecting a word line 110 or a bit line 115 may include applying a voltage to the respective line.
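As an illustration of this addressing scheme, the following minimal Python sketch (illustrative only; the array dimensions and function name are hypothetical and not part of this disclosure) models how energizing one word line and one digit line singles out the target cell at their intersection, while the remaining cells on those lines are untargeted:

```python
# Toy model of cross-point addressing (illustrative; not the patent's implementation).
M, N = 4, 4  # word lines WL_1..WL_M and digit lines DL_1..DL_N

def access(row: int, col: int):
    """Energize WL_row and DL_col; the target cell sits at their intersection."""
    target = (row, col)
    # Cells sharing only one of the two energized lines are untargeted.
    untargeted = [(row, c) for c in range(N) if c != col]
    untargeted += [(r, col) for r in range(M) if r != row]
    return target, untargeted

target, untargeted = access(1, 2)  # e.g., energizing WL_2 and DL_3 (0-indexed here)
print(target, len(untargeted))     # -> (1, 2) 6
```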
Word lines 110 and bit lines 115 may be made of conductive materials such as metals (e.g., copper (Cu), aluminum (Al), gold (Au), tungsten (W), titanium (Ti)), metal alloys, carbon, conductively-doped semiconductors, or other conductive materials, alloys, compounds, or the like.

[0022] Accessing memory cells 105 may be controlled through a row decoder 120 and a column decoder 130. For example, a row decoder 120 may receive a row address from the memory controller 140 and activate the appropriate word line 110 based on the received row address. Similarly, a column decoder 130 may receive a column address from the memory controller 140 and activate the appropriate bit line 115. For example, memory array 102 may include multiple word lines 110, labeled WL_1 through WL_M, and multiple digit lines 115, labeled DL_1 through DL_N, where M and N depend on the array size. Thus, by activating a word line 110 and a bit line 115, e.g., WL_2 and DL_3, the memory cell 105 at their intersection may be accessed. As discussed below in more detail, accessing memory cells 105 may be controlled through a row decoder 120 and a column decoder 130 that may include one or more doped materials that extend in a direction away from a surface of a substrate coupled to the memory array 102.

[0023] Upon accessing, a memory cell 105 may be read, or sensed, by sense component 125 to determine the stored state of the memory cell 105. For example, a voltage may be applied to a memory cell 105 (using the corresponding word line 110 and bit line 115) and the presence of a resulting current may depend on the applied voltage and the threshold voltage of the memory cell 105. In some cases, more than one voltage may be applied. Additionally, if an applied voltage does not result in current flow, other voltages may be applied until a current is detected by sense component 125. By assessing the voltage that resulted in current flow, the stored logic state of the memory cell 105 may be determined. In some cases, the voltage may be ramped up in magnitude until a current flow is detected. In other cases, predetermined voltages may be applied sequentially until a current is detected. Likewise, a current may be applied to a memory cell 105, and the magnitude of the voltage required to create the current may depend on the electrical resistance or the threshold voltage of the memory cell 105.

[0024] In some examples, a memory cell may be programmed by providing an electric pulse to the cell, which may include a memory storage element. The pulse may be provided via a first access line (e.g., word line 110) or a second access line (e.g., bit line 115), or a combination thereof. In some cases, upon providing the pulse, ions may migrate within the memory storage element, depending on the polarity of the memory cell 105. Thus, a concentration of ions relative to the first side or the second side of the memory storage element may be based at least in part on a polarity of a voltage between the first access line and the second access line. In some cases, asymmetrically shaped memory storage elements may cause ions to be more crowded at portions of an element having more area. Certain portions of the memory storage element may have a higher resistivity and thus may give rise to a higher threshold voltage than other portions of the memory storage element. This description of ion migration represents an example of a mechanism of the self-selecting memory cell for achieving the results described herein.
This example of a mechanism should not be considered limiting. This disclosure also includes other examples of mechanisms of the self-selecting memory cell for achieving the results described herein.

[0025] Sense component 125 may include various transistors or amplifiers to detect and amplify a difference in the signals, which may be referred to as latching. The detected logic state of memory cell 105 may then be output through column decoder 130 as output 135. In some cases, sense component 125 may be part of a column decoder 130 or row decoder 120. Or, sense component 125 may be connected to or in electronic communication with column decoder 130 or row decoder 120. A person of ordinary skill in the art would appreciate that the sense component may be associated with either the column decoder or the row decoder without losing its functional purpose.

[0026] A memory cell 105 may be set or written by similarly activating the relevant word line 110 and bit line 115, and at least one logic value may be stored in the memory cell 105. Column decoder 130 or row decoder 120 may accept data, for example input/output 135, to be written to the memory cells 105. In the case of a self-selecting memory cell including a chalcogenide material, a memory cell 105 may be written to store a logic state in the memory cell 105 by applying the first voltage to the memory cell 105 as part of the access operation based on coupling the first conductive line of the decoder (e.g., row decoder 120 or column decoder 130) with the access line (e.g., word line 110 or bit line 115).

[0027] The memory controller 140 may control the operation (e.g., read, write, re-write, refresh, discharge) of memory cells 105 through the various components, for example, row decoder 120, column decoder 130, and sense component 125. In some cases, one or more of the row decoder 120, column decoder 130, and sense component 125 may be co-located with the memory controller 140. Memory controller 140 may generate row and column address signals to activate the desired word line 110 and bit line 115. Memory controller 140 may also generate and control various voltages or currents used during the operation of memory device 100.

[0028] The memory controller 140 may be configured to select the memory cell 105 by applying a first voltage to the first conductive line of the decoder (e.g., row decoder 120 or column decoder 130). In some cases, the memory controller 140 may be configured to couple the first conductive line of the decoder with an access line (e.g., word line 110 or bit line 115) associated with the memory cell 105 based on selecting the memory cell 105. The memory controller 140 may be configured to apply the first voltage to the memory cell 105 based at least in part on coupling the first conductive line of the decoder with the access line.

[0029] In some examples, the memory controller 140 may be configured to apply a second voltage to a second conductive line of the decoder as part of the access operation. In some cases, the second voltage may cause the doped material to selectively couple the first conductive line of the decoder with the access line associated with the memory cell 105. Applying the first voltage to the memory cell 105 may be based on applying the second voltage to the second conductive line. For example, the memory controller 140 may select the memory cell 105 based on an intersection of the first voltage and the second voltage.
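The sequential read sensing described above can be pictured with a small Python sketch (a toy model under assumed values; the thresholds, voltage steps, and state mapping are hypothetical and not taken from this disclosure):

```python
# Toy model of reading a cell by applying predetermined voltages sequentially
# until a current is detected (all values hypothetical; illustrative only).

READ_STEPS = [0.8, 1.2, 1.6, 2.0]  # example voltage staircase, in volts

class Cell:
    def __init__(self, vth: float):
        self.vth = vth  # threshold voltage set by the programmed state

    def conducts(self, v: float) -> bool:
        return v >= self.vth  # current flows once the applied voltage exceeds vth

def read(cell: Cell) -> int:
    # The second voltage (on the gate-like conductive material) is assumed to have
    # already coupled the first conductive line to the cell's access line.
    for v in READ_STEPS:  # apply predetermined voltages in order
        if cell.conducts(v):
            # Assumed mapping: a low threshold encodes logic 1, a high one logic 0.
            return 1 if v <= READ_STEPS[0] else 0
    raise RuntimeError("no current detected within the read window")

print(read(Cell(vth=0.7)), read(Cell(vth=1.5)))  # -> 1 0
```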
In some cases, a signal applied to the memory cell 105 as part of the access operation may have a positive polarity or a negative polarity.

[0030] In some examples, the memory controller 140 may receive a command comprising an instruction to perform the access operation on the memory cell 105 and identify an address of the memory cell 105 based on receiving the command. In some cases, applying the second voltage to the second conductive line may be based on identifying the address. If the access operation is a read operation, the memory controller 140 may be configured to output a logic state stored in the memory cell 105 based on applying the first voltage to the memory cell 105. If the access operation is a write operation, the memory controller 140 may store a logic state in the memory cell 105 based on applying the first voltage to the memory cell 105.

[0031] FIG. 2 illustrates an example of a 3D memory array 200 that supports a vertical decoder as disclosed herein. Memory array 200 may be an example of portions of memory array 102 described with reference to FIG. 1. Memory array 200 may include a first array or deck 205 of memory cells that is positioned above a substrate 204 and a second array or deck 210 of memory cells on top of the first array or deck 205. Memory array 200 may also include word line 110-a and word line 110-b, and bit line 115-a, which may be examples of word line 110 and bit line 115, as described with reference to FIG. 1. Memory cells of the first deck 205 and the second deck 210 each may have one or more self-selecting memory cells (e.g., self-selecting memory cell 220-a and self-selecting memory cell 220-b, respectively). Although some elements included in FIG. 2 are labeled with a numeric indicator, other corresponding elements are not labeled, though they are the same or would be understood to be similar, in an effort to increase visibility and clarity of the depicted features.

[0032] Self-selecting memory cells of the first deck 205 may include first electrode 215-a, self-selecting memory cell 220-a (e.g., including chalcogenide material), and second electrode 225-a. In addition, self-selecting memory cells of the second deck 210 may include a first electrode 215-b, self-selecting memory cell 220-b (e.g., including chalcogenide material), and second electrode 225-b. The self-selecting memory cells of the first deck 205 and second deck 210 may, in some examples, have common conductive lines such that corresponding self-selecting memory cells of each deck 205 and 210 may share bit lines 115 or word lines 110 as described with reference to FIG. 1. For example, first electrode 215-b of the second deck 210 and the second electrode 225-a of the first deck 205 may be coupled to bit line 115-a such that bit line 115-a is shared by vertically adjacent self-selecting memory cells. In accordance with the teachings herein, a decoder may be positioned above or below each deck if the memory array 200 includes more than one deck. For example, a decoder may be positioned above first deck 205 and above second deck 210.

[0033] The architecture of memory array 200 may be referred to as a cross-point architecture, in some cases, in which a memory cell is formed at a topological cross-point between a word line and a bit line as illustrated in FIG. 2. Such a cross-point architecture may offer relatively high-density data storage with lower production costs compared to other memory architectures.
For example, the cross-point architecture may have memory cells with a reduced area and, resultantly, an increased memory cell density compared to other architectures. For example, the architecture may have a 4F² memory cell area, where F is the smallest feature size, compared to other architectures with a 6F² memory cell area, such as those with a three-terminal selection component. For example, DRAM may use a transistor, which is a three-terminal device, as the selection component for each memory cell and may have a larger memory cell area compared to the cross-point architecture.

[0034] While the example of FIG. 2 shows two memory decks, other configurations are possible. In some examples, a single memory deck of self-selecting memory cells may be constructed above a substrate 204, which may be referred to as a two-dimensional memory. In some examples, three or four memory decks of memory cells may be configured in a similar manner in a three-dimensional cross-point architecture.

[0035] In some examples, one or more of the memory decks may include a self-selecting memory cell 220 that includes chalcogenide material. The self-selecting memory cell 220 may, for example, include a chalcogenide glass such as, for example, an alloy of selenium (Se), tellurium (Te), arsenic (As), antimony (Sb), carbon (C), germanium (Ge), and silicon (Si). In some examples, a chalcogenide material having primarily selenium (Se), arsenic (As), and germanium (Ge) may be referred to as SAG-alloy. In some examples, SAG-alloy may include silicon (Si) and such chalcogenide material may be referred to as SiSAG-alloy. In some examples, the chalcogenide glass may include additional elements such as hydrogen (H), oxygen (O), nitrogen (N), chlorine (Cl), or fluorine (F), each in atomic or molecular forms.

[0036] In some examples, a self-selecting memory cell 220 including chalcogenide material may be programmed to a logic state by applying a first voltage. By way of example, when a particular self-selecting memory cell 220 is programmed, elements within the cell separate, causing ion migration. Ions may migrate towards a particular electrode, depending on the polarity of the voltage applied to the memory cell. For example, in a self-selecting memory cell 220, ions may migrate towards the negative electrode. The memory cell may then be read by applying a voltage across the cell to sense the stored state. The threshold voltage seen during a read operation may be based on the distribution of ions in the memory cell and the polarity of the read pulse.

[0037] For example, if a memory cell has a given distribution of ions, the threshold voltage detected during the read operation may be different for a first read voltage with a first polarity than it is with a second read voltage having a second polarity. Depending on the polarity of the memory cell, this concentration of migrating ions may represent a logic "1" or logic "0" state. This description of ion migration represents an example of a mechanism of the self-selecting memory cell for achieving the results described herein. This example of a mechanism should not be considered limiting. This disclosure also includes other examples of mechanisms of the self-selecting memory cell for achieving the results described herein.

[0038] In some cases, a first voltage may be applied to a first conductive line of a decoder as part of an access operation of the self-selecting memory cell 220.
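As a rough illustration of the polarity-dependent threshold behavior described in the preceding paragraph, and of how it could be exercised during such an access operation, consider the following toy model (the voltage values and the state mapping are hypothetical, not taken from this disclosure):

```python
# Toy model: the threshold seen at read depends on whether the read polarity
# matches the programmed ion distribution (hypothetical values; illustrative only).

VTH_MATCHED = 1.0     # volts, read polarity matches the programmed polarity
VTH_MISMATCHED = 2.0  # volts, read polarity opposes the programmed polarity
V_READ = 1.5          # fixed read voltage chosen between the two thresholds

def sensed_threshold(programmed_polarity: int, read_polarity: int) -> float:
    return VTH_MATCHED if programmed_polarity == read_polarity else VTH_MISMATCHED

def read_state(programmed_polarity: int, read_polarity: int = +1) -> int:
    conducts = V_READ >= sensed_threshold(programmed_polarity, read_polarity)
    return 1 if conducts else 0  # assumed mapping of conduction to logic state

print(read_state(+1), read_state(-1))  # -> 1 0
```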
Upon applying the first voltage, the first conductive line may be coupled with the access line (e.g., word line 110-a, word line 110-b, or bit line 115-a) associated with the self-selecting memory cell 220. For example, the first conductive line may be coupled with the access line based on a doped material of the decoder which extends between the first conductive line and the access line in a first direction.

[0039] In some examples, the first voltage may be applied to the self-selecting memory cell 220 based on coupling the first conductive line of the decoder with the access line. The decoder may include one or more doped materials that extend between the first conductive line and the access line of the memory array 200 of memory cells in a first direction away from a surface of the substrate 204. In some cases, the decoder may be coupled with the substrate 204.

[0040] FIG. 3 illustrates an example of a top-down view of a decoder 300 as disclosed herein. Decoder 300 may be an example of a row decoder 120 or column decoder 130 described with reference to FIG. 1. Decoder 300 may include doped material 310 that extends in a direction away from a surface of the substrate (not shown). Decoder 300 may be an example of a last-level decoder of a memory array.

[0041] Decoder 300 may include at least a first conductive line 305. In some cases, decoder 300 may include a plurality of first conductive lines 305. First conductive line 305 may be configured to carry a voltage that is applied to the access line of the array of memory cells (not shown). For example, each first conductive line 305 may receive a signal from an access line within decoder 300. First conductive line 305 may extend in a second direction.

[0042] In some cases, decoder 300 may include doped materials 310 that may extend between first conductive line 305 and the access line (not shown). For example, doped material 310 may extend in a direction (e.g., first direction) away from the surface of the substrate. In some cases, the direction may be perpendicular or orthogonal to a plane defined by a surface of the substrate. For example, the first direction may be perpendicular to the second direction in which the first conductive line 305 extends. Doped material 310 may be configured to selectively couple first conductive line 305 of decoder 300 with the access line. In some cases, doped material 310 may comprise a semiconductor material such as polysilicon. In some cases, polysilicon may be deposited at a lower temperature than other materials, thereby increasing the compatibility between the polysilicon material of decoder 300 and the memory array.

[0043] Decoder 300 may also include contacts 315. Contact 315 may extend between doped material 310 and other conductive lines of the decoder 300 or access lines of the array of memory cells. In some cases, doped material 310 may selectively couple first conductive line 305 of decoder 300 with contact 315. Contact 315 may also extend between conductive material 320 and a second conductive line (not shown).

[0044] In some examples, decoder 300 may include at least one conductive material 320. Conductive material 320 may be coupled with doped material 310. In some cases, conductive material 320 may be configured to carry a second voltage (e.g., a different voltage than the voltage applied to the access line) for causing doped material 310 to selectively couple first conductive line 305 with the access line of the memory array (e.g., array of memory cells).
In that case, one or more conductive materials 320 may receive a signal from an access line associated with the memory array. In some cases, the access line may be an example of a word line. Each conductive material 320 may contact an access line of the memory array.

[0045] In some cases, decoder 300 may include one or more transistors. For example, doped material 310 and conductive material 320 may comprise a transistor. The transistor may selectively couple first conductive line 305 with the access line of the memory array. In that case, conductive material 320 may be an example of a gate of the transistor and doped material 310 may be an example of a source of the transistor, a drain of the transistor, or both. In some cases, conductive material 320 may contact an oxide of doped material 310. The transistor may be an example of an nMOS type transistor or a pMOS type transistor. In some cases, polysilicon transistors as decoders may allow for a large degree of freedom as compared to polysilicon transistors as selectors in the back-end of the memory array. For example, polysilicon transistors in the front-end of the memory array may allow the use of a higher thermal budget for dopant activation, thereby reducing the device engineering complexity. In some cases, a gate oxide may be positioned between the conductive material 320 and the doped material 310.

[0046] In some examples, if decoder 300 includes doped material 310 that extends in a direction away from a surface of the substrate, the size and dimensions of decoder 300 may be optimized. For example, distance 325 between two conductive materials 320 may decrease when a vertical decoder is implemented. In some examples, distance 325 between conductive materials 320 may be 120 nm. In some cases, width 330 of conductive material 320 may also decrease when a vertical decoder is implemented. For example, width 330 of conductive material 320 may be 120 nm. The combined distance 335 of distance 325 and width 330 may be 240 nm. In that case, the combined distance 335 may decrease when a vertical decoder is implemented.

[0047] In some cases, distance 340 between two first conductive lines 305 may increase when a vertical decoder is implemented. For example, distance 340 between first conductive lines 305 may be 120 nm. In some cases, width 345 of first conductive line 305 may decrease when a vertical decoder is implemented. For example, width 345 of first conductive line 305 may be 120 nm. The combined distance 350 of distance 340 and width 345 may be 240 nm. In that case, the combined distance 350 may decrease when a vertical decoder is implemented. For example, the area of an nMOS transistor may be 0.015 µm². As described below in further detail, decoder 300 may be viewed via perspective line 355.

[0048] FIG. 4 illustrates an example of a cross-sectional view of a portion of a memory array 400 that supports a vertical decoder as disclosed herein. The portion of the memory array 400 may include a decoder 402 that may include doped materials 410-a, 410-b, 410-c, and/or 410-d that extend in a direction away from a surface 435 of the substrate 425. Decoder 402 may be an example of decoder 300 as described with reference to FIG. 3. Doped materials 410-a, 410-b, 410-c, and 410-d may be examples of doped material 310 described with reference to FIG. 3.

[0049] The portion of the memory array 400 may include substrate 425, which may be an example of substrate 204 as described in reference to FIG. 2.
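Before turning to the rest of FIG. 4, the example dimensions quoted above for FIG. 3 can be tied together with simple arithmetic; the sketch below uses only the figures stated in the text (the variable names are ours, introduced for illustration):

```python
# Pitch arithmetic for the example FIG. 3 dimensions quoted above (in nm).
gate_spacing = 120  # distance 325 between two conductive materials 320
gate_width = 120    # width 330 of a conductive material 320
combined_335 = gate_spacing + gate_width  # combined distance 335

line_spacing = 120  # distance 340 between two first conductive lines 305
line_width = 120    # width 345 of a first conductive line 305
combined_350 = line_spacing + line_width  # combined distance 350

print(combined_335, combined_350)  # -> 240 240, matching the stated 240 nm
# The text separately cites an example nMOS transistor area of 0.015 um^2.
```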
In some examples, decoder 402 may be coupled with substrate 425. Substrate 425 may be above or below decoder 402. In some cases, decoder 402 may be configured to apply a voltage to an access line of an array of memory cells (e.g., a word line or digit line) as part of an access operation. Decoder 402 may also include first conductive line 405, which may be an example of first conductive line 305 as described in reference to FIG. 3. In some cases, first conductive line 405 may be directly coupled with doped material 410-a.

[0050] In some cases, decoder 402 may include doped materials 410-a through 410-d. Doped materials 410-a through 410-d may be a polysilicon material. In some examples, doped materials 410-a through 410-d may extend between first conductive line 405 and the access line of the array of memory cells (e.g., word line or digit line) in a direction away from a surface 435 of substrate 425. For example, doped materials 410-a through 410-d may extend orthogonally from a plane defined by the surface 435 of substrate 425.

[0051] In some examples, doped material 410 may include a first doped region 440 and a second doped region 445. For example, the first doped region 440 may be a first distance away from the surface 435 of substrate 425, and the second doped region 445 may be a second distance away from the surface 435 of substrate 425. In that case, the first distance and the second distance away from the surface 435 of substrate 425 may be different. In some cases, the first doped region 440 and the second doped region 445 may include similarly doped materials. In other examples, the first doped region 440 and the second doped region 445 may include different doped materials. For example, the first doped region 440 may include polysilicon and the second doped region 445 may include a different semiconductor material.

[0052] Decoder 402 may include one or more contacts 415 including contacts 415-a and 415-b, which may be examples of contact 315 described in reference to FIG. 3. Contact 415-a may extend between doped material 410-a and the access line of the array of memory cells. In such cases, contact 415-a may be directly coupled with doped material 410-a. In some cases, doped material 410-a may selectively couple first conductive line 405 of decoder 402 with contact 415-a.

[0053] Decoder 402 may also include conductive material 420 that may be coupled with doped material 410-a and 410-b, and which may be an example of conductive material 320 as described in reference to FIG. 3. Conductive material 420 may be configured to carry a second voltage for causing doped material 410-a to selectively couple first conductive line 405 with the access line or the contact 415-a. In some cases, conductive material 420 may be directly coupled with a surface of doped material 410-a. Conductive material 420 may contact an oxide of doped material 410-a. In some examples, conductive material 420 may extend in a direction parallel to the surface of substrate 425. Doped material 410-a may extend in a direction perpendicular to a surface of the conductive material 420.

[0054] In some cases, decoder 402 may include second conductive line 430. Second conductive line 430 may be coupled to contact 415-b. For example, contact 415-b may extend between second conductive line 430 and conductive material 420.
Second conductive line 430 may carry the second voltage for causing doped material 410-a to couple first conductive line 405 of decoder 402 with the access line. In some cases, contact 415-b may carry the second voltage from second conductive line 430 to conductive material 420 as part of the access operation. Second conductive line 430 may extend in a direction parallel to the surface of substrate 425. In that case, doped material 410-a may extend in a direction perpendicular to a surface of the second conductive line 430. In some cases, the first conductive line 405 may be an example of a global word line or global digit line of the decoder 402 and the second conductive line 430 may be an example of a local word line or a local digit line of the decoder 402.

[0055] FIG. 5 illustrates an example of a memory array 500 that supports a vertical decoder as disclosed herein. Memory array 500 may include decoders 502-a and 502-b, substrate 525, an array of memory cells 535, and access lines 530-a (e.g., first set of access lines) and 530-b (e.g., second set of access lines). Decoders 502-a and 502-b and substrate 525 may be examples of the decoders and substrates described in reference to FIGs. 2-4. Memory array 500 may include the array of memory cells 535 coupled with substrate 525. In some cases, the access lines 530-a may comprise word lines or digit lines. In some examples, the access lines 530-b may comprise bit lines, digit lines, or word lines. In other examples, memory array 500 may be an example of a cross-point architecture, a pillar architecture, or a planar architecture. Memory array 500 may be illustrated as an electrical schematic representation.

[0056] Decoders 502-a and 502-b may each be an example of a vertical decoder as described herein. Decoder 502-a may be an example of a first decoder (e.g., a row decoder) coupled with substrate 525 and array of memory cells 535. In some cases, decoder 502-a may include a plurality of nMOS transistors. In some cases, decoder 502-a may include conductive lines 505-a (e.g., first conductive line), doped materials 510-a, contacts 515-a, contacts 515-b, and conductive material 520-a, which may be examples of the first conductive lines, doped materials, contacts, and conductive materials described in reference to FIGs. 3 and 4. In some examples, decoder 502-a may be positioned above the array of memory cells 535 (not shown), below the array of memory cells 535, or both.

[0057] Decoder 502-a may apply a first voltage to an access line (e.g., first access line) of access lines 530-a as part of an access operation. Conductive line 505-a may carry the first voltage for the access operation. In some cases, conductive line 505-a may be coupled to the access line of access lines 530-a based on applying the first voltage. For example, the contact 515-a may carry a signal from another conductive line to cause the first conductive line 505-a to be coupled with the access lines 530-a. The contacts 515-b may couple the doped materials 510-a with the access lines 530-a. In some cases, access lines 530-a may be selected based on activating the first conductive line 505-a and the conductive material 520-a. The first voltage may also be applied to a memory cell of the array of memory cells 535 based on coupling conductive line 505-a to the access line of the access lines 530-a. In some cases, a logic state stored in the memory cell of the array of memory cells 535 may be outputted based on applying the first voltage.
In that case, the access operation may be a read operation. In some examples, a logic state may be stored in the memory cell of the array of memory cells 535 based on applying the first voltage. In that case, the access operation may be a write operation.

[0058] Doped material 510-a may extend between conductive line 505-a and one of the access lines 530-a (or contacts 515-b) in a direction perpendicular to the surface of substrate 525. That is, doped material 510-a may extend in a direction perpendicular to a surface of conductive material 520-a. In some cases, conductive line 505-a and access lines 530-a may be selectively coupled via doped material 510-a.

[0059] In some cases, memory array 500 may include decoder 502-b, which may be an example of a second decoder (e.g., a column decoder). In some cases, decoder 502-b may include a plurality of pMOS transistors. For example, decoder 502-b may be coupled with substrate 525 and the array of memory cells 535. In some cases, decoder 502-b may include conductive lines 505-b (e.g., second conductive line), doped materials 510-b, contacts 515-c, contacts 515-d, and conductive material 520-b. In some examples, decoder 502-b may be positioned above the array of memory cells 535, below the array of memory cells 535 (not shown), or both.

[0060] In some cases, fabrication techniques to form memory array 500 may include a different masking step to form each of the different lengths of contacts 515-d (e.g., the distance between doped material 510-b and access line 530-b). In some examples, the contacting scheme may be an example of a staggered configuration. For example, the length of contact 515-d may increase as the distance between contact 515-d and the array of memory cells 535 increases. In such cases, the bottom access line 530-b may extend further than the top access line 530-b. The contacting scheme may be implemented via additional conductive layers (not shown). In some examples, a single masking step after deposition may be implemented to obtain the contacting scheme (e.g., staggered configuration).

[0061] In some examples, decoder 502-b may apply a second voltage to an access line (e.g., second access line) of access lines 530-b as part of the access operation. Conductive line 505-b may carry a second voltage for selecting a memory cell of the array of memory cells 535 as part of the access operation. The contacts 515-d may couple the doped materials 510-b with the access lines 530-b. In some cases, access lines 530-b may be selected based on activating the conductive line 505-b and the conductive material 520-b. In some cases, the contact 515-c may carry a signal from another conductive line to cause the first conductive line 505-b to be coupled with the access lines 530-b. A memory cell included in the array of memory cells 535 may be selected based on the intersection of activated access lines 530-a and 530-b. For example, the intersection of the first voltage and second voltage may select the memory cell. In that case, the signal applied to the memory cell of the array of memory cells 535 may have a positive or negative polarity.

[0062] In some cases, doped material 510-b may extend between conductive line 505-b and one of the access lines 530-b (or contacts 515-d) in a direction perpendicular to the surface of substrate 525. Conductive line 505-b and access lines 530-b may be coupled via doped material 510-b.

[0063] FIG. 6 illustrates an example of a memory array that supports a vertical decoder as disclosed herein.
Memory array 600 may include a first decoder 602-a, a second decoder 602-b, substrate 625, an array of memory cells 635, and access lines 630-a (e.g., first set of access lines) and 630-b (e.g., second set of access lines). Memory array 600 may include the array of memory cells 635 coupled with substrate 625. In some cases, the access lines 630-a may comprise word lines or digit lines. In some examples, the access lines 630-b may comprise bit lines or word lines. In other examples, memory array 600 may be an example of a cross-point architecture, a pillar architecture, or a planar architecture. Memory array 600 may be an example of memory array 500, as described in reference to FIG. 5.

[0064] First decoder 602-a may be an example of a vertical decoder as described herein. First decoder 602-a may be coupled with substrate 625 and array of memory cells 635. In some cases, first decoder 602-a may include a plurality of nMOS transistors or a plurality of pMOS transistors. In some cases, first decoder 602-a may include conductive lines 605-a (e.g., first conductive line), doped materials 610-a, contacts 615-a, contacts 615-b, and conductive material 620-a, which may be examples of the first conductive lines, doped materials, contacts, and conductive materials described in reference to FIGs. 3-5.

[0065] First decoder 602-a may apply a first voltage to an access line (e.g., first access line) of access lines 630-a as part of an access operation. Conductive lines 605-a may carry the first voltage for the access operation (e.g., through the contact 615-b). Doped materials 610-a may extend between conductive line 605-a and one of the access lines 630-a in a direction perpendicular to the surface of substrate 625. Conductive line 605-a and access lines 630-a may be coupled via doped material 610-a. For example, the contact 615-a may carry a signal from another conductive line to cause the first conductive line 605-a to be coupled with the access lines 630-a.

[0066] In some cases, memory array 600 may include the second decoder 602-b, which may be an example of a planar decoder. In some cases, second decoder 602-b may include a plurality of pMOS transistors or nMOS transistors. For example, second decoder 602-b may be coupled with substrate 625 and the array of memory cells 635. In some cases, second decoder 602-b may include conductive lines 605-b (e.g., second conductive line), doped materials 610-b, contacts 615-c, contacts 615-d, and conductive material 620-b, which may be examples of the first conductive lines, doped materials, contacts, and conductive materials described in reference to FIGs. 3-5.

[0067] In some examples, second decoder 602-b may apply a second voltage to an access line (e.g., second access line) of access lines 630-b as part of the access operation. Conductive lines 605-b may carry a second voltage for selecting a memory cell of the array of memory cells 635 as part of the access operation. In some cases, the doped material 610-b may extend parallel to a surface of the substrate 625. The doped material 610-b may include a plurality of doped regions that are configured to couple a first conductive line 605-b with an access line 630-b based at least in part on a signal applied to one or more of the conductive materials 620-b.
The contacts 615-c may couple the first conductive lines 605-b with first doped regions of the doped material 610-b, and contacts 615-d may couple the access lines 630-b with second doped regions of the doped material 610-b.

[0068] Doped material 610-b may extend in a direction parallel to the surface of substrate 625. In such cases, doped material 610-b may extend in a direction perpendicular to a surface of doped material 610-a. Conductive line 605-b and access lines 630-b may be coupled via doped material 610-b. In some cases, the memory array 600 may include a first decoder 602-a that includes doped materials 610-a that extend in a direction perpendicular to the surface of substrate 625 and a second decoder 602-b that includes doped materials 610-b that extend in a direction parallel to the surface of substrate 625.

[0069] FIG. 7A illustrates an example of a memory device configuration 700-a that supports a vertical decoder as disclosed herein. Memory device configuration 700-a may include decoder 705-a, array of memory cells 710-a, and substrate 715-a, which may be examples of the decoders, arrays of memory cells, and substrates described in reference to FIGs. 3-6. In some cases, array of memory cells 710-a may be positioned between substrate 715-a and decoder 705-a.

[0070] FIG. 7B illustrates an example of a memory device configuration 700-b that supports a vertical decoder as disclosed herein. Memory device configuration 700-b may include decoder 705-b, array of memory cells 710-b, and substrate 715-b, which may be examples of the decoders, arrays of memory cells, and substrates described in reference to FIGs. 3-6. In some cases, decoder 705-b may be positioned between array of memory cells 710-b and substrate 715-b.

[0071] FIG. 8 shows a block diagram 800 of a device 805 that supports a vertical decoder as disclosed herein. In some examples, the device 805 may be an example of a memory array. The device 805 may be an example of portions of a memory controller (e.g., memory controller 140 as described with reference to FIG. 1). The device 805 may include selection component 810, coupling component 815, voltage component 820, command component 825, and logic state component 830. Each of these components may communicate, directly or indirectly, with one another (e.g., via one or more buses).

[0072] Selection component 810 may apply a first voltage for selecting a memory cell to a first conductive line of a decoder as part of an access operation of the memory cell. In some examples, selection component 810 may select the memory cell based at least in part on an intersection of the first voltage and the second voltage, wherein a signal applied to the memory cell as part of the access operation has a positive polarity or a negative polarity.

[0073] Coupling component 815 may couple, based at least in part on applying the first voltage and using a doped material of the decoder extending between the first conductive line and an access line in a first direction, the first conductive line with the access line associated with the memory cell as part of the access operation.

[0074] Voltage component 820 may apply the first voltage to the memory cell as part of the access operation based at least in part on coupling the first conductive line of the decoder with the access line.
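Taken together, the selection, coupling, and voltage components just described might cooperate as in the following schematic Python sketch (the class and method names are hypothetical, and the component behavior is reduced to log messages; this is not the patent's implementation):

```python
# Schematic sketch of device 805's components sequencing an access operation
# (illustrative only; not the patent's implementation).

class SelectionComponent:
    def apply_select_voltage(self, cell):  # corresponds to selecting the cell
        print(f"apply first voltage to first conductive line for cell {cell}")

class CouplingComponent:
    def couple(self, cell):  # doped material couples line and access line
        print(f"couple first conductive line with access line of cell {cell}")

class VoltageComponent:
    def apply_to_cell(self, cell):  # voltage reaches the cell itself
        print(f"apply first voltage to cell {cell}")

def access_operation(cell):
    SelectionComponent().apply_select_voltage(cell)
    CouplingComponent().couple(cell)
    VoltageComponent().apply_to_cell(cell)

access_operation((2, 3))
```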
In some examples, voltage component 820 may apply a second voltage to a second conductive line of the decoder as part of the access operation, the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line associated with the memory cell, wherein applying the first voltage to the memory cell is based at least in part on applying the second voltage to the second conductive line.

[0075] Command component 825 may receive a command comprising an instruction to perform the access operation on the memory cell. In some examples, command component 825 may identify an address of the memory cell based at least in part on receiving the command, wherein applying the second voltage to the second conductive line is based at least in part on identifying the address.

[0076] Logic state component 830 may output a logic state stored in the memory cell based at least in part on applying the first voltage to the memory cell. In that case, the access operation is a read operation. In some examples, logic state component 830 may store a logic state in the memory cell based at least in part on applying the first voltage to the memory cell. In that case, the access operation is a write operation.

[0077] FIG. 9 shows a flowchart illustrating a method 900 that supports a vertical decoder as disclosed herein. The operations of method 900 may be implemented by a memory controller or its components as described herein. For example, the operations of method 900 may be performed by a memory array as described with reference to FIG. 8 or a memory controller 140 as described with reference to FIG. 1. In some examples, a memory controller may execute a set of instructions to control the functional elements of the memory array to perform the functions described below. Additionally or alternatively, a memory controller may perform portions of the functions described below using special-purpose hardware.

[0078] At 905, the memory controller may apply a first voltage for selecting a memory cell to a first conductive line of a decoder as part of an access operation of the memory cell. The operations of 905 may be performed according to the methods described herein. In some examples, portions of the operations of 905 may be performed by a selection component as described with reference to FIG. 8.

[0079] At 910, the memory controller may couple, based at least in part on applying the first voltage and using a doped material of the decoder extending between the first conductive line and an access line in a first direction, the first conductive line with the access line associated with the memory cell as part of the access operation. The operations of 910 may be performed according to the methods described herein. In some examples, portions of the operations of 910 may be performed by a coupling component as described with reference to FIG. 8.

[0080] At 915, the memory controller may apply the first voltage to the memory cell as part of the access operation based at least in part on coupling the first conductive line of the decoder with the access line. The operations of 915 may be performed according to the methods described herein. In some examples, portions of the operations of 915 may be performed by a voltage component as described with reference to FIG. 8.

[0081] FIG. 10 shows a flowchart illustrating a method 1000 that supports a vertical decoder as disclosed herein.
The operations of method 1000 may be implemented by a memory controller or its components as described herein. For example, the operations of method 1000 may be performed by a memory array as described with reference to FIG. 8 or a memory controller 140 as described with reference to FIG. 1. In some examples, a memory controller may execute a set of instructions to control the functional elements of the memory array to perform the functions described below. Additionally or alternatively, a memory controller may perform portions of the functions described below using special-purpose hardware.

[0082] At 1005, the memory controller may apply a first voltage for selecting a memory cell to a first conductive line of a decoder as part of an access operation of the memory cell. The operations of 1005 may be performed according to the methods described herein. In some examples, portions of the operations of 1005 may be performed by a selection component as described with reference to FIG. 8.

[0083] At 1010, the memory controller may couple, based at least in part on applying the first voltage and using a doped material of the decoder extending between the first conductive line and an access line in a first direction, the first conductive line with the access line associated with the memory cell as part of the access operation. The operations of 1010 may be performed according to the methods described herein. In some examples, portions of the operations of 1010 may be performed by a coupling component as described with reference to FIG. 8.

[0084] At 1015, the memory controller may apply the first voltage to the memory cell as part of the access operation based at least in part on coupling the first conductive line of the decoder with the access line. The operations of 1015 may be performed according to the methods described herein. In some examples, portions of the operations of 1015 may be performed by a voltage component as described with reference to FIG. 8.

[0085] At 1020, the memory controller may apply a second voltage to a second conductive line of the decoder as part of the access operation, the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line associated with the memory cell, wherein applying the first voltage to the memory cell is based at least in part on applying the second voltage to the second conductive line. The operations of 1020 may be performed according to the methods described herein. In some examples, portions of the operations of 1020 may be performed by a voltage component as described with reference to FIG. 8.

[0086] In some examples, an apparatus as described herein may perform a method or methods, such as the method 1000. The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for applying a first voltage for selecting a memory cell to a first conductive line of a decoder as part of an access operation of the memory cell.
The apparatus may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for coupling, based at least in part on applying the first voltage and using a doped material of the decoder extending between the first conductive line and an access line in a first direction, the first conductive line with the access line associated with the memory cell as part of the access operation, and may include features, means, or instructions (e.g., a non-transitory computer-readable medium storing instructions executable by a processor) for applying the first voltage to the memory cell as part of the access operation based at least in part on coupling the first conductive line of the decoder with the access line.[0087] Some examples of the method 1000 and the apparatus described herein may further include operations, features, means, or instructions for applying a second voltage to a second conductive line of the decoder as part of the access operation, the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line associated with the memory cell, wherein applying the first voltage to the memory cell is based at least in part on applying the second voltage to the second conductive line. Some examples of the method 1000 and the apparatus described herein may further include operations, features, means, or instructions for selecting the memory cell based at least in part on an intersection of the first voltage and the second voltage, wherein a signal applied to the memory cell as part of the access operation has a positive polarity or a negative polarity.[0088] Some examples of the method 1000 and the apparatus described herein may further include operations, features, means, or instructions for receiving a command comprising an instruction to perform the access operation on the memory cell. Some examples of the method 1000 and the apparatus described herein may further include operations, features, means, or instructions for identifying an address of the memory cell based at least in part on receiving the command, wherein applying the second voltage to the second conductive line is based at least in part on identifying the address. Some examples of the method 1000 and the apparatus described herein may further include operations, features, means, or instructions for outputting a logic state stored in the memory cell based at least in part on applying the first voltage to the memory cell. Some examples of the method 1000 and the apparatus described herein may further include operations, features, means, or instructions for storing a logic state in the memory cell based at least in part on applying the first voltage to the memory cell.[0089] It should be noted that the methods described above describe possible implementations, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible. Further, portions from two or more of the methods may be combined.[0090] In some examples, an apparatus or device may perform aspects of the functions described herein. The device may include a substrate, an array of memory cells coupled with the substrate, and a decoder coupled with the substrate and configured to apply a voltage to an access line of the array of memory cells as part of an access operation.
In some examples, the decoder may include a first conductive line configured to carry the voltage applied to the access line of the array of memory cells and a doped material extending between the first conductive line and the access line of the array of memory cells in a first direction away from a surface of the substrate, the doped material configured to selectively couple the first conductive line of the decoder with the access line of the array of memory cells.[0091] In some examples, the device may include a contact extending between the doped material and the access line of the array of memory cells, wherein the doped material selectively couples the first conductive line of the decoder with the contact. In some examples, the doped material is directly coupled with the first conductive line.[0092] In some examples, the decoder may include a conductive material coupled with the doped material and configured to carry a second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line of the array of memory cells. In some examples, the conductive material is directly coupled with a surface of the doped material. In some examples, the conductive material extends in a second direction parallel to the surface of the substrate.[0093] In some examples, the decoder may include a second conductive line configured to carry the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line of the array of memory cells. In some examples, the decoder may include a contact extending between the second conductive line and the conductive material, the contact configured to carry the second voltage from the second conductive line to the conductive material as part of the access operation.[0094] In some examples, the doped material and the conductive material comprise a transistor configured to selectively couple the first conductive line of the decoder and the access line of the array of memory cells. In some examples, the doped material extends orthogonally from a plane defined by the surface of the substrate. In some examples, the doped material has a first doped region and a second doped region, wherein the first doped region is a first distance away from the surface of the substrate and the second doped region is a second distance away from the surface of the substrate different than the first distance. In some examples, the doped material is polysilicon. In some examples, the array of memory cells comprises self-selecting memory cells.[0095] In some examples, an apparatus or device may perform aspects of the functions described herein. The device may include a substrate and a decoder coupled with the substrate and configured to select a memory cell as part of an access operation. In some examples, the decoder may include a first conductive line configured to carry a voltage for selecting the memory cell as part of the access operation and a doped material extending between the first conductive line and a contact that couples the decoder with the memory cell and configured to selectively couple the first conductive line with the contact as part of the access operation.[0096] In some examples, the first conductive line is directly coupled with the doped material.
In some examples, the decoder may include a conductive material coupled with the doped material and configured to carry a second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the memory cell. In some examples, the conductive material extends parallel to a plane defined by a surface of the substrate.[0097] In some examples, the decoder may include a second conductive line configured to carry the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with an access line of the memory cell. In some examples, the doped material is polysilicon and extends orthogonally from a plane defined by a surface of the substrate.[0098] In some examples, an apparatus or device may perform aspects of the functions described herein. The device may include a substrate, an array of memory cells coupled with the substrate and comprising a first set of access lines and a second set of access lines, a first decoder coupled with the substrate and the array of memory cells, the first decoder configured to apply a first voltage to a first access line of the first set as part of an access operation, and a second decoder coupled with the substrate and the array of memory cells, the second decoder configured to apply a second voltage to a second access line of the second set as part of the access operation. In some examples, the first decoder may include a first conductive line configured to carry the first voltage for the first access line as part of the access operation and a doped material extending between the first conductive line and one of the first set of access lines in a first direction perpendicular to a surface of the substrate, the doped material configured to selectively couple the first conductive line with the first access line as part of the access operation.[0099] In some examples, the second decoder may include a second conductive line configured to carry the second voltage for selecting a memory cell of the array of memory cells as part of the access operation and a second doped material extending between the second conductive line and one of the second set of access lines of the array of memory cells in the first direction perpendicular to the surface of the substrate, the second doped material configured to selectively couple the second conductive line with the second access line of the array of memory cells as part of the access operation.[0100] In some examples, the second decoder may include a second conductive line configured to carry the second voltage for selecting a memory cell of the array of memory cells as part of the access operation and a second doped material extending in a second direction parallel to the surface of the substrate, the second doped material configured to selectively couple the second conductive line with the second access line of the array of memory cells as part of the access operation.[0101] In some examples, the first decoder is positioned between the substrate and the array of memory cells. In some examples, the array of memory cells is positioned between the substrate and the first decoder. In some examples, the first decoder comprises a plurality of nMOS transistors and the second decoder comprises a plurality of pMOS transistors. In some examples, the first set of access lines comprises word lines.
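As an informal illustration of the two-decoder selection just described, the following sketch shows how only the cell at the intersection of the driven word line and bit line sees the full select bias. The function name, the array size, and the specific voltage values are assumptions made for illustration, not values from the disclosure.

```python
# Hypothetical sketch: a first (e.g., nMOS-based) decoder drives one word
# line and a second (e.g., pMOS-based) decoder drives one bit line; the
# addressed cell sits at the intersection. The 4x4 size and the +/-1.0 V
# levels are illustrative only.

WORD_LINES, BIT_LINES = 4, 4

def select_cell(row: int, col: int):
    """Return the per-cell bias for a cross-point access."""
    v_word = [1.0 if r == row else 0.0 for r in range(WORD_LINES)]   # first decoder
    v_bit = [-1.0 if c == col else 0.0 for c in range(BIT_LINES)]    # second decoder
    # Only the cell at (row, col) sees the full differential; half-selected
    # cells on the driven lines see a partial bias, and the polarity across
    # the cell depends on which voltage is applied to which access line.
    return [[v_word[r] - v_bit[c] for c in range(BIT_LINES)] for r in range(WORD_LINES)]

bias = select_cell(1, 2)
print(bias[1][2])  # 2.0: full select voltage at the intersection
print(bias[1][0])  # 1.0: half-selected cell on the driven word line
print(bias[0][0])  # 0.0: unselected cells see no net bias
```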
In some examples, the array of memory cells comprises a cross-point architecture, a pillar architecture, or a planar architecture.[0102] In some examples, an apparatus or device may perform aspects of the functions described herein. The device may include a decoder configured to apply a voltage as part of an access operation of a memory cell. The decoder may include a first conductive line configured to carry the voltage for selecting the memory cell as part of the access operation, a doped material coupled with the first conductive line and a contact, the doped material configured to selectively couple the first conductive line with the contact, and a controller. In some examples, the controller may be operable, as part of the access operation of the memory cell, to select the memory cell by applying a first voltage to the first conductive line of the decoder, couple the first conductive line of the decoder with an access line associated with the memory cell based at least in part on selecting the memory cell, and apply the first voltage to the memory cell based at least in part on coupling the first conductive line of the decoder with the access line.[0103] In some examples, the controller may be operable to apply a second voltage to a second conductive line of the decoder as part of the access operation, the second voltage for causing the doped material to selectively couple the first conductive line of the decoder with the access line associated with the memory cell, wherein applying the first voltage to the memory cell is based at least in part on applying the second voltage to the second conductive line.[0104] Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths.[0105] The terms“electronic communication,”“conductive contact,”“connected,” and “coupled” may refer to a relationship between components that supports the flow of signals between the components. Components are considered in electronic communication with (or in conductive contact with or connected with or coupled with) one another if there is any conductive path between the components that can, at any time, support the flow of signals between the components. At any given time, the conductive path between components that are in electronic communication with each other (or in conductive contact with or connected with or coupled with) may be an open circuit or a closed circuit based on the operation of the device that includes the connected components. The conductive path between connected components may be a direct conductive path between the components or the conductive path between connected components may be an indirect conductive path that may include intermediate components, such as switches, transistors, or other components. In some cases, the flow of signals between the connected components may be interrupted for a time, for example, using one or more intermediate components such as switches or transistors. 
[0106] The term“coupling” refers to the condition of moving from an open-circuit relationship between components in which signals are not presently capable of being communicated between the components over a conductive path to a closed-circuit relationship between components in which signals can be communicated between components over the conductive path. When a component, such as a controller, couples other components together, the component initiates a change that allows signals to flow between the other components over a conductive path that previously did not permit signals to flow.[0107] As used herein, the term“substantially” means that the modified characteristic (e.g., a verb or adjective modified by the term substantially) need not be absolute but is close enough to achieve the advantages of the characteristic.[0108] As used herein, the term“electrode” may refer to an electrical conductor, and in some cases, may be employed as an electrical contact to a memory cell or other component of a memory array. An electrode may include a trace, wire, conductive line, conductive layer, or the like that provides a conductive path between elements or components of memory array 102.[0109] The devices discussed herein, including the memory array, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, etc. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or epitaxial layers of semiconductor materials on another substrate. The conductivity of the substrate, or sub-regions of the substrate, may be controlled through doping using various chemical species including, but not limited to, phosphorous, boron, or arsenic. Doping may be performed during the initial formation or growth of the substrate, by ion-implantation, or by any other doping means.[0110] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term“exemplary” used herein means“serving as an example, instance, or illustration,” and not“preferred” or“advantageous over other examples.” The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid obscuring the concepts of the described examples.[0111] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.[0112] Information and signals described herein may be represented using any of a variety of different technologies and techniques.
For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0113] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0114] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. Also, as used herein, including in the claims,“or” as used in a list of items (for example, a list of items prefaced by a phrase such as“at least one of” or“one or more of”) indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase“based on” shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as“based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase“based on” shall be construed in the same manner as the phrase“based at least in part on.”[0115] Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[0116] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
This description provides a circuit (200) including a squelch detector (225) having a first input coupled to a first node (260) and configured to receive a positive component of a differential signal with a floating center tap, a second input coupled to a second node (262) and configured to receive a negative component of the differential signal, and an output coupled to a logic circuit (230), a first resistor (235) coupled between the first node and a third node (268), a second resistor (240) coupled between the third node and the second node, a third resistor (245) coupled between the first node and a fourth node (270), a fourth resistor (250) coupled between the fourth node and the second node, a capacitor (255) coupled between the fourth node and a ground terminal (272), a comparator (220) having a first input coupled to the third node, a second input coupled to a fifth node (274), and an output coupled to the logic circuit.
CLAIMS
What is claimed is:
1. A circuit, comprising:
a first amplifier having a first input coupled to a first node, a second input coupled to a second node, a first output, and a second output;
a second amplifier having a first input coupled to the first output of the first amplifier, a second input coupled to the second output of the first amplifier, a first output, and a second output;
a third amplifier having a first input coupled to the first output of the second amplifier, a second input coupled to the second output of the second amplifier, a control input, a first output coupled to a third node, and a second output coupled to a fourth node;
a squelch detector having a first input coupled to the first node, a second input coupled to the second node, and an output;
a first resistor coupled between the first node and a fifth node;
a second resistor coupled between the fifth node and the second node;
a comparator having a first input coupled to the fifth node, a second input coupled to a sixth node, and an output; and
a logic circuit having a first input coupled to the output of the squelch detector, a second input coupled to the output of the comparator, and an output coupled to the control input of the third amplifier.
2. The circuit of claim 1, further comprising:
a third resistor coupled between the first node and a seventh node;
a fourth resistor coupled between the seventh node and the second node; and
a capacitor coupled between the seventh node and a ground terminal.
3. The circuit of claim 1, wherein the first node is configured to couple to a positive terminal of a differential signal line of an embedded Universal Serial Bus (USB) (eUSB2) system, wherein the second node is configured to couple to a negative terminal of the differential signal line of the eUSB2 system, wherein the third node is configured to couple to a positive terminal of a differential signal line of a legacy USB system, and wherein the fourth node is configured to couple to a negative terminal of the differential signal line of the legacy USB system.
4. The circuit of claim 1, wherein a reference voltage (VREF) is received at the sixth node, wherein VREF has a value greater than an ideal common mode voltage (Vcm) of high-speed differential communication and less than a value indicative of a logical high value of a single-ended signal.
5. The circuit of claim 4, wherein the comparator is configured to compare a value of a signal present at the fifth node to VREF to indicate whether the circuit is receiving the single-ended signal.
6. The circuit of claim 4, wherein a skewed single-ended signal pair having rising edge transitions is received, wherein the squelch detector identifies the skewed single-ended signal pair as high-speed differential communication, and wherein the comparator disproves the squelch detector by determining that the value of the signal present at the fifth node exceeds VREF.
7. The circuit of claim 4, wherein a differential input signal is received, wherein the squelch detector identifies the differential input signal as high-speed differential communication, and wherein the comparator verifies the squelch detector by determining that the value of the signal present at the fifth node is not greater than VREF.
8.
A circuit, comprising:
a squelch detector having a first input coupled to a first node, a second input coupled to a second node, and an output, wherein the first node is configured to receive a positive component of a differential input signal with a floating center tap, and wherein the second node is configured to receive a negative component of the differential input signal with the floating center tap;
a first resistor coupled between the first node and a third node;
a second resistor coupled between the third node and the second node;
a third resistor coupled between the first node and a fourth node;
a fourth resistor coupled between the fourth node and the second node;
a first capacitor coupled between the fourth node and a ground terminal;
a comparator having a first input coupled to the third node, a second input coupled to a fifth node, and an output; and
a logic circuit having a first input coupled to the output of the squelch detector, a second input coupled to the output of the comparator, and an output.
9. The circuit of claim 8, wherein the first node is configured to couple to a positive terminal of a differential signal line of an embedded Universal Serial Bus (USB) (eUSB2) system, and wherein the second node is configured to couple to a negative terminal of the differential signal line of the eUSB2 system.
10. The circuit of claim 8, further comprising a first amplifier having a first input coupled to the first node, a second input coupled to the second node, a first output, and a second output.
11. The circuit of claim 10, further comprising:
a second amplifier having a first input coupled to the first output of the first amplifier, a second input coupled to the second output of the first amplifier, a first output, and a second output; and
a third amplifier having a first input coupled to the first output of the second amplifier, a second input coupled to the second output of the second amplifier, a first output coupled to a sixth node, and a second output coupled to a seventh node.
12. The circuit of claim 11, wherein the sixth node is configured to couple to a positive terminal of a differential signal line of a legacy Universal Serial Bus (USB) system, and wherein the seventh node is configured to couple to a negative terminal of the differential signal line of the legacy USB system.
13. The circuit of claim 8, wherein a reference voltage (VREF) is received at the fifth node, wherein VREF has a value greater than an ideal common mode voltage (Vcm) of high-speed differential communication and less than a value indicative of a logical high value of a single-ended signal.
14. The circuit of claim 13, wherein the comparator is configured to compare a value of a signal present at the third node to VREF to indicate whether the circuit is receiving the single-ended signal.
15. The circuit of claim 13, wherein a skewed single-ended signal pair having rising edge transitions is received, wherein the squelch detector identifies the skewed single-ended signal pair as high-speed differential communication, and wherein the comparator disproves the squelch detector by determining that the value of the signal present at the third node exceeds VREF.
16. The circuit of claim 13, wherein a differential input signal is received, wherein the squelch detector identifies the differential input signal as high-speed differential communication, and wherein the comparator verifies the squelch detector by determining that the value of the signal present at the third node is not greater than VREF.
17.
A method, comprising:
receiving, at a circuit, data via an idle differential signal line;
performing a squelch detection on the differential signal line;
determining a value of a common mode voltage (Vcm) with reference to a reference voltage (VREF) by performing a comparison; and
verifying a result of the squelch detection with a result of the comparison.
18. The method of claim 17, wherein VREF has a value greater than an ideal Vcm of high-speed differential communication and less than a value indicative of a logical high value of a single-ended signal.
19. The method of claim 18, further comprising receiving a skewed single-ended signal pair having rising edge transitions, wherein the result of the squelch detection identifies the skewed single-ended signal pair as high-speed differential communication, and wherein the result of the comparison disproves the result of the squelch detection by indicating that Vcm exceeds VREF.
20. The method of claim 17, further comprising receiving a differential input signal, wherein the result of the squelch detection identifies the differential input signal as high-speed differential communication, and wherein the result of the comparison verifies the result of the squelch detection by indicating that Vcm is not greater than VREF.
EMBEDDED UNIVERSAL SERIAL BUS 2 REPEATER
SUMMARY
[0001] This description provides a circuit. In some examples, the circuit includes a first amplifier, a second amplifier, a third amplifier, a squelch detector, a first resistor, a second resistor, a comparator, and a logic circuit. The first amplifier has a first input coupled to a first node, a second input coupled to a second node, a first output, and a second output. The second amplifier has a first input coupled to the first output of the first amplifier, a second input coupled to the second output of the first amplifier, a first output, and a second output. The third amplifier has a first input coupled to the first output of the second amplifier, a second input coupled to the second output of the second amplifier, a control input, a first output coupled to a third node, and a second output coupled to a fourth node. The squelch detector has a first input coupled to the first node, a second input coupled to the second node, and an output. The first resistor is coupled between the first node and a fifth node. The second resistor is coupled between the fifth node and the second node. The comparator has a first input coupled to the fifth node, a second input coupled to a sixth node, and an output. The logic circuit has a first input coupled to the output of the squelch detector, a second input coupled to the output of the comparator, and an output coupled to the control input of the third amplifier.[0002] Other aspects of this description provide a circuit. In some examples, the circuit includes a squelch detector, a first resistor, a second resistor, a third resistor, a fourth resistor, a first capacitor, a comparator, and a logic circuit. The squelch detector has a first input coupled to a first node, a second input coupled to a second node, and an output, wherein the first node is configured to receive a positive component of a differential input signal with a floating center tap, and wherein the second node is configured to receive a negative component of the differential input signal with the floating center tap. The first resistor is coupled between the first node and a third node. The second resistor is coupled between the third node and the second node. The third resistor is coupled between the first node and a fourth node. The fourth resistor is coupled between the fourth node and the second node. The first capacitor is coupled between the fourth node and a ground terminal. The comparator has a first input coupled to the third node, a second input coupled to a fifth node, and an output. The logic circuit has a first input coupled to the output of the squelch detector, a second input coupled to the output of the comparator, and an output.[0003] Other aspects of this description provide a method. In some examples, the method includes receiving, at a circuit, data via an idle differential signal line, performing a squelch detection on the differential signal line, determining a value of a common mode voltage (Vcm) with reference to a reference voltage (VREF) by performing a comparison, and verifying a result of the squelch detection with a result of the comparison.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] For a detailed description of various examples, reference will now be made to the accompanying drawings in which:[0005] FIG. 1 shows a block diagram of an illustrative system in various examples;[0006] FIG. 2 shows a schematic diagram of an illustrative circuit in various examples;[0007] FIG.
3 shows a diagram of illustrative signal waveforms in various examples; and[0008] FIG. 4 shows a flowchart of an illustrative method in various examples.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0009] Universal Serial Bus (USB) is a standard establishing specifications for interconnect cabling, connectors, and communication protocols. As referred to herein, USB refers to any version of the USB specification, including any amendments or supplements, certified by the USB Implementers Forum (USB IF) or any suitable body who replaces and/or aids the USB IF in its role overseeing the USB specification, whether now existing or later developed. In at least one example, USB, as referred to herein, encompasses any one or more of the USB 1.0 specification, USB 2.0 specification, USB 3.0 specification, USB 4.0 specification, or any derivatives thereof, such as amended or“.x” variations of the above specifications. Also, as referred to herein, legacy USB refers to USB 2.x and/or USB 1.x. Embedded USB (eUSB), in some examples, refers to eUSB2. While reference is made herein to eUSB2, in various examples the teachings of the description are further applicable to other versions of eUSB that are extensions of, alternatives to, derivatives of, or otherwise share at least some commonalities with, or similarities to, eUSB2. Accordingly, while eUSB2 is referred to herein in an exemplary manner, the description is, in some examples, not limited to implementation in an eUSB environment, in an eUSB2 environment, or in a USB environment.[0010] At its inception, USB was primarily intended for implementation in specifying standards for connection and communication between personal computers and peripheral devices. However, as adoption of the USB standard has expanded and implementation in computing devices of support for the USB standard has gained in popularity, efforts have been made to extend and expand the applicability of USB. For example, while initially establishing specifications for communications between personal computers and peripheral devices, USB has expanded to communication between peripheral devices, between personal computers, and other use cases. As a result of such widespread implementation and use of USB, efforts are being further made to utilize USB as a communication protocol among individual subsystems or circuits (e.g., such as a system-on-a-chip (SoC)). Such implementations are sometimes referred to as eUSB2. New challenges arise in implementing eUSB2. For example, at a circuit level, computing devices often operate at voltage levels that vary from those of legacy USB, creating an impediment to direct communication between eUSB2 and legacy USB systems. To mitigate this impediment, an eUSB2 repeater operates as a bridge or non-linear redriver between eUSB2 and legacy USB systems, or vice versa, to translate between legacy USB signaling voltage levels that are customarily about 3.3 volts (V) and eUSB2 signaling voltage levels that are circuit-level (e.g., silicon appropriate voltages) such as about 1.0 V, 1.2 V, or any other suitable value less than 3.3 V. In some examples, the signaling voltage levels are determined according to values of a supply voltage for a respective system.
For example, a legacy USB system is powered by a 3.3 V, or any other suitable value, supply voltage and an eUSB2 system is powered by a 1.0 V or 1.2 V, or any other suitable value, supply voltage.[0011] When eUSB2 differential signal lines are idle, in some examples, single-ended signaling is permitted over one or both of the differential signal lines (e.g., such that instead of one differential signal being sent over the differential signal lines, two single-ended signals are sent over the differential signal lines). Also, in some examples, single-ended signaling is used to enter, or exit from, various modes of differential signaling. For example, some eUSB2 systems include a low-speed operation mode and a high-speed operation mode. When the eUSB2 system is operating in the high-speed operation mode, in some examples, a single-ended logical high signal (e.g., a signal having a value of about 1 V) is transmitted on each of the differential signal lines to indicate an exit from the high-speed operation mode. Under ideal conditions, the single-ended logical high signal is transmitted substantially simultaneously on each of the differential signal lines to prevent the single-ended signals from appearing as differential input. However, in actual application environments, skew often exists between the single-ended logical high signals such that, for at least some period of time, the single-ended logical high signal is asserted and present on one of the differential signal lines but a single-ended logical high signal is not asserted and present on another of the differential signal lines. The skew is caused, in various examples, by non-ideal operation of a transmitter transmitting the single-ended logical high signal over the differential signal lines, delay introduced by various couplings associated with the differential signal lines, propagation delay of the differential signal lines, etc. For a period of time between the single-ended logical high signal being present on one of the differential signal lines and the single-ended logical high signal becoming present on the other of the differential signal lines, in some examples, the differential signal lines appear to a device, such as an eUSB2 repeater, to be carrying the beginning of data communication (e.g., such as the beginning of a high-speed packet or a start of packet (SOP) indicator), which is contrary to the intended operation of exiting the high-speed mode of operation. Accordingly, in some examples, the skew between the single-ended logical high signals on the differential signal lines causes erroneous detections and/or operations of a device receiving the single-ended logical high signals. The erroneous detections (such as erroneous detection of the skewed single-ended logical high signals as a differential signal), in some examples, cause the erroneous operations (such as an output corresponding to the received inputs being undefined and an unknown signal, potentially affecting a downstream device that receives the output).[0012] In some eUSB2 repeater implementations, a clock data recovery (CDR) circuit or a phase locked loop (PLL) determines clock timing information of a signal received by the eUSB2 repeater and, based on that clock timing information, the eUSB2 repeater reconstructs a received signal for subsequent transmission.
Knowledge of this clock information, in some examples, enables compensation for skew in signals, thereby preventing, or at least partially mitigating, the erroneous detections and/or operations of a device, as described above. However, both a CDR circuit and a PLL are comparatively large components of an eUSB2 repeater in terms of footprint (e.g., physical surface area of a component die) with respect to a remainder of the eUSB2 repeater, increasing both cost to manufacture the eUSB2 repeater and power consumed by the eUSB2 repeater. In at least some aspects, goals of implementation of eUSB2 include providing communication according to the USB specifications in smaller, lower-power environments than legacy USB, which runs contrary to the size and power requirements of both the CDR circuit and the PLL. Accordingly, in at least some eUSB2 repeater implementations it is desirable to accurately detect receipt of skewed single-ended logical high signals as opposed to a differential input signal (e.g., such as a high-speed SOP indicator) to provide for accurate operation of the eUSB2 repeater.[0013] Some aspects of the description provide a circuit. The circuit is, in some examples, suitable for use in interfacing between eUSB2 and USB interfaces. Particularly, in some examples the circuit is an eUSB2 to USB repeater. In other examples, the circuit is a USB to eUSB2 repeater. For example, the circuit provides level-shifting from eUSB2 voltage levels to USB voltage levels and/or from USB voltage levels to eUSB2 voltage levels. As such, in some examples the circuit is viewed as a buffer and/or a level-shifter. In some examples, the circuit further provides support for one or more elements of USB communication, such as accurate detection of both a high-speed SOP indicator and a pair of skewed single-ended logical high signals. For example, the circuit detects a difference in voltages present on differential signal lines via a squelch detector (e.g., determining whether a differential exceeding a threshold amount exists between a value of a signal present on one of the differential signal lines and a value of a signal present on another of the differential signal lines). In some examples, when the squelch detector detects that the differential signal lines are unsquelched (the differential between the value of the signal present on one of the differential signal lines and the value of the signal present on another of the differential signal lines exceeds the threshold), the differential signal lines are active and data is being received and the squelch detector outputs a logical high signal. When the differential signal lines are squelched (the differential between the value of the signal present on one of the differential signal lines and the value of the signal present on another of the differential signal lines does not exceed the threshold), the differential signal lines are idle and the squelch detector outputs a logical low signal.
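A minimal behavioral model of this squelch decision is sketched below; the 0.1 V squelch threshold is an assumed, illustrative value, and the roughly 1 V single-ended logical high follows the level mentioned earlier.

```python
# Minimal sketch of the squelch decision described above. The threshold is
# an assumption for illustration; only the comparison logic is modeled.

SQUELCH_THRESHOLD = 0.1  # volts, assumed value

def squelch_out(ed_plus: float, ed_minus: float) -> int:
    """1 = unsquelched (lines appear active), 0 = squelched (lines idle)."""
    return 1 if abs(ed_plus - ed_minus) > SQUELCH_THRESHOLD else 0

print(squelch_out(0.2, 0.0))  # 1: genuine differential swing
print(squelch_out(0.0, 0.0))  # 0: idle lines
print(squelch_out(1.0, 0.0))  # 1: skew window of single-ended highs looks like data
print(squelch_out(1.0, 1.0))  # 0: both single-ended highs asserted
```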
Thus, to the squelch detector, when skewed single-ended logical high signals are received, a period of time between receipt of a rising edge transition of a first of the single-ended logical high signals and a receipt of a rising edge transition of a second of the single-ended logical high signals appears as active differential signal lines receiving differential data.[0014] To prevent inaccurate operation due to an output of the squelch detector seemingly indicating the existence of differential input data, the circuit verifies and/or validates the output of the squelch detector based on a common mode voltage (Vcm) of the differential signal lines. For example, when Vcm exceeds a threshold, a comparator of the circuit outputs a logical high signal. When Vcm does not exceed the threshold, the comparator outputs a logical low signal. The threshold is, in some examples, determined according to a highest Vcm that is not intended to be detected as a single-ended logical high signal and a lowest output low voltage of an upstream device transmitting on the differential signal lines. When the squelch detector outputs a logical high signal and the comparator outputs a logical low signal, the circuit determines, such as through one or more logic components, that a differential input signal (such as a high-speed SOP indicator) is being received. When the squelch detector outputs a logical high signal and the comparator outputs a logical high signal, the circuit determines that the output of the squelch detector is an erroneous indication of differential data input and instead single-ended logical high signals are being received via the differential signal lines. When the squelch detector outputs a logical low signal and the comparator outputs a logical low signal, the circuit determines that data is not being received by the circuit. When the squelch detector outputs a logical low signal and the comparator outputs a logical high signal, the circuit determines that single-ended logical high signals are being received via the differential signal lines. In some examples, the common mode comparator also facilitates detection of an absence of a high-speed differential input signal when receiving a single-ended one, which, in some examples, renders a squelch detector ineffective and prone to erroneous detection resulting from Vcm of the input signal exceeding a valid common mode range for the squelch detector.[0015] Turning now to FIG. 1, a block diagram of an illustrative system 100 is shown. In some examples, the system 100 is illustrative of a computing device, or elements of a computing device. For example, the system 100 includes a processor 105, an eUSB2 device 110, an eUSB2 repeater 115, and a USB device 120. The USB device 120 is a legacy USB device, as described elsewhere herein. In some examples, one or both of the eUSB2 device 110 or the USB device 120 is implemented external to the system 100 and configured to couple to the system 100 through an appropriate interface (e.g., such as a port and receptacle suitable for performing communication according to eUSB2 or USB protocol, respectively). The processor 105 is, in some examples, a SoC. The eUSB2 device 110 is any device operating in both ingress and egress communication directions according to signal voltage level specifications for eUSB2. The USB device 120 is any device operating in both ingress and egress communication directions according to signal voltage level specifications for legacy USB.
For example, in at least some implementations the USB device 120 is a peripheral such as a user input device (e.g., a sensor, a scanner, an imaging device, a microphone, etc.), an output device (e.g., a printer, speakers, etc.), a storage device, or any other peripheral, component, or device suitable for communicating with the processor 105.[0016] The eUSB2 repeater 115 communicatively couples the processor 105 to the USB device 120 and vice versa, converting signals appropriate for the processor 105 to signals appropriate for the USB device 120 and vice versa. For example, in some implementations signaling in the processor 105 is performed in a range of about 0.8 V to about 1.4 V. Similarly, in some implementations, signaling in the USB device 120 is performed at about 3.3 V or about 5 V. In some examples, the eUSB2 repeater 115 operates as a bit-level repeater, receiving signals from one of the processor 105 or USB device 120 and converting the signals for use by the other of the processor 105 or USB device 120 (e.g., by shifting a voltage level of the signals upward or downward based on a direction of the communications). In some examples, a data packet communicated in the system 100 begins with an SOP indicator.[0017] In some examples, differential eUSB2 input signal communication lines of the eUSB2 repeater 115 transition from an idle state to an active state when the eUSB2 repeater 115 receives the SOP indicator via the differential eUSB2 input signal lines. In other examples, while the differential eUSB2 input signal lines remain in the idle state, single-ended communication is permitted on each individual line of the differential eUSB2 input signal lines. In some examples, data communicated via the single-ended communication is used to control operation of the eUSB2 repeater 115. For example, while the eUSB2 repeater 115 is operating in a high-speed mode of operation, receipt of single-ended logical high signals via each individual line of the differential eUSB2 input signal lines indicates and/or commands an exit from the high-speed mode of operation to a low-speed mode of operation. In some examples, the single-ended logical high signals are skewed, as described above, and the skewed single-ended logical high signals appear to the eUSB2 repeater 115 as similar to a beginning of high-speed data communication. This, in some examples, causes the eUSB2 repeater 115 to not interpret the single-ended logical high signals as an instruction to exit the high-speed mode of operation, therefore causing the eUSB2 repeater 115 to remain in the high-speed mode of operation and, in some examples, erroneously activating a receiver and/or transmitter of the eUSB2 repeater 115, propagating erroneous data and/or a glitch to the USB device 120.[0018] Accordingly, in some examples the eUSB2 repeater 115 includes a comparator 125 that is configured to determine whether signals present on the differential eUSB2 input signal lines are components of a differential signal or are single-ended communications. In some examples, the determination is made according to Vcm of the signals present on the differential eUSB2 input signal lines. In some examples, the comparator 125 does not itself determine whether the signals present on the differential eUSB2 input signal lines are components of a differential signal or are single-ended communications.
Instead, the comparator 125 provides an output signal to another component of the eUSB2 repeater 115, such as a logic circuit (not shown), that determines whether the signals present on the differential eUSB2 input signal lines are components of a differential signal or are single-ended communications based on any one or more signals including at least the output of the comparator 125.[0019] Turning now to FIG. 2, a schematic diagram of an illustrative circuit 200 is shown. In some examples, the circuit 200 is suitable for implementation as the eUSB2 repeater 115 of the system 100 of FIG. 1. The circuit 200, in some examples, is representative of an eUSB2 repeater having functionality to receive data from an eUSB2 system and provide data to a legacy USB system. The circuit 200, in some examples, includes an amplifier 205, an amplifier 210, an amplifier 215, a comparator 220, a squelch detector 225, a logic circuit 230, a resistor 235, a resistor 240, a resistor 245, a resistor 250, and a capacitor 255. In some examples, the amplifier 205 is considered a receiver (RX) of the circuit 200, the amplifier 210 is considered a pre-amplifier (Pre-Amp) of the circuit 200, and the amplifier 215 is considered a transmitter (TX) of the circuit 200. In some examples, the amplifier 210 is omitted from the circuit 200. In some examples, the circuit 200 expressly does not include a CDR circuit or a PLL.[0020] In an example architecture of the circuit 200, the amplifier 205 has a positive differential input coupled to a node 260 and a negative differential input coupled to a node 262. The amplifier 210 has a positive differential input coupled to a positive differential output of the amplifier 205 and a negative differential input coupled to a negative differential output of the amplifier 205. The amplifier 215 has a positive differential input coupled to a positive differential output of the amplifier 210, a negative differential input coupled to a negative differential output of the amplifier 210, a positive differential output coupled to a node 264, and a negative differential output coupled to a node 266. The comparator 220 has a first input coupled to a node 268 and a second input coupled to a node 274. The squelch detector 225 has a first input coupled to the node 260 and a second input coupled to the node 262. An output of the comparator 220 and an output of the squelch detector 225 are each coupled to respective inputs of the logic circuit 230. An output of the logic circuit 230 is coupled to a control terminal of the amplifier 215. The resistor 235 is coupled between the node 260 and the node 268 and the resistor 240 is coupled between the node 268 and the node 262. The resistor 245 is coupled between the node 260 and a node 270 and the resistor 250 is coupled between the node 270 and the node 262. The capacitor 255 is coupled between the node 270 and a ground terminal 272.[0021] In an example of operation of the circuit 200, a differential input signal is received at the node 260 and the node 262. For example, a positive component of the differential input signal (eD+) is received at the node 260 and a negative component of the differential input signal (eD-) is received at the node 262. In this regard, in some examples the node 260 and the node 262 collectively comprise differential eUSB2 input ports and/or differential eUSB2 input signal lines of the circuit 200.
The amplifier 205 amplifies the differential input signal, the amplifier 210 amplifies a result of that amplification, and the amplifier 215 in turn amplifies a result of that second amplification to provide a differential output signal at the node 264 and the node 266. A positive component of the differential output signal (D+) is output at the node 264 and a negative component of the differential output signal (D-) is output at the node 266. In this regard, in some examples the node 264 and the node 266 collectively comprise differential USB output ports and/or differential USB output signal lines of the circuit 200. In some examples, the amplifier 215 is powered by a different power source and/or receives a different supply voltage than the amplifier 205, for example, such that the circuit 200 uses a dual supply to provide level-shifting functionality between the differential eUSB2 input ports and the differential USB output ports. Also, in some examples the amplifier 215 is subject to control of the logic circuit 230. For example, the logic circuit 230 controls when the amplifier 215 is active, amplifying signals provided by the amplifier 210 to provide the differential output signal at the node 264 and the node 266, respectively, or when the amplifier 215 is turned off and is not amplifying signals provided by the amplifier 210 to provide the differential output signal at the node 264 and the node 266, respectively.[0022] Each of the resistor 235, the resistor 240, the resistor 245, and the resistor 250 has approximately the same resistance value such that they are balanced and a voltage present at the node 268 is approximately equal to Vcm of eD+ and eD-. Also, a center tap of eUSB2 differential signaling lines is floating, meaning the center tap is not referenced to a ground potential. Accordingly, a voltage present at the node 270 is approximately equal to Vcm ref based on Vcm and a voltage (Vc) of the capacitor 255. In some examples, the capacitor 255 has a capacitance of about 50 picofarads (pF) to create the floating center tap between eD+ and eD-.[0023] The comparator 220 is configured to compare Vcm of eD+ and eD- to a reference voltage (VREF) received at the node 274. In some examples, VREF has a value determined according to a value for representing a logical high signal in single-ended signaling via the differential eUSB2 input signal lines and a value of Vcm for high-speed differential signaling via the differential eUSB2 input signal lines. For example, VREF is greater than the value of Vcm for high-speed differential signaling via the differential eUSB2 input signal lines and is less than the value for representing a logical high signal in single-ended signaling via the differential eUSB2 input signal lines. In some examples, VREF has a value of about 700 millivolts (mV). In other examples, VREF has a value of about 500 mV, about 400 mV, or any other suitable voltage greater than Vcm of the high-speed differential signals. When Vcm is greater than VREF, the comparator 220 outputs a signal having a logical high value. When Vcm is less than VREF, the comparator 220 outputs a signal having a logical low value.[0024] The squelch detector 225, in some examples, outputs a logic high value signal when a differential between eD+ and eD- exceeds a threshold (e.g., a squelch threshold) and outputs a logical low value signal when the differential between eD+ and eD- is less than the threshold.
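As a back-of-the-envelope illustration of the Vcm sense-and-compare path of the comparator 220 described above, the following sketch uses the 700 mV VREF and roughly 200 mV ideal high-speed Vcm given as example values; the function names are illustrative only, not from the disclosure.

```python
# Sketch of the comparator 220 path: matched resistors 235 and 240 place
# the average of eD+ and eD- at node 268, which is compared against VREF.

VREF = 0.7  # volts, example value from the description

def node_268_voltage(ed_plus: float, ed_minus: float) -> float:
    # Equal resistances form a midpoint divider: V(268) = (eD+ + eD-) / 2.
    return (ed_plus + ed_minus) / 2.0

def comparator_220_out(ed_plus: float, ed_minus: float) -> int:
    """1 when Vcm exceeds VREF, 0 otherwise."""
    return 1 if node_268_voltage(ed_plus, ed_minus) > VREF else 0

# High-speed differential data with an ideal Vcm of about 0.2 V -> low:
print(comparator_220_out(0.3, 0.1))  # 0
# Both lines at a single-ended logical high of about 1 V -> high:
print(comparator_220_out(1.0, 1.0))  # 1
```

With these example values, the comparator output distinguishes the roughly 0.2 V common mode of high-speed differential signaling from the roughly 1 V common mode of asserted single-ended logical highs, which is the property the verification logic relies on.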
Accordingly, in some examples an output of the squelch detector 225 indicates whether differential data is being received by the circuit 200.[0025] In some examples, the squelch detector 225 outputs a false positive, for example, erroneously indicating that differential data is being received by the circuit 200. For example, when the differential eUSB2 input signal lines are idle, the eUSB2 specification permits single-ended communication via an ingress communication line that carries eD+ and single-ended communication via an ingress communication line that carries eD-. In some examples, this single-ended communication signals to the circuit 200 a mode of operation to enter or exit. For example, when the circuit 200 is operating in a high-speed mode of operation and single-ended signals representing logical high values are received at both the node 260 and the node 262, the circuit 200 is controlled to exit the high-speed mode of operation and return to the low-speed mode of operation. However, because of propagation delay, transmitter non-idealities, noise, interference, and/or any other various sources of signal delay, in various examples the logical high values of the single-ended signals do not reach the node 260 and the node 262 at the same time. In some examples, this results in the single-ended signals at the node 260 and the node 262 being skewed. The skewed single-ended signals appear to the squelch detector 225 as differential data. For example, rather than a transition to a logical high value occurring at the node 260 and at the node 262 simultaneously, in some examples there is a delay at one of the node 260 or the node 262 that creates the skew. For the period of time of that delay, the squelch detector 225 outputs a logical high signal indicating that differential input data is being received, resulting from the differential voltage present between the node 260 and the node 262 exceeding the squelch threshold. However, when the single-ended signals are being received at the node 260 and the node 262, the differential eUSB2 input signal lines are still considered to be idle because differential data is not being received. Therefore, the output of the squelch detector 225, for the period of time of the delay between the transition to a logical high value occurring at the node 260 and at the node 262, erroneously indicates that differential data is being received. The logic circuit 230, however, cannot determine from the signal received from the squelch detector 225 alone that the indication is erroneous.
For example, a single-ended logical high signal in eUSB2 systems, in some examples, has a value of about 1 V, causing the Vcm of the node 260 and the node 262, as sensed at the node 268, to be greater than VREF.[0027] When the output of the squelch detector 225 is a logical high signal and the output of the comparator 220 is a logical low signal, the output of the comparator 220 verifies the output of the squelch detector 225 (e.g., indicating that Vcm is not greater than VREF). However, when the output of the squelch detector 225 is a logical high signal and the output of the comparator 220 is a logical high signal, the output of the comparator 220 disproves the output of the squelch detector 225. For example, because VREF is greater than the Vcm of differential data input and the logical high output of the comparator 220 indicates that Vcm is greater than VREF, Vcm is greater than the Vcm of differential data input and single-ended signals are being received by the circuit 200 at the node 260 and/or the node 262. [0028] When the output of the squelch detector 225 is a logical low signal and the output of the comparator 220 is a logical low signal, either the output of the comparator 220 verifies the output of the squelch detector 225 or logical low value single-ended signals are being received by the circuit 200. When the output of the squelch detector 225 is a logical low signal and the output of the comparator 220 is a logical high signal, the output of the comparator 220 verifies the output of the squelch detector 225, indicating that differential input is not being received by the circuit 200 but instead the circuit 200 is receiving single-ended signals.[0029] Based on the output of the comparator 220 and the squelch detector 225, the logic circuit 230 determines whether single-ended signals are being received or whether differential input signals are being received. Also, the logic circuit 230 controls the amplifier 215 based on the determination of whether single-ended signals are being received or differential input signals are being received. For example, the logic circuit 230 controls when the amplifier 215 is active or inactive, as described above, such that the amplifier 215 remains inactive when the single-ended signals are being received and becomes active when the differential input signals are being received.[0030] Turning now to FIG. 3, a diagram 300 of illustrative signal waveforms is shown. In some examples, the diagram 300 corresponds to at least some signal waveforms present in the circuit 200. For example, a signal 305 corresponds to eD+, a signal 310 corresponds to eD-, a signal 315 corresponds to VREF, a signal 320 corresponds to Vcm, a signal 325 corresponds to an output of the comparator 220, a signal 330 corresponds to an output of the squelch detector 225, and a signal 335 corresponds to a control signal output by the logic circuit 230 to control enabling or disabling of the amplifier 215 (e.g., control whether signals are being transmitted by the circuit 200 via the node 264 and the node 266).[0031] As shown by diagram 300, when the signal 305 and the signal 310 are each pulled high, in some examples skew exists between the signals such that, for a period of time, a non-zero differential voltage exists between the signal 305 and the signal 310.
As further shown by the signal 330, in some examples this non-zero differential voltage exceeds a squelch threshold, causing the squelch detector 225 to trigger, indicating that a differential signal is being received for the period of time over which the non-zero differential voltage exceeds the squelch threshold. In the absence of the comparator 220 and operation of the circuit 200 according to the description, in some examples, the signal 335 would include a rising edge at substantially a same time as the falling edge of the signal 330 and would include a falling edge at substantially a same time as the rising edge of the signal 330, creating a positive pulse in the signal 335. This pulse in the signal 335 would cause the amplifier 215 to activate and transmit data erroneously during a duration of time of the pulse. The transmitted data is, in some examples, referred to as a glitch and is undesirable in operation of the circuit 200. However, by inclusion of the comparator 220 and operation of the circuit 200 according to the description, the glitch is at least mitigated, if not prevented.[0032] For example, as shown by the signal 325, when a value of the signal 320 exceeds a value of the signal 315, a rising edge occurs in the signal 325 and the signal 325 maintains a high value until the value of the signal 320 no longer exceeds the value of the signal 315. When the signal 325 has the high value, the signal 335 is held at a low value by the logic circuit 230, as described elsewhere herein, without respect to a value of the signal 330. In this way, a glitch in a transmission of the amplifier 215 is mitigated and/or prevented.[0033] Turning now to FIG. 4, a flowchart of an illustrative method 400 is shown. In some examples, the method 400 corresponds to actions performed by one or more components of the system 100 and/or the circuit 200. The method 400 is, in some examples, a method for controlling a circuit, such as an eUSB2 repeater. Implementation of the method 400 by a circuit, in some examples, is advantageous in accurately determining whether a circuit is receiving differential data (e.g., high-speed data, such as a high-speed SOP indicator), single-ended signals, or no signals, without the use of a CDR circuit or PLL.[0034] At operation 405, data is received via an idle differential signal line. In some examples, the idle differential signal line is defined as both positive and negative lines of the differential signal line being weakly held to a ground potential. The data is, in some examples, one or more single-ended signals (e.g., signals that are communicated entirely via one line of a differential signal line without regard or reference to a signal present on another line of the differential signal line). For example, between receipt of high-speed packets, the differential signal lines are in the idle state. In some examples, this is referred to as a single-ended zero. When data transmission is complete, an upstream device indicates an exit from high-speed mode by pulling both positive and negative lines of the differential signal line high (e.g., nominally to about 1.0 V or 1.2 V), which in some examples is referred to as a single-ended one (SE1).[0035] At operation 410, a squelch detection is performed. The squelch detection is performed, in some examples, by a squelch detector.
The squelch detector detects a difference between a signal present on one line of the differential input signal and a signal present on another line of the differential input signal and, when the difference exceeds a squelch threshold, outputs a squelch detection result as a logical high signal to indicate that the differential signal line is carrying differential data.[0036] At operation 415, a value of Vcm with reference to VREF is determined. The value of Vcm with reference to VREF is determined, in some examples, by a comparator. When Vcm exceeds VREF, the comparator outputs a comparison result as a logical high signal. When Vcm is less than VREF, the comparator outputs the comparison result as a logical low signal. In some examples, VREF has a value determined according to a value for representing a logical high signal in single-ended signaling via the differential signal line and a value of Vcm for high-speed differential signaling via the differential signal line. For example, VREF is greater than the value of Vcm for high-speed differential signaling via the differential signal line and is less than the value for representing a logical high signal in single-ended signaling via the differential signal line.[0037] At operation 420, the output of the squelch detector is verified against the output of the comparator (e.g., with a result of the comparison performed by the comparator). When the output of the squelch detector is a logical high signal and the output of the comparator is a logical low signal, the output of the comparator verifies the output of the squelch detector (e.g., indicating that Vcm is not greater than VREF). However, when the output of the squelch detector is a logical high signal and the output of the comparator is a logical high signal, the output of the comparator disproves the output of the squelch detector. For example, because VREF is greater than the Vcm of differential data input and the logical high output of the comparator indicates that Vcm is greater than VREF, Vcm is greater than the Vcm of differential data input and single-ended signals are being received.[0038] When the output of the squelch detector is a logical low signal and the output of the comparator is a logical low signal, either the output of the comparator verifies the output of the squelch detector or logical low value single-ended signals are being received by the circuit. When the output of the squelch detector is a logical low signal and the output of the comparator is a logical high signal, the output of the comparator verifies the output of the squelch detector, indicating that differential input is not being received but instead single-ended signals are being received.
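The verification of paragraphs [0026] through [0029] and operations 410 through 420 reduces to a small truth table. The sketch below is an illustrative model, not the patent's implementation, and assumes the polarity convention of paragraph [0024]: the squelch output is high when the differential between eD+ and eD- exceeds the squelch threshold, and the comparator output is high when Vcm exceeds VREF.

```python
# Illustrative model of the cross-check performed by the logic circuit 230
# (paragraphs [0026]-[0029]); not the patent's implementation. Polarity
# follows paragraph [0024]: squelch high = differential exceeds the squelch
# threshold; comparator high = Vcm greater than VREF (single-ended levels).

def differential_data_verified(squelch_high: bool, comparator_high: bool) -> bool:
    """True only when a squelch trigger is confirmed by a low comparator
    output; a high comparator output marks the trigger as skew between
    single-ended signals rather than genuine differential data."""
    return squelch_high and not comparator_high

# The amplifier 215 (TX) is enabled only for verified differential data,
# which suppresses the glitch pulse described for FIG. 3:
assert differential_data_verified(True, False)       # genuine high-speed data
assert not differential_data_verified(True, True)    # skewed single-ended edge
assert not differential_data_verified(False, True)   # idle, single-ended one
assert not differential_data_verified(False, False)  # idle or single-ended zero
```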
[0039] While the operations of the method 400 have been described and labeled with numerical reference, in various examples the method 400 includes additional operations that are not recited herein (e.g., such as intermediary comparisons, logical operations, output selections such as via a multiplexer, etc.), in some examples any one or more of the operations recited herein include one or more sub-operations (e.g., such as intermediary comparisons, logical operations, output selections such as via a multiplexer, etc.), in some examples any one or more of the operations recited herein is omitted, and/or in some examples any one or more of the operations recited herein is performed in an order other than that presented herein (e.g., in a reverse order, substantially simultaneously, overlapping, etc.), all of which is intended to fall within the scope of the description.[0040] In the foregoing discussion, the terms "including" and "comprising" are open-ended and mean "including, but not limited to..." Also, the term "couple" or "couples" means either an indirect or direct wired or wireless connection. A device that is "configured to" perform a task or function may be configured (e.g., programmed and/or hardwired) at a time of manufacturing by a manufacturer to perform the function and/or may be configurable (or re-configurable) by a user after manufacturing to perform the function and/or other additional or alternative functions. The configuring may be through firmware and/or software programming of the device, through a construction and/or layout of hardware components and interconnections of the device, or a combination thereof. Furthermore, a circuit or device that is said to include certain components may instead be configured to couple to those components to form the described circuitry or device.[0041] While certain components are described herein as being of a particular process technology, these components may be exchanged for components of other process technologies. Components illustrated as resistors, unless otherwise stated, are generally representative of any one or more elements coupled in series and/or parallel to provide an amount of impedance represented by the illustrated resistor. Also, the phrase "ground voltage potential" includes a chassis ground, an Earth ground, a floating ground, a virtual ground, a digital ground, a common ground, and/or any other form of ground connection applicable to, or suitable for, the teachings of the description. Unless otherwise stated, "about", "approximately", or "substantially" preceding a value means +/- 10 percent of the stated value.[0042] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
An integrated circuit (IC) package includes a die, a package substrate coupled to the die, and a first electrostatic discharge (ESD) protection component coupled to the package substrate, where the first electrostatic discharge (ESD) protection component is configured to provide package level electrostatic discharge (ESD) protection. In some implementations, the first electrostatic discharge (ESD) protection component is embedded in the package substrate. In some implementations, the die includes an internal electrostatic discharge (ESD) protection component configured to provide die level electrostatic discharge (ESD) protection. In some implementations, the internal electrostatic discharge (ESD) protection component and the first electrostatic discharge (ESD) protection component are configured to provide cumulative electrostatic discharge (ESD) protection for the die.
CLAIMS
1. An integrated circuit (IC) package comprising:
a die;
a package substrate coupled to the die; and
a first electrostatic discharge (ESD) protection component coupled to the package substrate, wherein the first electrostatic discharge (ESD) protection component is configured to provide package level electrostatic discharge (ESD) protection.
2. The integrated circuit (IC) package of claim 1, wherein the first electrostatic discharge (ESD) protection component is embedded in the package substrate.
3. The integrated circuit (IC) package of claim 1, wherein the die comprises an internal electrostatic discharge (ESD) protection component configured to provide die level electrostatic discharge (ESD) protection.
4. The integrated circuit (IC) package of claim 3, wherein the internal electrostatic discharge (ESD) protection component and the first electrostatic discharge (ESD) protection component are configured to provide cumulative electrostatic discharge (ESD) protection for the integrated circuit (IC) package.
5. The integrated circuit (IC) package of claim 3, wherein the internal electrostatic discharge (ESD) protection component and the first electrostatic discharge (ESD) protection component are configured to provide fault tolerant electrostatic discharge (ESD) protection for the integrated circuit (IC) package.
6. The integrated circuit (IC) package of claim 1, wherein the die is configured to operate at a first voltage provided to the integrated circuit (IC) package, and wherein the first electrostatic discharge (ESD) protection component allows the die to operate when the integrated circuit (IC) package is coupled to a power source that provides a second discharge voltage to the integrated circuit (IC) package.
7. The integrated circuit (IC) package of claim 1, wherein the die is configured to operate at a first current provided to the integrated circuit (IC) package, and wherein the first electrostatic discharge (ESD) protection component allows the die to operate if the integrated circuit (IC) package is coupled to a power source that provides a second discharge current to the integrated circuit (IC) package.
8. The integrated circuit (IC) package of claim 1, wherein the first electrostatic discharge (ESD) protection component comprises a plurality of diodes.
9. The integrated circuit (IC) package of claim 8, wherein at least some of the diodes from the plurality of diodes are configured to share a power signal.
10. The integrated circuit (IC) package of claim 8, wherein at least some of the diodes from the plurality of diodes are configured to share a ground reference signal.
11. The integrated circuit (IC) package of claim 8, wherein the die comprises a plurality of input / output (I/O) terminals, wherein each input / output (I/O) terminal is coupled to at least one diode from the plurality of diodes.
12. The integrated circuit (IC) package of claim 1, wherein the first electrostatic discharge (ESD) protection component comprises:
a first P+ layer;
a first interconnect coupled to the first P+ layer;
a first N+ layer;
a second P+ layer;
a second interconnect coupled to the first N+ layer and the second P+ layer;
a second N+ layer; and
a third interconnect coupled to the second N+ layer.
13.
The integrated circuit (IC) package of claim 12, wherein the first interconnect is configured to provide a first electrical path for a ground reference signal (Vss), the second interconnect is configured to provide a second electrical path for an input/output (I/O) signal, and the third interconnect is configured to provide a third electrical path for a power signal (Vdd).
14. The integrated circuit (IC) package of claim 12, further comprising a dielectric layer.
15. The integrated circuit (IC) package of claim 12, further comprising:
a second P- layer that at least partially encapsulates the first P+ layer;
a first N- layer that at least partially encapsulates the second P- layer and the first N+ layer;
a second N- layer that at least partially encapsulates the second N+ layer; and
a first P- layer that at least partially encapsulates the first N- layer, the second P+ layer and the second N- layer.
16. The integrated circuit (IC) package of claim 15, wherein the first electrostatic discharge (ESD) protection component comprises:
a first diode comprising the second P- layer and the first N- layer; and
a second diode comprising the second N- layer and the first P- layer.
17. The integrated circuit (IC) package of claim 1, wherein the integrated circuit (IC) package is coupled to an interposer comprising a second electrostatic discharge (ESD) protection component, and the integrated circuit (IC) package further comprising the interposer and the second electrostatic discharge (ESD) protection component.
18. The integrated circuit (IC) package of claim 17, wherein the first electrostatic discharge (ESD) protection component and the second electrostatic discharge (ESD) protection component are configured to provide cumulative electrostatic discharge (ESD) protection for the integrated circuit (IC) package.
19. The integrated circuit (IC) package of claim 17, wherein the first electrostatic discharge (ESD) protection component and the second electrostatic discharge (ESD) protection component are configured to provide fault tolerant electrostatic discharge (ESD) protection for the integrated circuit (IC) package.
20. The integrated circuit (IC) package of claim 1, wherein the integrated circuit (IC) package is incorporated into a device selected from a group consisting of a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle, and further including the device.
INTEGRATED CIRCUIT (IC) PACKAGE COMPRISING ELECTROSTATIC DISCHARGE (ESD) PROTECTION
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to and the benefit of Non-Provisional Application No. 14/838,034 filed in the U.S. Patent and Trademark Office on August 27, 2015, the entire content of which is incorporated herein by reference.
BACKGROUND
Field
[0002] Various features relate to an integrated circuit (IC) package, and more specifically to an integrated circuit (IC) package that includes electrostatic discharge (ESD) protection.
Background
[0003] FIG. 1 illustrates a configuration of an integrated circuit package that includes a die. Specifically, FIG. 1 illustrates an integrated circuit package 100 that includes a die 102 and a package substrate 106. The package substrate 106 includes a dielectric layer and a plurality of interconnects 110. The package substrate 106 is a laminated substrate. The plurality of interconnects 110 includes traces, pads and/or vias. The die 102 is coupled to the package substrate 106 through a plurality of solder balls 112. The package substrate 106 is coupled to a printed circuit board (PCB) 108 through a plurality of solder balls 116.[0004] The integrated circuit package 100 is designed to operate under a particular package operation. For example, the integrated circuit package 100 is designed to operate within certain reliability requirements and electronic stress boundaries. Examples of electronic stress boundaries include voltage boundaries (e.g., change in voltages), current boundaries (e.g., change in currents), and electrostatic discharge (ESD) boundaries. Similarly, the die 102 is designed to operate within similar electronic stress boundaries. These electronic stress boundaries are tested at the package level. That is, the integrated circuit package 100 is tested by an electronic tester (e.g., ESD tester) to determine whether the integrated circuit package 100, as a whole, is within specified electronic stress boundaries.[0005] Different devices (e.g., mobile devices, automotive devices) may specify different package operations, different reliability, and different electronic stress boundaries (e.g., different ESD requirements). Thus, different circuit designs for the dies and packages are desirable for different devices due to the different reliability and different electronic stress boundary specifications for each device. However, the process of redesigning the circuit design of the die 102 can be quite expensive. In many cases, this cost is so high that it is prohibitive.[0006] Moreover, changes to the circuit design of the die 102 will result in changes to the overall electronic reliability and sensitivity of the integrated circuit package 100. For example, changes to the circuit design of the die 102 may result in a different electronic stress boundary of the die 102 and a different electronic stress boundary of the integrated circuit package 100. Thus, a redesign of the circuit design of the die 102 may require a substantial redesign of the integrated circuit package 100.
In a worst case scenario, a new circuit design of the die 102 may not work at all with the pre-existing design of the integrated circuit package 100.[0007] Therefore, there is a need for an integrated circuit package that can be used with different devices, applications, reliability requirements and electronic stress boundaries without having to completely redesign the die, while at the same time meeting the needs and/or requirements of the devices in which the integrated circuit package is implemented.
SUMMARY
[0008] Various features relate to an integrated circuit (IC) package that includes electrostatic discharge (ESD) protection.[0009] One example provides an integrated circuit (IC) package that includes a die, a package substrate coupled to the die, and a first electrostatic discharge (ESD) protection component coupled to the package substrate, where the first electrostatic discharge (ESD) protection component is configured to provide package level electrostatic discharge (ESD) protection.[0010] Another example provides an electronic device that includes an integrated circuit (IC) package comprising a die and a package substrate coupled to the die. The electronic device also includes an interposer coupled to the integrated circuit (IC) package, where the interposer comprises a first electrostatic discharge (ESD) protection component. The first electrostatic discharge (ESD) protection component is configured to provide electrostatic discharge (ESD) protection.
DRAWINGS
[0011] Various features, nature and advantages may become apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout.[0012] FIG. 1 illustrates an integrated circuit (IC) package.[0013] FIG. 2 illustrates a profile view of an example of an integrated circuit package that includes an electrostatic discharge (ESD) protection component.[0014] FIG. 3 illustrates a profile view of an example of an electrostatic discharge (ESD) protection component.[0015] FIG. 4 illustrates a profile view of an example of an electrostatic discharge (ESD) protection component.[0016] FIG. 5 illustrates a profile view of an example of another electrostatic discharge (ESD) protection component.[0017] FIG. 6 illustrates a view of an example of an electrostatic discharge (ESD) protection component.[0018] FIG. 7 illustrates an example of a circuit diagram of a circuit in an integrated circuit package that includes an electrostatic discharge (ESD) protection component.[0019] FIG. 8 illustrates a profile view of an example of an integrated circuit package that includes an electrostatic discharge (ESD) protection component embedded in a package substrate.[0020] FIG. 9 illustrates a profile view of an example of an integrated circuit package that includes an electrostatic discharge (ESD) protection component coupled to an interposer.[0021] FIG. 10 illustrates an example of a circuit diagram of a circuit in an integrated circuit package that includes an electrostatic discharge (ESD) protection component.[0022] FIG. 11 (which includes FIGS. 11A-11C) illustrates an exemplary sequence for providing / fabricating an integrated circuit package that includes an electrostatic discharge (ESD) protection component. [0023] FIG. 12 (which includes FIGS. 12A-12B) illustrates an exemplary sequence for providing / fabricating an integrated circuit package that includes an electrostatic discharge (ESD) protection component coupled to an interposer.[0024] FIG. 13 illustrates an exemplary flow diagram of a method for providing / fabricating an integrated circuit package that includes an electrostatic discharge (ESD) protection component.[0025] FIG. 14 illustrates various electronic devices that may integrate an integrated circuit package, a semiconductor device, a die, an integrated circuit and/or PCB described herein.
DETAILED DESCRIPTION
[0026] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.[0027] The present disclosure describes a device package (e.g., integrated circuit (IC) package) that includes a die, a package substrate coupled to the die, and a first electrostatic discharge (ESD) protection component coupled to the package substrate. The first electrostatic discharge (ESD) protection component is configured to provide package level electrostatic discharge (ESD) protection. In some implementations, the first electrostatic discharge (ESD) protection component is embedded in the package substrate. In some implementations, the die includes an internal electrostatic discharge (ESD) protection component that is configured to provide die level electrostatic discharge (ESD) protection. In some implementations, the internal electrostatic discharge (ESD) protection component and the first electrostatic discharge (ESD) protection component are configured to provide cumulative electrostatic discharge (ESD) protection for some or all of the input/output (I/O) terminals of the die.
Exemplary Integrated Circuit (IC) Package Comprising An Electrostatic Discharge (ESD) Protection Component
[0028] FIG. 2 illustrates an example of a device package that includes an electrostatic discharge (ESD) protection component configured to provide package level electrostatic discharge (ESD) protection. Specifically, FIG. 2 illustrates an example of an integrated circuit (IC) package 200 that includes a substrate 202, a die 204, an electrostatic discharge (ESD) protection component 206, and an encapsulation layer 210. The integrated circuit (IC) package 200 is mounted on a printed circuit board (PCB) 250. The die 204 may be an integrated circuit (IC) that includes several transistors and/or other electronic components. The die 204 may be a logic die and/or a memory die. As will be further described below, the die 204 may include an internal electrostatic discharge (ESD) protection component 240 that is configured to provide die level electrostatic discharge (ESD) protection.[0029] In some implementations, the electrostatic discharge (ESD) protection component 206 (e.g., first electrostatic discharge (ESD) protection component) and/or the internal electrostatic discharge (ESD) protection component 240 (e.g., second electrostatic discharge (ESD) protection component) may be configured to allow the die 204 and the integrated circuit (IC) package 200 to meet at least one electrostatic discharge (ESD) testing model.
In some implementations, without the electrostatic discharge (ESD) protection component 206 and/or the internal electrostatic discharge (ESD) protection component 240, the die 204 and the integrated circuit (IC) package 200 may not meet a particular electrostatic discharge (ESD) testing model. Examples of various electrostatic discharge (ESD) testing models are further described below.[0030] The substrate 202 may be a package substrate and/or an interposer. The die 204 is coupled (e.g., mounted) to the substrate 202. More specifically, the die 204 is coupled to the substrate 202 through a first plurality of solder balls 242. In some implementations, the die 204 may be coupled to the substrate 202 differently.[0031] The substrate 202 includes a first dielectric layer 220, a second dielectric layer 222, a third dielectric layer 223, a first solder resist layer 224, a second solder resist layer 226, and several interconnects 227. The first dielectric layer 220 may be a core layer. In some implementations, the first dielectric layer 220 may be a prepreg layer. The second dielectric layer 222 and/or the third dielectric layer 223 may be one or more dielectric layers (e.g., one or more prepreg layers). The interconnects 227 may include traces, pads and/or vias that are formed in the first dielectric layer 220, the second dielectric layer 222 and/or the third dielectric layer 223. The first solder resist layer 224 is formed on a first surface (e.g., bottom surface, surface facing the PCB 250) of the substrate 202. The second solder resist layer 226 is formed on a second surface (e.g., top surface, surface facing the die 204) of the substrate 202.[0032] The encapsulation layer 210 at least partially encapsulates the die 204. The encapsulation layer 210 may include a mold and/or an epoxy fill.[0033] As shown in FIG. 2, the electrostatic discharge (ESD) protection component 206 is coupled to the substrate 202. More specifically, the electrostatic discharge (ESD) protection component 206 is coupled to a surface (e.g., bottom surface, surface facing the PCB 250) of the substrate 202. It is noted that the electrostatic discharge (ESD) protection component 206 may be coupled to the substrate 202 differently. For example, the electrostatic discharge (ESD) protection component 206 may be located on a different surface (e.g., top surface, surface facing the die 204) of the substrate 202. In some implementations, the electrostatic discharge (ESD) protection component 206 may be located within the encapsulation layer 210. In some implementations, the electrostatic discharge (ESD) protection component 206 may be embedded in the substrate 202. An example of an electrostatic discharge (ESD) protection component that is embedded in a substrate is further described in detail below in at least FIG. 8.[0034] The electrostatic discharge (ESD) protection component 206 provides several technical advantages to the integrated circuit (IC) package 200.[0035] First, the electrostatic discharge (ESD) protection component 206 provides better ESD protection than the internal electrostatic discharge (ESD) protection component 240. This is because the electrostatic discharge (ESD) protection component 206 is much larger than the internal electrostatic discharge (ESD) protection component 240, and is thus able to provide a more robust, reliable and/or powerful ESD protection.
The internal electrostatic discharge (ESD) protection component 240, if included in the die 204, is limited by the size of the die 204 and is thus only able to provide limited ESD protection.[0036] Second, the electrostatic discharge (ESD) protection component 206 is easier to design as a separate component instead of being integrated in the die 204. The die 204 has many transistor devices and integrating an electrostatic discharge (ESD) protection component in the die 204 requires a more complex manufacturing process than the manufacturing process of a separate electrostatic discharge (ESD) protection component 206.[0037] Third, since the electrostatic discharge (ESD) protection component 206 is a separate electronic component, the die 204 does not need to be redesigned. Instead, the electrostatic discharge (ESD) protection component 206 can be designed separately from the die 204 based on an expected and/or anticipated application (e.g., mobile application, automotive application). Thus, even though the die 204 and the integrated circuit (IC) package 200 are configured to operate under a particular application (e.g., mobile application) and pass a particular testing model (e.g., first testing model), the electrostatic discharge (ESD) protection component 206 is configured to allow the die 204 and the integrated circuit (IC) package 200 to operate when the integrated circuit (IC) package 200 operates under another application (e.g., automotive application) and pass another particular testing model (e.g., second testing model) that is different than the particular testing model. For example, the die 204 may be configured to operate in a mobile device, but with the use of the electrostatic discharge (ESD) protection component 206, the die 204 and the integrated circuit (IC) package 200 may be implemented with an electronic device in an automotive vehicle (which has higher voltage and/or higher current specifications / requirements), without having to completely redesign the die 204.[0038] In some implementations, the electrostatic discharge (ESD) protection component 206 is coupled (e.g., directly coupled, indirectly coupled) to at least one input/output (I/O) terminal of the die 204. In some implementations, all of the input/output (I/O) terminals of the die 204 are coupled (e.g., directly coupled, indirectly coupled) to the electrostatic discharge (ESD) protection component 206. Thus, in some implementations, at least some or all of the input/output (I/O) terminals of the die 204 are protected by the electrostatic discharge (ESD) protection component 206.[0039] FIG. 2 illustrates a first plurality of interconnects 270, a second plurality of interconnects 272, and a third plurality of interconnects 274 that are coupled to the electrostatic discharge (ESD) protection component 206. The electrostatic discharge (ESD) protection component 206 is configured to provide package level electrostatic discharge (ESD) protection.[0040] The first plurality of interconnects 270 are located in/on the substrate 202. The first plurality of interconnects 270 may include traces, vias, pads, bumps and/or solder interconnects. The first plurality of interconnects 270 may be configured to provide an electrical path for a first input/output (I/O) signal to and from the die 204. The second plurality of interconnects 272 are located in/on the substrate 202. The second plurality of interconnects 272 may include traces, vias, pads, bumps and/or solder interconnects. 
The second plurality of interconnects 272 may be configured to provide an electrical path for a power signal (e.g., Vdd) to the die 204. The third plurality of interconnects 274 are located in/on the substrate 202. The third plurality of interconnects 274 may include traces, vias, pads, bumps and/or solder interconnects. The third plurality of interconnects 274 may be configured to provide an electrical path for a ground reference signal (e.g., Vss) from the die 204. The first plurality of interconnects 270, the second plurality of interconnects 272 and/or the third plurality of interconnects 274 may be coupled to the die 204 (e.g., through the first plurality of solder balls 242). Different implementations may have a different number of interconnects coupled to the electrostatic discharge (ESD) protection component 206.[0041] As mentioned above, FIG. 2 further illustrates that the integrated circuit (IC) package 200 is coupled (e.g., mounted) on the printed circuit board (PCB) 250 through a second plurality of solder balls 252. More specifically, the substrate 202 of the integrated circuit (IC) package 200 is coupled to the PCB 250 through the second plurality of solder balls 252. In some implementations, the integrated circuit (IC) package 200 may be coupled to the PCB 250 differently.[0042] In some implementations, the several electrostatic discharge (ESD) protection components (e.g., internal electrostatic discharge (ESD) protection component 240 of the die and the electrostatic discharge (ESD) protection component 206 of the package substrate) may provide cumulative electrostatic discharge (ESD) protection for the die 204 and the integrated circuit (IC) package 200. Cumulative electrostatic discharge (ESD) protection is further described in detail below in FIGS. 7 and 10. The electrostatic discharge (ESD) protection component 206 may provide package level electrostatic discharge (ESD) protection (e.g., protection of the die 204 and other components in the integrated circuit (IC) package 200). In some implementations, providing the electrostatic discharge (ESD) protection component 206 inside the integrated circuit (IC) package 200 may provide real estate savings in the device, since the electrostatic discharge (ESD) protection component 206 may be provided in the available space of the integrated circuit (IC) package 200.[0043] In some implementations, the die 204 is configured to operate at a first voltage provided to the integrated circuit (IC) package 200, and an electrostatic discharge (ESD) protection component (e.g., electrostatic discharge (ESD) protection components 206, 240, 906) allows the die 204 to operate when the integrated circuit (IC) package 200 is coupled to a power source that provides a second discharge voltage to the integrated circuit (IC) package 200. [0044] FIG. 2 illustrates that the electrostatic discharge (ESD) protection component 206 is positioned and located underneath the substrate 202. However, the electrostatic discharge (ESD) protection component 206 may be positioned and located differently in or on the integrated circuit (IC) package 200. For example, the electrostatic discharge (ESD) protection component 206 may be located over the substrate 202 and co-planar with the die 204. In some implementations, the electrostatic discharge (ESD) protection component 206 may be at least partially encapsulated by the encapsulation layer 210.
In some implementations, the electrostatic discharge (ESD) protection component 206 may be embedded in the substrate 202, which is further described below in FIG. 8.
Exemplary Electrostatic Discharge Protection (ESD) Components
[0045] Different implementations may use different designs of an electrostatic discharge (ESD) protection component. FIGS. 3-6 illustrate various examples of an electrostatic discharge (ESD) protection component.[0046] FIG. 3 illustrates a profile view of an example of an electrostatic discharge (ESD) protection component configuration 306 that may be implemented with a device package (e.g., integrated circuit (IC) package). In some implementations, the electrostatic discharge (ESD) protection component configuration 306 may be implemented as the electrostatic discharge (ESD) protection component 206 described in FIG. 2. The electrostatic discharge (ESD) protection component configuration 306 may be configured as a semiconductor device.[0047] As shown in FIG. 3, the electrostatic discharge (ESD) protection component 206 includes the electrostatic discharge (ESD) protection component configuration 306. The electrostatic discharge (ESD) protection component configuration 306 includes a first P- (light P doped) layer 300, a first N- (light N doped) layer 302, a second P- layer 304, a first P+ (heavily P doped) layer 308, a first N+ (heavily N doped) layer 310, a second P+ layer 312, a second N- layer 320, a second N+ layer 322, a first contact interconnect 330, a second contact interconnect 340, a third contact interconnect 342, and a fourth contact interconnect 350.[0048] The first N- layer 302, the second P+ layer 312, and the second N- layer 320 are located in the first P- layer 300. The second P- layer 304 and the first N+ layer 310 are located in the first N- layer 302. The first P+ layer 308 is located in the second P- layer 304. The second N+ layer 322 is located in the second N- layer 320. [0049] The second P- layer 304 at least partially encapsulates the first P+ layer 308. The first N- layer 302 at least partially encapsulates the second P- layer 304 and the first N+ layer 310. The second N- layer 320 at least partially encapsulates the second N+ layer 322. The first P- layer 300 at least partially encapsulates the first N- layer 302, the second P+ layer 312 and the second N- layer 320.[0050] The first contact interconnect 330 is coupled to the first P+ layer 308. The first contact interconnect 330 may be configured to provide an electrical path for a ground reference signal (Vss). The second contact interconnect 340 is coupled to the first N+ layer 310. The third contact interconnect 342 is coupled to the second P+ layer 312. The second contact interconnect 340 and the third contact interconnect 342 are configured to provide an electrical path for an input/output (I/O) signal. The fourth contact interconnect 350 is coupled to the second N+ layer 322. The fourth contact interconnect 350 may be configured to provide an electrical path for a power signal (Vdd).[0051] The first contact interconnect 330 may be coupled to the first plurality of interconnects 270 (e.g., through micro bumps and/or solder interconnect). The second contact interconnect 340 and the third contact interconnect 342 may be coupled to the second plurality of interconnects 272 (e.g., through traces, pads, micro bumps and/or solder interconnect).
The fourth contact interconnect 350 may be coupled to the third plurality of interconnects 274 (e.g., through micro bumps and/or solder interconnect).[0052] In some implementations, the first N- layer 302 and the second P- layer 304 are configured to operate as a first diode 360, where the first N- layer 302 is a cathode side of the first diode 360, and the second P- layer 304 is an anode side of the first diode 360.[0053] In some implementations, the first P- layer 300 and the second N- layer 320 are configured to operate as a second diode 370, where the first P- layer 300 is an anode side of the second diode 370, and the second N- layer 320 is a cathode side of the second diode 370.[0054] It is noted that different implementations may have different configurations of the various P-, P+, N- and N+ layers, and thus, the configuration shown in FIG. 3 is merely exemplary.[0055] FIG. 4 illustrates a profile view of another electrostatic discharge (ESD) protection component. As shown in FIG. 4, the electrostatic discharge (ESD) protection component 206 includes at least two electrostatic discharge (ESD) protection component configurations 306a-b. Thus, FIG. 4 illustrates that the electrostatic discharge (ESD) protection component 206 includes a plurality (e.g., array) of electrostatic discharge (ESD) protection component configurations 306a-b. As further shown in FIG. 4, the various electrostatic discharge (ESD) protection component configurations 306a-b share the first P- (light P doped) layer 300. In some implementations, there may be one electrostatic discharge (ESD) protection component configuration 306 for each input/output (I/O) terminal of the die 204.[0056] FIG. 5 illustrates a profile view of an example of another electrostatic discharge (ESD) protection component configuration 506 that may be implemented with a device package (e.g., integrated circuit (IC) package). In some implementations, the electrostatic discharge (ESD) protection component configuration 506 may be implemented as the electrostatic discharge (ESD) protection component 206 described in FIG. 2. The electrostatic discharge (ESD) protection component configuration 506 may be configured as a semiconductor device.[0057] The electrostatic discharge (ESD) protection component configuration 506 is similar to the electrostatic discharge (ESD) protection component configuration 306 of FIG. 3, except that the electrostatic discharge (ESD) protection component configuration 506 also includes a dielectric layer 500, a first interconnect 530, a second interconnect 540, and a third interconnect 550.[0058] The first interconnect 530 is coupled to the first contact interconnect 330. The second interconnect 540 is coupled to the second contact interconnect 340 and the third contact interconnect 342. The third interconnect 550 is coupled to the fourth contact interconnect 350. The first interconnect 530 may be configured to provide an electrical path for a ground reference signal (Vss). The second interconnect 540 may be configured to provide an electrical path for an input/output (I/O) signal. The third interconnect 550 may be configured to provide an electrical path for a power signal (Vdd). The first interconnect 530 may be coupled to the first plurality of interconnects 270 (e.g., through micro bumps and/or solder interconnect). The second interconnect 540 may be coupled to the second plurality of interconnects 272 (e.g., through micro bumps and/or solder interconnect). 
The third interconnect 550 may be coupled to the third plurality of interconnects 274 (e.g., through micro bumps and/or solder interconnect). [0059] Similar to FIG. 4, the electrostatic discharge (ESD) protection component 206 of FIG. 5 may include one or more (e.g., a plurality) of the electrostatic discharge (ESD) protection component configurations 506.[0060] FIG. 6 illustrates a view of an example of another electrostatic discharge (ESD) protection component. As shown in FIG. 6, the electrostatic discharge (ESD) protection component 206 includes a plurality of electrostatic discharge (ESD) protection component configurations 506 (e.g., 506a-h) arranged in an array. It is noted that the electrostatic discharge (ESD) protection component 206 of FIG. 6 may represent a plurality of electrostatic discharge (ESD) protection component configurations 306 (e.g., 306a-b or more). It is also noted that for the purpose of clarity, not all the components of the electrostatic discharge (ESD) protection component 206 are shown in FIG. 6. The electrostatic discharge (ESD) protection component 206 of FIG. 6 may be configured as a semiconductor device.[0061] FIG. 6 illustrates an example of how the various electrostatic discharge (ESD) protection component configurations 506 (e.g., 506a-h) may be electrically coupled together in the electrostatic discharge (ESD) protection component 206. More specifically, FIG. 6 illustrates how some of the electrostatic discharge (ESD) protection component configurations 506 may share one or more paths (e.g., one or more electrical paths) for ground reference signals (e.g., Vss) and power signals (e.g., Vdd). As shown in FIG. 6, the first interconnect 530a is coupled to the first contact interconnect 330 of various electrostatic discharge (ESD) protection component configurations 506. Similarly, the third interconnect 550a is coupled to the fourth contact interconnect 350 of various electrostatic discharge (ESD) protection component configurations 506.[0062] The first interconnect 530a may be coupled to a first interconnect 600 that is configured to provide an electrical path for a ground reference signal (e.g., Vss). The first interconnect 600 may comprise a via and/or solder interconnect of the substrate 202. The second interconnect 540a may be coupled to a second interconnect 610 that is configured to provide an electrical path for an input/output (I/O) signal. The second interconnect 610 may comprise a via and/or solder interconnect of the substrate 202. The third interconnect 550a may be coupled to a third interconnect 620 that is configured to provide an electrical path for a power signal (e.g., Vdd). The third interconnect 620 may comprise a via and/or solder interconnect of the substrate 202.[0063] FIG. 6 further illustrates that the first interconnect 530b is coupled to the first contact interconnect 330 of various other electrostatic discharge (ESD) protection component configurations 506 (e.g., 506e-h). Similarly, the third interconnect 550b is coupled to the fourth contact interconnect 350 of various other electrostatic discharge (ESD) protection component configurations 506 (e.g., 506e-h).[0064] The first interconnect 530b may be coupled to a first interconnect 630 that is configured to provide an electrical path for a ground reference signal (e.g., Vss). The first interconnect 630 may comprise a via and/or solder interconnect of the substrate 202.
The second interconnect 540b may be coupled to a second interconnect 640 that is configured to provide an electrical path for an input/output (I/O) signal. The second interconnect 640 may comprise a via and/or solder interconnect of the substrate 202. The third interconnect 550b may be coupled to a third interconnect 650 that is configured to provide an electrical path for a power signal (e.g., Vdd). The third interconnect 650 may comprise a via and/or solder interconnect of the substrate 202.[0065] Examples of how diodes may be configured, arranged and/or electrically coupled to provide electrostatic discharge (ESD) protection in the integrated circuit (IC) package 200 and the die 204 are further illustrated and described below in at least FIGS. 7 and 10.
Exemplary Circuit Diagram of an Integrated Circuit (IC) Package Comprising An Electrostatic Discharge (ESD) Protection Component
[0066] FIG. 7 illustrates an exemplary circuit diagram 700 that includes several diodes configured to provide electrostatic discharge (ESD) protection in an integrated circuit (IC) package (e.g., device package). The circuit diagram 700 includes a die circuit 702, a package substrate circuit 704, and an electrostatic discharge (ESD) protection circuit 706. The electrostatic discharge (ESD) protection circuit 706 may be part of the package substrate circuit 704. The die circuit 702 may represent at least part of a circuit for the die 204. The package substrate circuit 704 may represent at least part of a circuit for the substrate 202. The electrostatic discharge (ESD) protection circuit 706 may represent at least part of a circuit for the electrostatic discharge (ESD) protection component 206.[0067] The die circuit 702 includes a first terminal 710 (e.g., internal die circuit I/O), a second terminal 712, a third terminal 714, and a fourth terminal 716. The first terminal 710, the second terminal 712, the third terminal 714 and the fourth terminal 716 may be input / output (I/O) terminals for a die (e.g., die 204). Different implementations of the circuit diagram 700 may have a different number of terminals. [0068] The die circuit 702 also includes a plurality of diodes 720 arranged in series and/or in parallel to each other. The plurality of diodes 720 may be configured as an electrostatic discharge (ESD) protection component (e.g., internal electrostatic discharge (ESD) protection component 240) of a die (e.g., die 204).[0069] The plurality of diodes 720 includes a diode 722, a diode 724, a diode 726, and a diode 728. The diode 722 is coupled in series to the diode 724. The first terminal 710 is connected between the diode 722 and the diode 724. The diode 726 is coupled in series to the diode 728. The second terminal 712 is connected between the diode 726 and the diode 728. The diode 722 and the diode 724 are in parallel to the diode 726 and the diode 728. A ground terminal 730 for a ground reference signal (Vss) is coupled to the anode portions of the diode 722 and the diode 726. A power terminal 732 for a power signal (Vdd) is coupled to the cathode portions of the diode 724 and the diode 728.[0070] The electrostatic discharge (ESD) protection circuit 706 includes a plurality of diodes 760. The plurality of diodes 760 may be configured as an electrostatic discharge (ESD) protection component (e.g., electrostatic discharge (ESD) protection component 206) that is coupled to a package substrate (e.g., substrate 202).[0071] The plurality of diodes 760 includes a diode 762, a diode 764, a diode 766, and a diode 768.
The diode 762 is coupled in series to the diode 764. The diode 766 is coupled in series to the diode 768. The diode 762 and the diode 764 are in parallel to the diode 766 and the diode 768. The ground terminal 730 for a ground reference signal (Vss) is coupled to the anode portions of the diode 762 and the diode 766. The power terminal 732 for a power signal (Vdd) is coupled to the cathode portions of the diode 764 and the diode 768. A terminal between the diode 722 and the diode 724 is coupled to a terminal between the diode 762 and the diode 764. A terminal between the diode 726 and the diode 728 is coupled to a terminal between the diode 766 and the diode 768.[0072] FIG. 7 also illustrates that input/output terminals (e.g., first terminal 710), ground terminal 730 and power terminal 732 are coupled to a printed circuit board (PCB) circuit 708. The PCB circuit 708 may represent at least part of a circuit for the PCB 250. In some implementations, the circuit diagram 700 illustrates how the internal electrostatic discharge (ESD) protection component of the die and the electrostatic discharge (ESD) protection component of the package substrate may provide cumulative electrostatic discharge (ESD) protection for the integrated circuit (IC) package (e.g., the die of the integrated circuit (IC) package). [0073] FIG. 7 illustrates how cumulative electrostatic discharge (ESD) protection may be used to provide robust protection for the integrated circuit (IC) package. In some implementations, cumulative electrostatic discharge (ESD) protection is the use of two or more electrostatic discharge (ESD) protection components (e.g., electrostatic discharge (ESD) protection components coupled in parallel and/or in series) used in conjunction with each other to provide a more effective and powerful electrostatic discharge (ESD) protection. For example, as an analogy, two resistors coupled in series to each other provide an equivalent resistor that has a higher resistance than each of the individual resistors.[0074] Similarly, two or more electrostatic discharge (ESD) protection components that are coupled to each other provide a cumulative electrostatic discharge (ESD) protection component that provides greater electrostatic discharge (ESD) protection than each of the individual electrostatic discharge (ESD) protection components. Thus, by grouping the electrostatic discharge (ESD) protection components from different portions of the integrated circuit (IC) package, the present disclosure provides an effective, efficient and robust electrostatic discharge (ESD) protection.[0075] In addition, cumulative electrostatic discharge (ESD) protection may provide electrostatic discharge (ESD) protection even when one of the electrostatic discharge (ESD) protection components fails or does not operate as designed. Thus, cumulative electrostatic discharge (ESD) protection, through the use of several electrostatic discharge (ESD) protection components, may provide fault tolerant electrostatic discharge (ESD) protection for the integrated circuit (IC) package.
Exemplary Integrated Circuit (IC) Package Comprising an Electrostatic Discharge (ESD) Protection Component

[0076] In some implementations, an electrostatic discharge (ESD) protection component may be embedded in a package substrate. FIG. 8 illustrates an example of a device package that includes an electrostatic discharge (ESD) protection component embedded in a package substrate. Specifically, FIG. 8 illustrates an example of an integrated circuit (IC) package 800 that includes a substrate 802, the die 204, an electrostatic discharge (ESD) protection component 806, and the encapsulation layer 210. The integrated circuit (IC) package 800 is mounted on a printed circuit board (PCB) 250. The die 204 may be an integrated circuit (IC) that includes several transistors and/or other electronic components. The die 204 may be a logic die and/or a memory die. The die 204 may include an internal electrostatic discharge (ESD) protection component 240.

[0077] The integrated circuit (IC) package 800 of FIG. 8 is similar to the integrated circuit (IC) package 200 of FIG. 2, except that the electrostatic discharge (ESD) protection component 806 is embedded in the substrate 802. In some implementations, the electrostatic discharge (ESD) protection component 806 is similar to the electrostatic discharge (ESD) protection component 206, as described in FIGS. 3-6.

[0078] FIG. 8 illustrates a first plurality of interconnects 870, a second plurality of interconnects 872, and a third plurality of interconnects 874 that are coupled to the electrostatic discharge (ESD) protection component 806. The first plurality of interconnects 870 are located in/on the substrate 802. The first plurality of interconnects 870 may include traces, vias and/or pads. The first plurality of interconnects 870 may be configured to provide an electrical path for a first input/output (I/O) signal to and from the die 204. The second plurality of interconnects 872 are located in/on the substrate 802. The second plurality of interconnects 872 may include traces, vias and/or pads. The second plurality of interconnects 872 may be configured to provide an electrical path for a power signal (e.g., Vdd) to the die 204. The third plurality of interconnects 874 are located in/on the substrate 802. The third plurality of interconnects 874 may include traces, vias and/or pads. The third plurality of interconnects 874 may be configured to provide an electrical path for a ground reference signal (e.g., Vss) from the die 204. The first plurality of interconnects 870, the second plurality of interconnects 872 and/or the third plurality of interconnects 874 may be coupled to the die 204 (e.g., through the first plurality of solder balls 242).
Different implementations may have a different number of interconnects coupled to the electrostatic discharge (ESD) protection component 806.

[0079] In some implementations, the several electrostatic discharge (ESD) protection components (e.g., the internal electrostatic discharge (ESD) protection component 240 of the die and the electrostatic discharge (ESD) protection component 806 of the package substrate) may provide cumulative electrostatic discharge (ESD) protection for the die 204 and the integrated circuit (IC) package 800, as described in FIG. 7 above.

Exemplary Integrated Circuit (IC) Package Comprising an Electrostatic Discharge (ESD) Protection Component Coupled to an Interposer

[0080] In some implementations, an electrostatic discharge (ESD) protection component may be coupled to an interposer. FIG. 9 illustrates an example of a device package that includes an electrostatic discharge (ESD) protection component coupled to an interposer. Specifically, FIG. 9 illustrates an example of an integrated circuit (IC) package 200 that includes a substrate 202, the die 204, an electrostatic discharge (ESD) protection component 206, and the encapsulation layer 210. The integrated circuit (IC) package 200 is coupled to an interposer 902 through a plurality of solder balls 252. The interposer 902 is coupled to a printed circuit board (PCB) 250 through a plurality of solder balls 952.

[0081] The integrated circuit (IC) package 200 of FIG. 9 is similar to the integrated circuit (IC) package 200 of FIG. 2, except that the integrated circuit (IC) package 200 is coupled to the interposer 902 that includes the electrostatic discharge (ESD) protection component 906. In some implementations, the electrostatic discharge (ESD) protection component 906 is similar to the electrostatic discharge (ESD) protection component 206, as described in FIGS. 3-6. In some implementations, the interposer 902 may be coupled to the integrated circuit (IC) package 800 of FIG. 8, instead of the integrated circuit (IC) package 200.

[0082] FIG. 9 illustrates a first plurality of interconnects 970, a second plurality of interconnects 972, and a third plurality of interconnects 974 that are coupled to the electrostatic discharge (ESD) protection component 906. The first plurality of interconnects 970 are located in/on the interposer 902. The first plurality of interconnects 970 may include traces, vias, pads, bumps and/or solder interconnects. The first plurality of interconnects 970 may be configured to provide an electrical path for a first input/output (I/O) signal to and from the die 204. The second plurality of interconnects 972 are located in/on the interposer 902. The second plurality of interconnects 972 may include traces, vias, pads, bumps and/or solder interconnects. The second plurality of interconnects 972 may be configured to provide an electrical path for a power signal (e.g., Vdd) to the die 204. The third plurality of interconnects 974 are located in/on the interposer 902. The third plurality of interconnects 974 may include traces, vias, pads, bumps and/or solder interconnects. The third plurality of interconnects 974 may be configured to provide an electrical path for a ground reference signal (e.g., Vss) from the die 204. The first plurality of interconnects 970, the second plurality of interconnects 972 and/or the third plurality of interconnects 974 may be coupled to the die 204 (e.g., through the first plurality of solder balls 242 and the solder balls 252).
Different implementations may have a different number of interconnects coupled to the electrostatic discharge (ESD) protection component 906. In addition, the position or location of the electrostatic discharge (ESD) protection component 906 may be different in different implementations. For example, the electrostatic discharge (ESD) protection component 906 may be located over the interposer 902 or embedded in the interposer 902.

[0083] In some implementations, the several electrostatic discharge (ESD) protection components (e.g., the internal electrostatic discharge (ESD) protection component 240 of the die, the electrostatic discharge (ESD) protection component 206 of the package substrate, and/or the electrostatic discharge (ESD) protection component 906 of the interposer) may provide cumulative electrostatic discharge (ESD) protection for the die 204 and the integrated circuit (IC) package 200. Cumulative electrostatic discharge (ESD) protection is further described in detail below in FIG. 10.

Exemplary Circuit Diagram of an Integrated Circuit (IC) Package Comprising an Electrostatic Discharge (ESD) Protection Component Coupled to an Interposer

[0084] FIG. 10 illustrates another exemplary circuit diagram 1000 that includes several diodes configured to provide electrostatic discharge (ESD) protection in an integrated circuit (IC) package. The circuit diagram 1000 is similar to the circuit diagram 700 of FIG. 7, except that it includes additional circuits for additional electrostatic discharge (ESD) protection. The circuit diagram 1000 includes the die circuit 702, the package substrate circuit 704 and the electrostatic discharge (ESD) protection circuit 706 as described above in FIG. 7.

[0085] The circuit diagram 1000 also includes an interposer circuit 1004 and an electrostatic discharge (ESD) protection circuit 1006. The electrostatic discharge (ESD) protection circuit 1006 may be part of the interposer circuit 1004. The interposer circuit 1004 may represent at least part of a circuit for the interposer 902. The electrostatic discharge (ESD) protection circuit 1006 may represent at least part of a circuit for the electrostatic discharge (ESD) protection component 906.

[0086] The electrostatic discharge (ESD) protection circuit 1006 includes a plurality of diodes 1060. The plurality of diodes 1060 may be configured as an electrostatic discharge (ESD) protection component (e.g., electrostatic discharge (ESD) protection component 906) that is coupled to an interposer (e.g., interposer 902).

[0087] The plurality of diodes 1060 includes a diode 1062, a diode 1064, a diode 1066, and a diode 1068. The diode 1062 is coupled in series to the diode 1064. The diode 1066 is coupled in series to the diode 1068. The diode 1062 and the diode 1064 are in parallel to the diode 1066 and the diode 1068. The ground terminal 1030 for a ground reference signal (Vss) is coupled to the anode portions of the diode 1062 and the diode 1066. The power terminal 1032 for a power signal (Vdd) is coupled to the cathode portions of the diode 1064 and the diode 1068. A terminal between the diode 1062 and the diode 1064 is coupled to a terminal between the diode 762 and the diode 764.
A terminal between the diode 1066 and the diode 1068 is coupled to a terminal between the diode 766 and the diode 768.

[0088] In some implementations, the circuit diagram 1000 illustrates how the internal electrostatic discharge (ESD) protection component 240 of the die, the electrostatic discharge (ESD) protection component 206 of the package substrate, and/or the electrostatic discharge (ESD) protection component 906 of the interposer may provide cumulative electrostatic discharge (ESD) protection for the die 204 and the integrated circuit (IC) package 200.

[0089] FIG. 10 illustrates how cumulative electrostatic discharge (ESD) protection may be used to provide robust protection for the integrated circuit (IC) package. In some implementations, cumulative electrostatic discharge (ESD) protection is the use of two or more electrostatic discharge (ESD) protection components (e.g., electrostatic discharge (ESD) protection components coupled in parallel and/or in series) in conjunction with each other to provide more effective and powerful electrostatic discharge (ESD) protection. For example, as an analogy, two resistors coupled in series provide an equivalent resistor that has a higher resistance than either of the individual resistors.

[0090] Similarly, two or more electrostatic discharge (ESD) protection components that are coupled to each other provide a cumulative electrostatic discharge (ESD) protection component that provides greater electrostatic discharge (ESD) protection than any of the individual electrostatic discharge (ESD) protection components. Thus, by grouping the electrostatic discharge (ESD) protection components from different portions of the integrated circuit (IC) package, the present disclosure provides effective, efficient and robust electrostatic discharge (ESD) protection. Cumulative electrostatic discharge (ESD) protection may include electrostatic discharge (ESD) protection from an electrostatic discharge (ESD) protection component of the die circuit 702, the electrostatic discharge (ESD) protection circuit 706 of the package substrate circuit 704, and/or the electrostatic discharge (ESD) protection circuit 1006 of the interposer circuit 1004.

[0091] In addition, cumulative electrostatic discharge (ESD) protection may provide electrostatic discharge (ESD) protection even when one or more of the electrostatic discharge (ESD) protection components fail or do not operate as designed. Thus, cumulative electrostatic discharge (ESD) protection, through the use of several electrostatic discharge (ESD) protection components, may provide fault tolerant electrostatic discharge (ESD) protection for the integrated circuit (IC) package. For example, in the event that the electrostatic discharge (ESD) protection circuit 706 coupled to the package substrate circuit 704 should fail (or not work properly), the electrostatic discharge (ESD) protection circuit 1006 coupled to the interposer circuit 1004 may still work to provide electrostatic discharge (ESD) protection for the integrated circuit (IC) package (e.g., die of the IC package).

Exemplary Sequence for Fabricating an Integrated Circuit (IC) Package Comprising an Electrostatic Discharge (ESD) Protection Component

[0092] In some implementations, providing/fabricating an integrated circuit (IC) package that includes an electrostatic discharge (ESD) protection component includes several processes. FIG. 11 (which includes FIGS. 11A-11C) illustrates an exemplary sequence for providing/fabricating a device package (e.g., integrated circuit (IC) package) that includes an electrostatic discharge (ESD) protection component. In some implementations, the sequence of FIGS. 11A-11C may be used to provide/fabricate the integrated circuit (IC) package 800 of FIG. 8 and/or other integrated circuit (IC) packages described in the present disclosure.
[0093] It should be noted that the sequence of FIGS. 11A-11C may combine one or more stages in order to simplify and/or clarify the sequence for providing/fabricating an integrated circuit (IC) package that includes an electrostatic discharge (ESD) protection component. In some implementations, the order of the processes may be changed or modified.

[0094] Stage 1, as shown in FIG. 11A, illustrates a state after a dielectric layer 1100 is provided. The dielectric layer 1100 may be a core layer. In some implementations, the dielectric layer 1100 is provided by a supplier. In some implementations, the dielectric layer 1100 is fabricated (e.g., formed).

[0095] Stage 2 illustrates a state after a first cavity 1101 and a second cavity 1103 are formed in the dielectric layer 1100. Different implementations may form the first cavity 1101 and the second cavity 1103 differently. In some implementations, a laser process may be used to form the cavities.

[0096] Stage 3 illustrates a state after a first metal layer 1102 and a second metal layer 1104 are formed on the dielectric layer 1100. The forming and patterning of the first metal layer 1102 and the second metal layer 1104 may form and define interconnects (e.g., traces, pads, vias) on the dielectric layer 1100. Different implementations may use different processes for forming the first metal layer 1102 and the second metal layer 1104. A photo-lithography process (e.g., photo-etching process) may be used to pattern the metal layers. Patterning methods could include modified semi-additive or semi-additive patterning processes (SAP).

[0097] Stage 4 illustrates a state after a cavity 1107 is formed in the dielectric layer 1100. In some implementations, a laser is used to form the cavity (e.g., to remove portions of the dielectric layer 1100).

[0098] Stage 5 illustrates a state after the dielectric layer 1100, which includes the interconnects, is coupled to a carrier 1110.

[0099] Stage 6 illustrates a state after an electrostatic discharge (ESD) protection component 806 is positioned in the cavity 1107 of the dielectric layer 1100 (e.g., core layer). The electrostatic discharge (ESD) protection component 806 may be any of the electrostatic discharge (ESD) protection components described in the present disclosure. The electrostatic discharge (ESD) protection component 806 is positioned over the carrier 1110.

[00100] Stage 7, as shown in FIG. 11B, illustrates a state after a second dielectric layer 1114 is formed on a first surface of the dielectric layer 1100, the cavity 1107 and the electrostatic discharge (ESD) protection component 806. The second dielectric layer 1114 may be a prepreg layer.

[00101] Stage 8 illustrates a state after the carrier 1110 is decoupled (e.g., detached) from the dielectric layer 1100.

[00102] Stage 9 illustrates a state after a third dielectric layer 1116 is formed on a second side of the dielectric layer 1100. In some implementations, the third dielectric layer 1116 and the second dielectric layer 1114 are the same dielectric layer.
Stage 9 illustrates that the second dielectric layer 1114 and/or the third dielectric layer 1116 at least partially encapsulate the electrostatic discharge (ESD) protection component 806.

[00103] Stage 10 illustrates a state after a cavity 1117 is formed in the second dielectric layer 1114, and a cavity 1119 is formed in the third dielectric layer 1116. A photo-etching process may be used to form the cavities. Stage 10 involves via cavity formation and patterning for the second and third dielectric layers. Patterning methods could include modified semi-additive or semi-additive patterning processes (SAP).

[00104] Stage 11 illustrates a state after an interconnect 1120 (e.g., via) and an interconnect 1121 (e.g., trace) are formed in/on the second dielectric layer 1114, and an interconnect 1122 (e.g., via) and an interconnect 1123 (e.g., trace) are formed in/on the third dielectric layer 1116. The interconnect 1120 is coupled to the interconnect 1121 and the electrostatic discharge (ESD) protection component 806.

[00105] Stage 12 illustrates a state after a first solder resist layer 1124 is formed on the second dielectric layer 1114, and a second solder resist layer 1126 is formed on the third dielectric layer 1116. Stage 12 illustrates a substrate 1130 that includes the dielectric layer 1100, the electrostatic discharge (ESD) protection component 806, the second dielectric layer 1114, the third dielectric layer 1116, several interconnects (e.g., interconnect 1120), the first solder resist layer 1124, and the second solder resist layer 1126. The substrate 1130 may be a package substrate. The substrate 1130 may be similar to the substrate 202.

[00106] Stage 13, as shown in FIG. 11C, illustrates a state after a die 204 is coupled (e.g., mounted) to the substrate 1130 through a plurality of solder balls 1142. The die 204 may be coupled to the substrate 1130 differently in different implementations. In some implementations, the die 204 may include an internal electrostatic discharge (ESD) protection component 240 as described in FIG. 2.

[00107] Stage 14 illustrates a state after an encapsulation layer 210 is formed on the substrate 1130 and the die 204. In some implementations, the encapsulation layer 210 comprises a mold and/or epoxy fill.

[00108] Stage 15 illustrates a state after a plurality of solder balls 1160 is coupled to the substrate 1130. In some implementations, stage 15 illustrates an integrated circuit (IC) package 1170 that includes the substrate 1130, the electrostatic discharge (ESD) protection component 806, the die 204, and the encapsulation layer 210. In some implementations, the integrated circuit (IC) package 1170 is similar to the integrated circuit (IC) package 800 as described and illustrated in FIG. 8.

Exemplary Sequence for Fabricating an Integrated Circuit (IC) Package Comprising an Electrostatic Discharge (ESD) Protection Component

[00109] In some implementations, providing/fabricating a device package that includes an electrostatic discharge (ESD) protection component includes several processes. FIG. 12 (which includes FIGS. 12A-12B) illustrates an exemplary sequence for providing/fabricating a device package (e.g., integrated circuit (IC) package) that includes an electrostatic discharge (ESD) protection component. In some implementations, the sequence of FIGS. 12A-12B may be used to provide/fabricate the integrated circuit (IC) package 200 of FIG. 9 and/or other integrated circuit (IC) packages described in the present disclosure.

[00110] It should be noted that the sequence of FIGS. 12A-12B may combine one or more stages in order to simplify and/or clarify the sequence for providing/fabricating an integrated circuit (IC) package that includes an electrostatic discharge (ESD) protection component. In some implementations, the order of the processes may be changed or modified.
12A-12B may combine one or more stages in order to simplify and/or clarify the sequence for providing / fabricating a integrated circuit (IC) package that includes an electrostatic discharge (ESD) protection component. In some implementations, the order of the processes may be changed or modified.[00111] Stage 1, as shown in FIG. 12A, illustrates a state after a substrate 202 is provided. The substrate 202 may be a package substrate. The substrate 202 may include at least one dielectric layer (e.g., core layer, prepeg layer), several interconnects (e.g., traces, pads, vias), and at least one solder resist layer (e.g., first solder resist layer, second solder resist layer), as described in FIGS. 2 and 9.[00112] Stage 2 illustrates a state after a die 204 is coupled (e.g., mounted) to the substrate 202 through a plurality of solder balls 242. The die 204 may be coupled to the substrate 202 differently. In some implementations, the die 204 may include an internal electrostatic discharge (ESD) protection component 240 as described in FIG. 2.[00113] Stage 3 illustrates a state after an encapsulation layer 210 is formed on the substrate 202 and the die 204. In some implementations, the encapsulation layer 210 comprises a mold and/or epoxy fill.[00114] Stage 4 illustrates a state after an electrostatic discharge (ESD) protection component 206 is coupled (e.g., mounted) to the substrate 202. In some implementations, solder may be used to couple the electrostatic discharge (ESD) protection component 206 to the substrate 202. However, different implementations may couple the electrostatic discharge (ESD) protection component 206 to the substrate 202 differently.[00115] Stage 5 illustrates a state after a plurality of solder balls 252 is coupled to the substrate 202. In some implementations, stage 5 illustrates an integrated circuit (IC) package 200 that includes the substrate 202, the electrostatic discharge (ESD) protection component 206, the die 204, and the encapsulation layer 210. In some implementations, the integrated circuit (IC) package 200 at stage 5 is similar to the integrated circuit (IC) package 200 of FIG. 2.[00116] Stage 6, as shown in FIG. 12B, illustrates a state after an interposer 902 is provided. The interposer 902 includes a dielectric layer 920 and several interconnects 1200. The interconnects 1200 may includes traces, vias, and/or pads. The interconnects 1200 may include a first plurality of interconnects 970, a second plurality of interconnects 972, and a third plurality of interconnects 974, as described in FIG. 9.[00117] Stage 7 illustrates a state after an electrostatic discharge (ESD) protection component 906 is coupled (e.g., mounted) to the interposer 902. In some implementations, a solder interconnect may be used to couple the electrostatic discharge (ESD) protection component 906 to the interposer 902. However, different implementations may couple the electrostatic discharge (ESD) protection component 906 to the interposer 902 differently.[00118] Stage 8 illustrates a state after a plurality of solder balls 952 is coupled to the interposer 902.[00119] Stage 9 illustrates a state after the integrated circuit (IC) package 200 is coupled to the interposer 902 that includes the electrostatic discharge (ESD) protection component 906.Exemplary Flow Diagram of a Method for Fabricating an Integrated Circuit (IC) Package Comprising an Electrostatic Discharge (ESD) Protection Component[00120] FIG. 
[00121] It should be noted that the flow diagram of FIG. 13 may combine one or more processes in order to simplify and/or clarify the method for providing an integrated circuit (IC) package. In some implementations, the order of the processes may be changed or modified.

[00122] The method provides (at 1305) a substrate. In some implementations, the substrate is provided by a supplier. In some implementations, the substrate is fabricated (e.g., formed). The substrate may be a package substrate. The substrate (e.g., substrate 202) may include a dielectric layer (e.g., core layer) and metal layers on the dielectric layer.

[00123] The method forms (at 1310) several interconnects in and on the substrate. Different implementations may use different processes for forming the interconnects. A photo-lithography process (e.g., photo-etching process) may be used to pattern the metal layers into interconnects. Patterning methods could include modified semi-additive or semi-additive patterning processes (SAP).

[00124] The method couples (at 1315) an electrostatic discharge (ESD) protection component (e.g., electrostatic discharge (ESD) protection component 206) to the substrate (e.g., substrate 202). The electrostatic discharge (ESD) protection component may be coupled to the substrate through a solder interconnect (or through a bump and solder interconnect).

[00125] The method couples (at 1320) a die (e.g., die 204) to the substrate (e.g., substrate 202). The die may include an internal electrostatic discharge (ESD) protection component (e.g., electrostatic discharge (ESD) protection component 240). A plurality of solder balls may be used to couple the die to the substrate.

[00126] The method forms (at 1325) an encapsulation layer (e.g., encapsulation layer 210) over the die and the substrate. The encapsulation layer may comprise a mold and/or an epoxy fill. In some implementations, the substrate, the electrostatic discharge (ESD) protection component, the die and the encapsulation layer may form an integrated circuit (IC) package (e.g., integrated circuit (IC) package 200).

[00127] The method couples (at 1330) the integrated circuit (IC) package (e.g., integrated circuit (IC) package 200) to an interposer (e.g., interposer 902) that includes an electrostatic discharge (ESD) protection component (e.g., electrostatic discharge (ESD) protection component 906). In some implementations, the several electrostatic discharge (ESD) protection components (e.g., the internal electrostatic discharge (ESD) protection component 240 of the die, the electrostatic discharge (ESD) protection component 206 of the package substrate, and/or the electrostatic discharge (ESD) protection component 906 of the interposer) may be configured to provide cumulative electrostatic discharge (ESD) protection for the die 204 and the integrated circuit (IC) package 200.

Electrostatic Discharge (ESD) Protection Models

[00128] An electrostatic discharge (ESD) is the sudden flow of electricity between two electrically charged objects caused by contact, an electrical short, or dielectric breakdown. A buildup of static electricity may be caused by tribocharging or by electrostatic induction. The ESD occurs when objects with different charges are brought close together or when the dielectric between them breaks down.
[00129] An electrostatic discharge (ESD) can cause damage to sensitive electronic devices (e.g., dies, integrated circuit (IC) packages, device packages). These devices can suffer permanent damage when subjected to high voltages. Thus, these devices are designed to withstand some level of electrostatic discharge (ESD). The level of electrostatic discharge (ESD) protection will depend on the assembly environment. For example, a mobile device may have a different level of electrostatic discharge (ESD) requirement than that of an automotive device.

[00130] To account for these different applications (e.g., mobile applications, automotive applications), different testing models have been established to test and determine whether a device or device package (e.g., integrated circuit (IC) package) is appropriate for a particular application (e.g., whether a device package can be used in an automotive device and/or automotive application).

[00131] Examples of electrostatic discharge (ESD) testing models include a human body model (HBM) testing model and a charged device model (CDM) testing model.

[00132] The HBM testing model is used to characterize the susceptibility of an electronic component or electronic device to ESD damage. The test simulates an electrical discharge from a human onto an electronic component, which could occur if a human has built up charge.

[00133] In some implementations, the HBM testing model is set up by applying a high-voltage supply in series with a charging resistor (e.g., a 1-MΩ resistor or higher) and a capacitor (e.g., a 100-pF capacitor). After the capacitor is fully charged, a switch is used to remove it from the high-voltage supply and series resistor and to apply it in series with a discharge resistor (e.g., a 1.5-kΩ resistor) and the device under test (DUT) (e.g., device package, integrated circuit (IC) package). The voltage thus fully dissipates through the discharge resistor and the DUT. Different HBM testing models may use different values for the high-voltage supply range, depending on the application of the device. In some implementations, the voltage used during the test may be between about 0.5 kV and 4 kV. Different implementations may use different peak currents between about 0.4 A and 3 A. In some implementations, the HBM testing models may use a discharge time of about 300 nanoseconds (ns) or less.

[00134] The CDM testing model is used to model what often happens in automated-manufacturing environments, in which machines often remain on indefinitely, causing the electronic integrated circuits (ICs) to electrically charge over time. When part of the IC comes into contact with a grounded conductor, the built-up charge on the part's capacitance discharges.

[00135] In some implementations, a CDM testing model may use voltages between about 250 V and 1000 V. Examples of CDM testing models include a 250 V CDM model, a 500 V CDM model, a 750 V CDM model, and a 1000 V CDM model. Different implementations may use different peak currents between about 4 A and 12 A. In some implementations, the CDM testing models may use a discharge time of about 1 nanosecond (ns) or less.
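As a rough, non-normative check of the HBM numbers above, the RC values quoted in paragraph [00133] can be combined as follows. The 2 kV test voltage is one assumed point inside the quoted 0.5-4 kV range.

C_HBM = 100e-12       # 100 pF capacitor from paragraph [00133]
R_DISCHARGE = 1.5e3   # 1.5 kΩ discharge resistor from paragraph [00133]
V_TEST = 2000.0       # assumed 2 kV test voltage (within 0.5-4 kV)

tau = R_DISCHARGE * C_HBM        # RC time constant: 150 ns
i_peak = V_TEST / R_DISCHARGE    # initial peak current: ~1.33 A
print(f"tau = {tau * 1e9:.0f} ns, peak current = {i_peak:.2f} A")
# A couple of time constants (~300 ns) and ~1.3 A fall inside the quoted
# <=300 ns discharge time and the 0.4-3 A peak-current range.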
[00136] As mentioned above, the ESD testing model that is used will depend on the application in which the device is intended to be used or implemented. For example, a mobile device may require a particular ESD testing model that is different from the ESD testing model required for an automotive device.

[00137] In some implementations, for example, a device package (e.g., integrated circuit (IC) package) designed to be used in a mobile device or as a mobile application may pass a testing model for a mobile device, but may not be able to pass a testing model for an automotive device or an automotive application without changes to the device circuit or package. In some implementations, one or more electrostatic discharge (ESD) protection components are provided in a device package in order to ensure that the device package passes a different testing model. In some implementations, this approach avoids having to redesign the die in the device package, while providing a device package that can be used and implemented in an electronic device different from what the die and device package were initially designed for, saving substantial design and manufacturing costs.

Exemplary Electronic Devices

[00138] FIG. 14 illustrates various electronic devices that may be integrated with any of the aforementioned integrated circuit devices, semiconductor devices, integrated circuits, dies, interposers, packages or packages-on-package (PoP). For example, a mobile phone device 1402, a laptop computer device 1404, and a fixed location terminal device 1406 may include an integrated circuit device 1400 as described herein. The integrated circuit device 1400 may be, for example, any of the integrated circuits, dies, integrated circuit devices, integrated circuit device packages, or package-on-package devices described herein. The devices 1402, 1404, 1406 illustrated in FIG. 14 are merely exemplary. Other electronic devices may also feature the integrated circuit device 1400 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), or any other device that stores or retrieves data or computer instructions, or any combination thereof.

[00139] One or more of the components, features, and/or functions illustrated in FIGS. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11A-11C, 12A-12B, 13, and/or 14 may be rearranged and/or combined into a single component, feature or function, or embodied in several components or functions. Additional elements, components, and/or functions may also be added without departing from the disclosure. It should also be noted that FIGS. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11A-11C, 12A-12B, 13, and/or 14 and their corresponding descriptions in the present disclosure are not limited to dies and/or ICs. In some implementations, FIGS. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11A-11C, 12A-12B, 13, and/or 14 and their corresponding descriptions may be used to manufacture, create, provide, and/or produce integrated circuit devices.
In some implementations, a device may include a die, a die package, an integrated circuit (IC), an integrated circuit device, an integrated circuit (IC) package, a device package, a wafer, a semiconductor device, a package-on-package structure, and/or an interposer.

[00140] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other.

[00141] Also, it is noted that the implementations may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed.

[00142] The various features of the disclosure described herein can be implemented in different systems without departing from the disclosure. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the disclosure. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art.
Described herein are architectures, platforms and methods for NFC-based operations in a stylus device. The stylus device comprises: a stylus coil antenna configured to detect the presence of a magnetic field, wherein the magnetic field facilitates one or more of the following: switching ON, power charging, and establishing of a near field communication (NFC) link in the device; an NFC module configured to process a signal received through the stylus coil antenna, wherein the received signal includes a particular frequency channel for the NFC link; a processor coupled to the NFC module, configured to run a plurality of applications to control an operation of the device, the operation comprising transmitting information or data using the particular frequency channel; and a user interface coupled to the processor, configured to facilitate selection of operations that correspond to the plurality of applications.
1. A device comprising:
a stylus coil antenna configured to detect presence of a magnetic field, wherein the magnetic field facilitates one or more of the following: switching ON, power charging, and establishing of a near field communication (NFC) link in the device;
an NFC module configured to process a signal received through the stylus coil antenna, wherein the received signal includes a particular frequency channel for the NFC link;
a processor coupled to the NFC module, configured to run a plurality of applications to control an operation of the device, the operation comprising transmitting information or data using the particular frequency channel; and
a user interface coupled to the processor, configured to facilitate selection of operations that correspond to the plurality of applications.

2. The device as recited in claim 1, wherein the NFC module utilizes the particular frequency channel to send the information or data through the stylus coil antenna, the information or data comprising at least one of a user-fingerprint, a user identification or authorization, or stylus-status information.

3. The device as recited in claim 2, further comprising a sensor configured to perform a function based on the selected operation, the function including scanning and reading of the user-fingerprint, wherein the user-fingerprint is compared with stored fingerprints to determine the user identification or authorization.

4. The device as recited in claim 1, wherein the user interface is a switch or a sensor.

5. The device as recited in claim 1, wherein the operation comprises transmitting a signal request to receive a copy of data shown at a screen of another device.

6. The device as recited in claim 1, wherein the stylus coil antenna is disposed at a front-end, a back-end, or along an outer body-surface of the device.

7. The device as recited in claim 1, further comprising a storage that stores the plurality of applications.

8. The device as recited in claim 1, further comprising an NFC tag configured to include a unique identification that is transmitted through the stylus coil antenna to identify the device.

9. The device as recited in claim 1, further comprising a power storage that comprises a full-wave rectifier and a storing capacitor to receive and store the charging power.

10. A method of near field communications (NFC)-based operation in a stylus device, the method comprising:
detecting presence of a magnetic field that establishes an NFC link;
receiving a particular operating frequency channel through the NFC link; and
communicating information or data through the NFC link.

11. The method as recited in claim 10, wherein the detected magnetic field induces a current that charges the stylus device.

12. The method as recited in claim 10, wherein the particular frequency channel is selected based on a determined user identification or authorization.

13. The method as recited in claim 10, further comprising running one or more applications for the communicating of the information or data through the NFC link.

14. The method as recited in claim 10, further comprising transmitting an NFC tag to identify the stylus device.

15. The method as recited in claim 10, further comprising transmitting a signal request to receive a copy of data shown at a screen of another device.
BACKGROUND

Portable devices such as tablets, phones and Ultrabook devices that are available in the market support the use of a stylus or a stylus device. For example, among the supported stylus types is an active stylus, which carries its own power source to power itself. This feature is particularly attractive because it supports finer stylus tips that enable a more natural writing experience, provides better noise immunity, and allows extra functions such as an eraser and pressure sensitivity to be added to the stylus.

One problem with the active stylus has been the need to incorporate a battery to accommodate the power needs of the circuitry. The incorporation of the battery (e.g., AA batteries) increases the thickness and weight of the stylus, and affects the balance of the stylus. This is in addition to the need to replace the batteries on the stylus periodically.

Thus, an ideal active stylus solution may have the following features: long battery life, typically measured in several months with 8-hour usage per day; a light weight and proper distribution of weight to mimic a traditional pen; and an ability to be quickly charged in the event of a discharge without relying on components that are external to the computer system. As such, there is a need for a design that provides the solution described above.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is described with reference to accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to reference like features and components.

FIG. 1 is an example scenario illustrating a near field communication (NFC)-based stylus and device arrangement as described in implementations herein.

FIGS. 2a and 2b illustrate an example system of the stylus and a physical configuration-overview of the stylus, respectively, as described in implementations herein.

FIG. 3 is an example NFC-power charging arrangement between a power storage of the stylus and a first coil antenna of the portable device as described in present implementations herein.

FIG. 4 is an example process chart illustrating an example method for NFC-based operations as implemented at a stylus side.

FIG. 5 is an example process chart illustrating an example method for NFC-based operations as implemented at a portable device side.

DETAILED DESCRIPTION

Described herein are architectures, platforms and methods for NFC-based active stylus operations. The stylus, for example, is configured as an independent wireless device that is coupled or paired with another device such as a tablet, mobile phone, or other type of portable device.

In an implementation, the stylus includes a stylus coil antenna that detects the presence of a magnetic field when it is aligned with or directed into a magnetic field-coverage area of another device (e.g., portable device). The detected magnetic field, for example, may facilitate switching ON, power charging, and establishing of a near field communication (NFC) link in the stylus.

As an independent wireless device, the stylus further includes an NFC module configured to process a signal that is received through the stylus coil antenna. For example, the received signal includes a particular frequency channel to use for the NFC link. In this example, the particular frequency channel is selected by the main device (i.e., tablet, mobile phone, etc.) and is transmitted to the stylus for further processing.
For example, with the received particular frequency channel, a stylus processor coupled to the NFC module may run one or more applications to control an operation of the stylus. The operation may include transmitting information or data using the particular frequency channel. In this manner, the stylus need not transmit on multiple frequencies to avoid noise, so the stylus saves battery power in this configuration while still avoiding noise.

The stylus may also include a switch (i.e., user interface) coupled to the processor. For example, the switch is configured to facilitate selection of operations that correspond to the one or more applications. In this example, the stylus processor receives the selection signal and thereafter runs the corresponding application. For example, pressing the switch once triggers the stylus processor to run user-fingerprint identification. In this example, a sensor that is embedded in the stylus performs scanning and reading of the user fingerprint, and the scanned fingerprint is thereafter compared with fingerprints stored in the stylus. The verified user-fingerprint may be further transmitted to the other device for electronic signature verification purposes.

In another example, the verification of the user-fingerprint may be used as the basis for the authority to open and/or edit particular documents, or vice-versa. That is, the stylus processor is configured to make a user-identification and thereafter limit the particular documents that may be opened or edited based on the user-identification. Conversely, when the particular documents are already opened at the main device, the stylus processor may be configured to use the user-identification in determining the user's authority to further edit the opened documents.

In another implementation, the stylus includes an NFC tag that includes a unique identification for the stylus. The NFC tag, for example, is transmitted by the NFC module through the stylus coil antenna to identify the stylus to the other device.

FIG. 1 is an example scenario 100 that illustrates an NFC-based stylus and device arrangement as described in implementations herein. The NFC-based stylus, for example, is an untethered stylus that may or may not use a battery to power itself.

Scenario 100 may include a portable device 102 with a first coil antenna 104-2, a second coil antenna 104-4, and a capacitive-based touch-sensor screen 106. The scenario 100 further shows a stylus 108 and a built-in stylus holder 110 at a back-cover of the portable device 102. The built-in stylus holder 110, for example, may be utilized for charging/docking of the stylus 108 when not in use.

In an implementation, a near field coupling arrangement such as an NFC communication between the stylus 108 and the portable device 102 is integrated or incorporated with features of the touch-sensor screen 106. For example, when the stylus 108 is taken out of the stylus holder 110 and is aligned or directed within a certain distance, close enough to be within a magnetic field-coverage area of the first coil antenna 104-2, the first coil antenna 104-2 may facilitate power activation in the stylus 108.
Furthermore, the first coil antenna 104-2 may initiate NFC-power charging of the stylus 108 using the principle of mutual induction between the first coil antenna 104-2 and an antenna (not shown) of the stylus 108.

In an implementation, a processor (not shown) within the portable device 102 may select a particular frequency channel, and this information (i.e., the frequency channel) is transmitted through the first coil antenna 104-2 to the stylus 108. The particular frequency channel, for example, may be utilized by the stylus 108 in transmitting data such as a user identity or user-fingerprint data, stylus-status information, stylus location, current battery charging status of the stylus, and the like, to the portable device 102. To obtain this data, the stylus 108 may further include various other sensors such as a fingerprint-sensor, accelerometer, heart-rate sensor, battery status sensor, and the like.

When a tip of the stylus 108 engages the touch-sensor screen 106, the NFC transaction or wireless communication between the stylus 108 and the portable device 102 may still continue. That is, the stylus 108 may still transmit data and the data is received by the portable device 102 through its first coil antenna 104-2. The stylus 108, for example, may include the antenna that is utilized when engaging in an NFC communication or transaction with the portable device 102. Furthermore, the stylus 108 may include software, firmware, hardware, or a combination thereof, to engage in NFC-related transactions or wireless communication with the portable device 102.

As shown, the first coil antenna 104-2 or the second coil antenna 104-4 may each include a continuous loop of coil antenna that operates, for example, at about 13.56 MHz or any other frequency for the near field coupling communications. These coil antennas of the portable device 102 may be connected in series and are disposed in a manner that avoids the presence of flux linkages between the two. For example, the first coil antenna 104-2 may be disposed on a display side and face a user (not shown) while the second coil antenna 104-4 may be disposed at a corner or a back-side of the portable device 102. Other examples, such as when the first and second coil antennas are at opposite corners of the portable device 102, may similarly apply.

In an implementation, an NFC module (not shown) may control these coil antennas 104-2 and 104-4 when communicating with the stylus 108. For example, when the processor has selected the particular frequency channel to be used for NFC communications between the stylus 108 and the portable device 102, the NFC module may control the coil antennas to resonate at that particular frequency channel. Although FIG. 1 shows a limited number of coil antennas (i.e., first and second coil antennas 104-2 and 104-4), additional coil antennas may be configured or disposed at different other locations within the portable device 102.

The portable device 102 may include, but is not limited to, an Ultrabook, a tablet computer, a netbook, a notebook computer, a laptop computer, a mobile phone, a cellular phone, a smartphone, a personal digital assistant, a multimedia playback device, a digital music player, a digital video player, a navigational device, a digital camera, and the like.

FIGS. 2a and 2b illustrate an example system and a physical configuration-overview of the stylus 108, respectively, as described in present implementations herein.
As shown, the stylus 108 may be configured to be an independent wireless device by itself. That is, the stylus 108 may have its own processor(s) 200, a storage 202, and applications 204. The stylus 108 may further include an NFC module 206, an optional power storage 208, a sensor 210, and a stylus coil antenna 212. Furthermore, the stylus 108 may include an actuator or a switch 214 that may be utilized to select a present operation of the stylus 108.

For example, pressing the switch 214 once may activate the user-identification feature of the stylus 108. In this example, the sensor 210 may perform a function of scanning and reading user-fingerprints, and the scanned user-fingerprint may be compared to stored fingerprints in the storage 202 to determine user identification and/or authorization. For identified and/or authorized users, the stylus 108 may be utilized to open and/or edit a particular document. Conversely, an already opened document may allow the stylus 108 to perform editing of the document after the user identification and/or authorization has been confirmed.

In another example, pressing the switch 214 twice may allow the stylus 108 to copy data from a screen (not shown) of the portable device 102 by using, for example, a copy-screen tab feature of the touch-sensor screen 106. In this example, the portable device 102 may utilize its NFC communication feature to transfer and store the requested data to the stylus 108. Thereafter, the stylus 108 may paste the data to another portable device (not shown) by using a pen frequency channel configured for the stylus 108, and/or through the same NFC communication mechanism as discussed above. In these two examples, the power storage 208 may continuously harvest charging signals from the magnetic fields generated by the first coil antenna 104-2 of the portable device 102. (A sketch of how such switch presses might be dispatched to applications is shown below.)

In an implementation, the processor 200 may be configured to execute stored instructions or any of a number of applications 204 residing within the storage 202. In this implementation, the processor 200 is configured to control and coordinate the overall operations of the stylus 108. For example, to implement the user-identification feature of the stylus 108, the processor 200 may execute the application 204 that is specifically designed for user identification or user authorization. That is, upon activation of the user-identification feature using the switch 214, the processor 200 runs the application 204 that may direct the sensor 210 to perform fingerprint scanning and reading operations. In other implementations, the sensor 210 may be configured as another user interface (i.e., similar to the switch 214) for the stylus 108.

In another example, the processor 200 may run the software application 204 that offsets hand tremors on the part of the user. In this example, the processor 200 directs the sensor 210 to perform jitter-detection on the hands of the user, and thereafter an accelerometer (i.e., sensor) may be utilized by the processor 200 to improve, for example, the identification of letters or markers that the user may want to write/input at the touch-sensor screen 106.

With continuing reference to FIG. 2a, the storage 202 may be a miniature memory of the stylus 108. For example, the storage 202 may include any form of random access memory (RAM), read-only memory (ROM), flash memory, or the like. In this example, the processor 200 may have direct access to the storage 202.
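A minimal sketch of the switch-press dispatch described above might look like the following. The function names and the press-count mapping are assumptions for illustration; the disclosure does not prescribe a particular implementation.

def run_fingerprint_identification():
    # Pressing once: sensor 210 scans the fingerprint, which is then
    # compared against prints kept in the storage 202.
    print("scanning fingerprint and comparing against stored prints")

def request_screen_copy():
    # Pressing twice: request a copy of the data shown on the
    # touch-sensor screen 106 over the NFC link.
    print("requesting copy of screen data over NFC")

# Hypothetical dispatch table keyed by the number of switch presses.
SWITCH_ACTIONS = {1: run_fingerprint_identification, 2: request_screen_copy}

def on_switch_pressed(press_count: int) -> None:
    action = SWITCH_ACTIONS.get(press_count)
    if action is not None:
        action()

on_switch_pressed(1)   # runs the user-identification application
on_switch_pressed(2)   # runs the copy-screen application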
Coupled to the processor 200 is the NFC module 206, which may be utilized to control the stylus coil antenna 212. For example, the NFC module 206 may direct the stylus coil antenna 212 to operate at a particular frequency channel for touch communications. In this example, the particular frequency channel may be received from the portable device 102 to synchronize their operations. That is, a touch processor (not shown), for example, of the portable device 102 may be configured to select the particular frequency channel and communicate this selected frequency channel to the stylus 108 using the NFC communication channel. Thereafter, the stylus 108 and the main portable device 102 may engage in touch communications through the selected frequency channel. This operation may further save the power storage 208 from draining because there is no need for the stylus 108 to continuously transmit on multiple frequencies to report its location status, battery status, and the like.

The NFC module 206 may further include transceiver circuitry that processes electrical signals (not shown) that may be received through the stylus coil antenna 212. For example, the NFC module 206 may facilitate tuning of the stylus coil antenna 212 for maximum power transfer during transmit or receive operations. In this implementation, the NFC module 206 may be integrated with the stylus coil antenna 212 and/or the processor 200 to form a single module.

In other implementations, the stylus 108 may be configured to receive data shown at the touch-sensor screen 106 every time the switch 214 is pressed a number of times (e.g., twice). In this example, the portable device 102 may be pre-configured to send the screen data once it detects the request/control signal (i.e., pressing the switch twice) from the stylus 108 during NFC communications. In this other implementation, the stylus 108 may retrieve stored data as the need arises. For example, the stored data may be selected from the stylus 108 using a voice-to-text sensor. In this example, the selected data may be communicated back to the portable device 102 for display at the touch-sensor screen 106. The voice-to-text sensor, for example, includes a miniature display screen (not shown) for the user's convenience.

In another implementation still, an NFC tag (not shown) may be integrated into the power storage 208. The NFC tag, for example, may transmit the stylus identification to the portable device 102. In this example, the NFC tag may perform the transmission once the stylus 108 is within the magnetic field-coverage area of the first coil antenna 104-2 and the stylus 108 is powered ON.

With continuing reference to FIG. 2b, the stylus coil antenna 212 is shown to include a continuous loop of coil antenna that is disposed at the front end of the stylus 108. The sensor 210, such as a fingerprint sensor, may be positioned adjacent to the stylus coil antenna 212. Furthermore, the rest of the components as discussed in FIG. 2a above may be located in the circuitry along the main body of the stylus 108. In another implementation, the stylus coil antenna 212 may be disposed at the back-end or along the outer surface of the pen body.

FIG. 3 illustrates an example NFC-power charging arrangement between the power storage 208 and the first coil antenna 104-2.
Although the power storage 208 as described herein may store harvested charging power from the first coil antenna 104-2, the power storage 208 need not store the charging power, because the harvested charging signals are continuously converted and used for present operations of the stylus 108.

As described in present implementations herein, the NFC-power charging arrangement includes the first coil antenna 104-2 of the portable device 102, the stylus coil antenna 212 together with a coil tuning capacitor 300, and the power storage 208 of the stylus 108. The power storage 208 may further include an NFC tag 302, a full-wave rectifier 304, a storing capacitor 306, and an output voltage supply 308 that supplies power to the stylus 108.

When the stylus coil antenna 212 is aligned and directed within the coverage area, for example, of the first coil antenna 104-2, the principle of mutual induction may induce a current 310 to the power storage 208. The induced current 310 is rectified by the full-wave rectifier 304, and the resulting DC voltage may be used to charge the storing capacitor 306. A regulator 312 may then be configured to supply the output voltage supply 308.

Similarly, the principle of mutual induction may facilitate the activation of the stylus 108. At this instance, the NFC tag 302 may modulate the operating frequency of the first coil antenna 104-2 to transmit, for example, the identification of the stylus 108. In another example, the switch 214 of the stylus 108 may be utilized to activate the NFC tag 302. That is, the NFC tag 302 may use the present NFC communications between the first coil antenna 104-2 and the stylus coil antenna 212 to transmit the stylus tag or identification.

FIG. 4 shows an example process chart 400 illustrating an example method for implementing an NFC-based active stylus as implemented at the stylus device side. The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or an alternate method. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the method may be implemented in any suitable hardware, software, firmware, or a combination thereof, without departing from the scope of the invention.

At block 402, detecting presence of a magnetic field by a stylus coil antenna is performed. For example, when the stylus 108 is within a certain distance from a surface of the touch-sensor screen 106, a principle of mutual induction between the first coil antenna 104-2 and the stylus coil antenna 212 may be utilized to detect presence of the magnetic field from the portable device 102. This detection may be utilized to turn ON the stylus 108. In this example, the magnetic field from the first coil antenna 104-2 may be utilized to charge the battery and/or the storing capacitor 306 of the stylus 108. In another example, when the tip of the stylus 108 touches the surface of the touch-sensor screen 106, the portable device 102 may send a control signal to turn ON the stylus 108.

At block 404, receiving of a particular operating frequency channel is performed. For example, upon turning ON of the stylus 108, the stylus 108 is configured to receive the particular operating frequency channel to transmit information or data. The information or data may include user identification, user authorization, stylus location, stylus battery capacity, and the like. In this example, the particular operating frequency channel may be selected by the portable device 102. That is, the portable device 102 may select the operating frequency channel for security purposes and/or to avoid channel interference.
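As a quantitative aside on the FIG. 3 arrangement above: the stylus coil antenna 212 and the coil tuning capacitor 300 form a resonant tank, and harvesting is most efficient when the tank's resonance matches the reader carrier. The relations below are standard circuit theory rather than formulas from the disclosure, and the 13.56 MHz NFC carrier is an assumption about the link:

    f_0 = \frac{1}{2\pi\sqrt{LC}} \approx 13.56\ \text{MHz}, \qquad v_{\mathrm{ind}}(t) = M\,\frac{di_{\mathrm{reader}}(t)}{dt}

where L is the inductance of the stylus coil antenna 212, C is the value of the coil tuning capacitor 300, and M is the mutual inductance to the first coil antenna 104-2. The induced voltage is what the full-wave rectifier 304 converts to the DC that charges the storing capacitor 306.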
At block 406, communicating information or data by the stylus is performed. For example, when the stylus 108 is set to the particular operating frequency, the stylus may send the information or data to the portable device 102 as described above. In this example, the use or operation of the stylus 108 over the touch-sensor screen 106 may run in parallel with the NFC communications between the stylus 108 and the portable device 102.

FIG. 5 shows an example process chart 500 illustrating an example method for implementing an NFC-based active stylus as implemented at the portable device side. The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or an alternate method. Additionally, individual blocks may be deleted from the method without departing from the spirit and scope of the subject matter described herein. Furthermore, the method may be implemented in any suitable hardware, software, firmware, or a combination thereof, without departing from the scope of the invention.

At block 502, activating a coil antenna to radiate a magnetic field is performed. For example, turning ON the portable device 102 may facilitate radiation of a magnetic field in the first coil antenna 104-2. In this example, the magnetic field may create mutual induction with another coil antenna such as the stylus coil antenna 212 as discussed above.

At block 504, receiving an NFC tag signal through the coil antenna is performed. For example, the NFC tag 302 includes the identification of the stylus 108 that is aligned or directed within the coverage area of the first coil antenna 104-2. In this example, the portable device 102, through the first coil antenna 104-2, may receive the NFC tag to verify the identity of the pairing stylus 108.

At block 506, transmitting a particular operating frequency channel through the coil antenna is performed. For example, the portable device 102 includes a processor that is configured to select the particular operating frequency channel for security reasons and/or lower interference purposes. In this example, the selected particular operating frequency channel is transmitted through the first coil antenna 104-2 by the portable device 102. The selected particular operating frequency channel may save the power storage at the stylus 108 from draining quickly, as there is no need for the stylus to continuously transmit multiple frequencies.

At block 508, receiving information or data using the particular operating frequency channel is performed. For example, the stylus 108 transmits a request signal to the portable device 102 to transmit particular information or data as presently shown at the touch-sensor screen 106. In this example, the portable device 102 may communicate the file/files as requested to the stylus 108 for storing.
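The stylus-side flow of FIG. 4, together with the tag transmission that the portable device expects at block 504, can be summarized in the short C sketch below. The nfc_* primitives and all constants are hypothetical stand-ins for an NFC driver; the disclosure describes the sequence, not an API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical NFC primitives standing in for the module 206 driver. */
static bool nfc_field_present(void)       { return true; }                        /* block 402 */
static void nfc_send_tag_id(uint32_t id)  { printf("tag %08x\n", (unsigned)id); } /* checked at block 504 */
static bool nfc_recv_channel(uint8_t *ch) { *ch = 7; return true; }               /* block 404 */
static void nfc_send_data(uint8_t ch, const char *msg)                            /* block 406 */
{
    printf("ch %u: %s\n", (unsigned)ch, msg);
}

int main(void)
{
    uint8_t channel;

    if (!nfc_field_present())        /* block 402: field detected, power up via harvested energy */
        return 0;
    nfc_send_tag_id(0x5717C5u);      /* identify the stylus so the host can verify the pairing */
    if (nfc_recv_channel(&channel))  /* block 404: host assigns the operating frequency channel */
        nfc_send_data(channel, "stylus status: battery OK"); /* block 406 */
    return 0;
}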
The following examples pertain to further embodiments:

Example 1 is a device comprising: a coil antenna configured to detect presence of a magnetic field, wherein the magnetic field facilitates one or more of the following: switching ON, power charging, and establishing of a near field communication (NFC) link in the device; an NFC module configured to process a signal received through the stylus coil antenna, wherein the received signal includes a particular frequency channel for the NFC link; a processor coupled to the NFC module, configured to run a plurality of applications to control an operation of the device, the operation comprises transmitting information or data using the particular frequency channel; and a user interface coupled to the processor, configured to select an operation that corresponds to the plurality of applications.

In Example 2, the device as recited in Example 1, wherein the coil antenna is a stylus antenna.

In Example 3, the device as recited in Example 1, wherein the NFC module utilizes the particular frequency channel to send the information or data through the coil antenna, the information or data comprises at least one of a user-fingerprint, a user identification or authorization, or a stylus-status information.

In Example 4, the device as recited in Example 3 further comprising a sensor configured to perform a function based on the selected operation, the function includes scanning and reading of the user-fingerprint, wherein the user-fingerprint is compared with stored fingerprints to determine the user identification or authorization.

In Example 5, the device as recited in Example 1, wherein the operation comprises transmitting of a signal request to receive a copy of data shown at a screen of another device.

In Example 6, the device as recited in Example 1 further comprising a storage that stores the plurality of applications.

In Example 7, the device as recited in Example 1 further comprising an NFC tag configured to include a unique identification that is transmitted through the coil antenna to identify the device.

In Example 8, the device as recited in Example 1 further comprising a power storage that comprises a full-wave rectifier and a storing capacitor to receive and store the charging power.

In Example 9, the device as recited in Examples 1 to 8, wherein the user interface is a switch or a sensor.

In Example 10, the device as recited in Examples 1 to 8, wherein the coil antenna is disposed at a front-end, back-end, or along an outer body-surface of the device.

Example 11 is an apparatus comprising: a coil antenna configured to receive a magnetic field that induces a charging current; a power storage configured to receive the charging current; an NFC module configured to process a signal that is received through the coil antenna, wherein the received signal includes a particular frequency channel for a near field communications (NFC) link; and a processor coupled to the NFC module, the processor configured to run a plurality of applications to control an operation of the apparatus, the operation comprises transmitting information or data using the particular frequency channel.

In Example 12, the apparatus as recited in Example 11, wherein the coil antenna is a stylus antenna.

In Example 13, the apparatus as recited in Example 11, wherein the NFC module is configured to utilize the particular frequency channel to send the information or data through the coil antenna,
the information or data includes a user-fingerprint, a user identification or authorization, or a stylus-status information.

In Example 14, the apparatus as recited in Example 11 further comprising a switch coupled to the processor, configured to facilitate selection of operations that correspond to the plurality of applications.

In Example 15, the apparatus as recited in Example 14 further comprising a sensor configured to perform a function based on the selected operation, the function comprises scanning and reading of the user-fingerprint, detecting a hand-jitter, or acting as an accelerometer.

In Example 16, the apparatus as recited in any of Examples 11 to 15 further comprising an NFC tag that comprises a unique identification configured to be transmitted through the stylus coil antenna to identify the apparatus.

Example 17 is a method of near field communications (NFC)-based operation in a stylus device, the method comprising: detecting presence of a magnetic field that establishes an NFC link; receiving of a particular operating frequency channel through the NFC link; and communicating information or data through the NFC link.

In Example 18, the method as recited in Example 17, wherein the particular frequency channel is selected based on a determined user identification or authorization.

In Example 19, the method as recited in Example 17 further comprising running one or more applications for the communicating of the information or data through the NFC link.

In Example 20, the method as recited in Example 17 further comprising transmitting of an NFC tag to identify the stylus device.

In Example 21, the method as recited in Example 17 further comprising transmitting of a signal request to receive a copy of data shown at a screen of another device.

In Example 22, the method as recited in any of Examples 17 to 21, wherein the detected magnetic field induces a current that charges the stylus device.
An apparatus for providing tactile feedback is provided. The apparatus includes a touch-sensitive screen that detects a touch input, and a set of electrodes. The apparatus also includes a haptic voltage signal generator that applies a haptic signal to the set of electrodes and modifies the haptic signal based on a displacement current from the touch input to the set of electrodes. The apparatus further includes a haptic feedback controller that determines the displacement current, where the displacement current is an effect of an amplitude of the haptic signal.
CLAIMS

What is claimed is:

1. A method of providing tactile feedback, comprising: applying a haptic signal to a set of electrodes in a device; detecting, at a touch-sensitive screen of the device, a touch input; determining a displacement current from the touch input to the set of electrodes, the displacement current being an effect of an amplitude of the haptic signal; and modifying the haptic signal based on the determined displacement current.

2. The method of claim 1, wherein modifying the haptic signal comprises: modifying the haptic signal as the detected displacement current changes.

3. The method of claim 1, further comprising: when the displacement current indicates a decrease in current magnitude, the modifying the haptic signal includes increasing the amplitude of the haptic signal.

4. The method of claim 1, further comprising: when the displacement current indicates an increase in current magnitude, the modifying the haptic signal includes decreasing the amplitude of the haptic signal.

5. The method of claim 1, further comprising: when the displacement current indicates a constant current magnitude, the modifying the haptic signal includes maintaining the amplitude of the haptic signal.

6. The method of claim 1, further comprising: determining whether the displacement current indicates a change in current; and when the displacement current is determined to indicate a change in current, the modifying the haptic signal includes modifying the amplitude of the haptic signal.

7. The method of claim 1, further comprising: after the modifying the haptic signal, determining a second displacement current into the set of electrodes, the second displacement current being an effect of an amplitude of the modified haptic signal; and modifying the haptic signal based on the determined second displacement current.

8. The method of claim 7, wherein the determining a first displacement current includes determining the first displacement current at a first point in time, and the determining a second displacement current includes determining the second displacement current at a second point in time subsequent to the first point in time.

9. The method of claim 8, wherein a grounding path between a source of the haptic signal and the touch input at the first point in time is different from a grounding path between the source of the haptic signal and the touch input at the second point in time.

10. The method of claim 1, further comprising: detecting a change in a grounding path between a source of the haptic signal and the touch input, wherein when the change is detected, the modifying the haptic signal includes modifying the amplitude of the haptic signal.

11. The method of claim 1, further comprising: applying the modified haptic signal to the set of electrodes.

12. The method of claim 1, wherein the applying a haptic signal includes uniformly applying the haptic signal to the set of electrodes.

13. The method of claim 1, wherein the displacement current is based on the touch input and a grounding path between a source of the haptic signal and the touch input.

14. The method of claim 1, wherein the applying a haptic signal includes generating a potential on the set of electrodes.

15. The method of claim 1, wherein the touch input is from a user's finger, and the applying a haptic signal induces a force on the user's finger.

16. The method of claim 1, further comprising: monitoring touch inputs with the touch-sensitive screen, wherein monitoring includes detecting the touch input.

17.
An apparatus for providing tactile feedback, comprising: a touch-sensitive screen that detects a touch input; a set of electrodes; a haptic voltage signal generator that applies a haptic signal to the set of electrodes and modifies the haptic signal based on a displacement current from the touch input to the set of electrodes; and a haptic feedback controller that determines the displacement current, wherein the displacement current is an effect of an amplitude of the haptic signal.

18. The apparatus of claim 17, wherein the apparatus is a mobile device.

19. The apparatus of claim 18, wherein the mobile device is at least one of a smartphone, tablet, opaque surface, and aid for the visually impaired.

20. The apparatus of claim 17, wherein the touch-sensitive screen is a capacitive touch-sensitive screen.

21. The apparatus of claim 17, wherein the haptic voltage signal generator generates electrical signals.

22. The apparatus of claim 17, wherein the haptic voltage signal generator generates a potential on the set of electrodes.

23. The apparatus of claim 17, wherein the haptic feedback controller controls the amplitude of the haptic signal generated by the haptic voltage signal generator.

24. The apparatus of claim 17, wherein the haptic voltage signal generator is at least one of a transformer and a digital-to-analog converter.

25. The apparatus of claim 17, further comprising: an ammeter that uses a series of resistors to measure current.

26. The apparatus of claim 17, further comprising: a micro-controller including the haptic feedback controller.

27. An apparatus for providing tactile feedback, comprising: means for applying a haptic signal to a set of electrodes in a device; means for detecting a touch input; means for determining a displacement current from the touch input to the set of electrodes, the displacement current being an effect of an amplitude of the haptic signal; and means for modifying the haptic signal based on the determined displacement current.

28. A computer program product in a device, comprising: a computer-readable medium comprising code for: applying a haptic signal to a set of electrodes in a device; detecting a touch input; determining a displacement current from the touch input to the set of electrodes, the displacement current being an effect of an amplitude of the haptic signal; and modifying the haptic signal based on the determined displacement current.
FEEDBACK FOR GROUNDING INDEPENDENT HAPTIC ELECTROVIBRATION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Nonprovisional Application No. 13/973,749, filed on August 22, 2013, which is herein incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] Embodiments disclosed herein are generally directed to haptic electrovibration and feedback.

BACKGROUND

[0003] Electrovibration-based haptics may refer to the use of an electrostatic force to provide one or more sensations to a user as, for example, the user's finger slides across the surface of a touch-sensitive screen. In an example, the user slides her finger across a surface of a touch-sensitive screen that employs electrodes, and an electric potential is applied to the electrodes. The quality of the grounding path between the source of the electric potential and the user significantly affects the quality and intensity of the haptic experience. It is therefore difficult to provide a consistent haptic experience to the user unless the system has an explicit ground connection with the user (e.g., there is a consistent grounding path to the user at all times).

[0004] A conventional solution to provide a consistent haptic experience to the user is to use wrist straps to ground the user. It is inconvenient, however, to require the user to wear additional equipment that is connected to the device. Another conventional technique is to require another finger or part of the user's hand to touch the device. This, however, may force the device to be held in a specific way, and again is inconvenient. Another conventional technique is to have no ground connection. This works if the signal is strong enough; however, this may still result in an inconsistent haptic experience for the user.

[0005] The user experience may also be different depending on factors that affect the grounding path including, for example, how the user is standing, whether the user is connected directly to the device, and whether the user is in contact with someone else.

SUMMARY

[0006] Methods, systems, and techniques are disclosed that enable an electrovibration-based haptic system to deliver a consistent haptic experience to the user. The present disclosure describes methods, systems, and techniques to provide a consistent haptic sensation to the user regardless of whether the system has or does not have a consistent ground connection with the user.

[0007] Consistent with some embodiments, there is provided an apparatus for providing haptic feedback. The apparatus includes a touch-sensitive screen capable of detecting a touch input and a set of electrodes. The apparatus also includes a haptic voltage signal generator capable of applying a haptic signal to the set of electrodes and capable of modifying the haptic signal based on a displacement current from the touch input to the set of electrodes. The apparatus also includes a haptic feedback controller capable of determining the displacement current. The displacement current is an effect of an amplitude of the haptic signal.

[0008] Consistent with some embodiments, there is provided a method of providing haptic feedback. The method includes applying a haptic signal to a set of electrodes in a device. The method also includes detecting, at a touch-sensitive screen of the device, a touch input. The method also includes determining a displacement current from the touch input to the set of electrodes. The displacement current is an effect of an amplitude of the haptic signal.
The method also includes modifying the haptic signal based on the determined displacement current.

[0009] Consistent with some embodiments, an apparatus for providing tactile feedback includes means for applying a haptic signal to a set of electrodes. The apparatus also includes means for detecting a touch input. The apparatus also includes means for determining a displacement current from the touch input to the set of electrodes. The displacement current is an effect of an amplitude of the haptic signal. The apparatus also includes means for modifying the haptic signal based on the determined displacement current.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] FIG. 1 is a diagram illustrating a user holding a device, consistent with some embodiments.

[0011] FIG. 2 is a diagram illustrating a device for providing haptic feedback to the user, consistent with some embodiments.

[0012] FIG. 3 is a diagram illustrating a method of providing haptic feedback to the user, consistent with some embodiments.

[0013] FIG. 4 is a diagram illustrating a platform capable of providing haptic feedback to the user, consistent with some embodiments.

[0014] In the drawings, elements having the same designation have the same or similar functions.

DETAILED DESCRIPTION

[0015] In the following description specific details are set forth describing certain embodiments. It will be apparent, however, to one skilled in the art that the disclosed embodiments may be practiced without some or all of these specific details. The specific embodiments presented are meant to be illustrative, but not limiting. One skilled in the art may realize other material that, although not specifically described herein, is within the scope and spirit of this disclosure.

[0016] FIG. 1 is a diagram 100 illustrating a user 102 holding a device 104, consistent with some embodiments. As shown in FIG. 1, device 104 includes a touch-sensitive screen 106 that a user can use to interact with the device. Device 104 may include a haptic feedback system that provides user 102 with a consistent haptic experience.

[0017] Haptic electrovibration can convey a variety of haptic sensations via touch-sensitive screen 106 of device 104 to user 102 without having to physically change touch-sensitive screen 106. Haptic electrovibration may stimulate the user's nerves so that touch-sensitive screen 106 feels differently. Haptic signals may be generated in an application-specific manner depending on, for example, the application or operating system (OS) developer and the desired effect of the user's sensation. The sensations may include, for example, device 104 having rough or slimy edges.

[0018] FIG. 2 is a diagram illustrating a device for providing haptic feedback to the user, consistent with some embodiments. The device may be device 104 in FIG. 1. As shown in FIG. 2, device 104 includes touch-sensitive screen 106 that a user's finger 208 may touch to interact with the device, and a set of electrodes 204. Device 104 also includes a haptic feedback system 205 including a haptic voltage signal generator 206, current sensing device 214, and haptic feedback controller 216.

[0019] Haptic voltage signal generator 206 applies a haptic signal to set of electrodes 204. Set of electrodes 204 may be coated with a thin layer of insulator. In some embodiments, set of electrodes 204 may be a layer of transparent electrodes. Set of electrodes 204 in FIG. 2 may represent a set of electrodes that is distributed two-dimensionally about touch-sensitive screen 106.
In an embodiment, set of electrodes 204 includes only a single layer of electrodes and is in a grid pattern that is not overlapping. In an embodiment, set of electrodes 204 includes a single electrode.

[0020] The cause of the haptic sensation is a measurable electric potential on set of electrodes 204 under the surface insulating layer. The current flows instantaneously on both sides of the interface including finger 208 and haptic voltage signal generator 206, but does not flow through the insulator. The current flows back and forth onto set of electrodes 204 and finger 208 on both sides, charging and discharging at a rate based on, for example, the user's touch and how the user is grounded. As finger 208 touches the surface of touch-sensitive screen 106, set of electrodes 204 and finger 208 form a parallel-plate capacitor if finger 208 is grounded.

[0021] Ideally, if the user is connected to haptic feedback system 205's ground, the voltage on the fingertip is around 0V (with respect to set of electrodes 204). Because such a connection does not exist, fingertip 208 may then be considered to be "floating" with respect to set of electrodes 204, and the voltage difference between fingertip 208 and set of electrodes 204 varies and may be difficult to predict.

[0022] Further, the quality of the grounding path between the source of the electric potential and the user may affect the quality and intensity of the haptic experience. For example, it may be difficult to provide a consistent haptic experience to the user unless there is a consistent ground path to the user at all times.

[0023] Depending on various factors, such as how the user holds device 104, a position of the user (e.g., whether the user is sitting or standing), the user's grounding (e.g., whether the user is touching device 104 on the ground or whether the user is wearing shoes or is barefoot), and the manner in which the user is holding device 104 (e.g., whether the user is holding device 104 with one or two hands and the orientation in which the user is holding device 104), the electrical properties between the user and device 104 vary. Accordingly, the amount of current and its flow into set of electrodes 204 may be different, which affects the user's haptic experience. The current reflects the intensity of the sensation that the user feels. When the user is isolated from haptic feedback system 205, the user's experience is different based on, for example, whether or not the user has contact with a ground (e.g., wall or floor). Consequently, it may be difficult to generate a consistent tactile sensation for the user when the system does not have an explicit ground connection with the user. The present disclosure provides techniques to provide a consistent tactile sensation to the user.

[0024] A user's finger 208 may touch touch-sensitive screen 106. In some embodiments, touch-sensitive screen 106 may be a capacitive touch-sensitive screen that includes a layer of capacitive material to hold an electrical charge. In an example, haptic voltage signal generator 206 generates an electrical signal and applies a uniform potential on set of electrodes 204 across the dielectric in the capacitor such that the current flows into and out of the capacitor. The haptic signal may be generated in various ways. In some embodiments, haptic voltage signal generator 206 is a transformer or a digital-to-analog converter.
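The dependence just described can be made explicit with the usual first-order parallel-plate model; the formula is a textbook approximation, not one the disclosure states:

    F \approx \frac{\varepsilon_0 \varepsilon_r A\, V^2}{2 d^2}

where V is the (floating, hard-to-predict) voltage between fingertip 208 and set of electrodes 204, d and \varepsilon_r are the thickness and relative permittivity of the insulating layer, A is the contact area, and \varepsilon_0 is the vacuum permittivity. The quadratic dependence on V is why the uncontrolled fingertip potential dominates the perceived intensity, which motivates the displacement-current feedback developed below.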
[0025] Current sensing device 214 may be capable of measuring current flowing from haptic voltage signal generator 206 to set of electrodes 204. The unit of measurement for the current may be microamps. In some embodiments, current sensing device 214 may include an ammeter that measures the electric current flowing from haptic voltage signal generator 206 to set of electrodes 204. In some embodiments, current sensing device 214 may include a series of resistors to measure current. For example, based on Ohm's Law, the voltage across the series of resistors may be proportional to a current flowing through current sensing device 214 such that the current can be determined based on a known voltage and a known resistance of the series of resistors.

[0026] When haptic voltage signal generator 206 applies a voltage to set of electrodes 204 and the user's finger 208 touches touch-sensitive screen 106, a force may be induced on finger 208 and an electric field 209 is created. In an electrovibration-based haptic system, device 104 may send a current to the user through an object the user is touching (e.g., touch-sensitive screen 106) to ground. The strength of electrovibration is proportional to the electrostatic force that acts on finger 208 by set of electrodes 204. This electrostatic force may be affected mainly by a voltage (e.g., haptic signal) between the user's touch (e.g., fingertip) and set of electrodes 204, the insulator thickness, and the dielectric of the insulator material. While the thickness and dielectric properties of the insulator material may remain constant, the voltage between the user's touch and set of electrodes 204 may vary. The difference depends on the potential on set of electrodes 204 and the potential on the user's touch. In some embodiments, the potential on set of electrodes 204 is controlled directly by haptic voltage signal generator 206.

[0027] The potential on finger 208, however, may be much more complicated. Friction force 212 modulates as electric field 209 changes across the insulator and the amount of charge changes at the point of contact of the user's touch. The modulation of friction force 212 may be due to the haptic signal and the changing energy voltages on set of electrodes 204 and how the current flows through finger 208.

[0028] To provide the user with a consistent haptic experience, it may be desirable to stabilize the modulation of friction force 212 so that it is constant. If the tactile sensation provided by friction force 212 that the user is feeling can be determined, it may be used to control the strength of the haptic signal, and a more consistent sensation experience may then be delivered to the user. The haptic sensation experienced by the user may depend on how friction force 212 appears over time and as finger 208 moves over touch-sensitive screen 106. Friction force 212 may provide different sensations to different users based on various factors, such as how the user's brain works and his/her physiology. Thus, the effect of friction force 212 on individual users may be difficult to quantitatively measure.

[0029] The modulation of friction force 212 may be a result of a user's touch and may instead be determined by observing a displacement current from the user's touch input to set of electrodes 204. Haptic voltage signal generator 206 may apply a haptic signal to set of electrodes 204, and the displacement current may be an effect of an amplitude of the applied haptic signal.
Haptic feedback system 205 may infer what the user is experiencing based on the determined displacement current and modify the haptic signal accordingly. For example, the displacement current measurement indirectly informs haptic feedback system 205 of the voltage between finger 208 and set of electrodes 204. As such, haptic feedback system 205 controls the voltage on set of electrodes 204 based on the displacement current to provide the user with a consistent haptic experience.

[0030] The magnitude of the displacement current into set of electrodes 204 is related to the sensation strength felt by the user. In an example, the user may feel a sensation over a range of frequencies from about 40 to 300 Hertz. By measuring the displacement current from device 104 to the user, the haptic signal strength generated by haptic voltage signal generator 206 may be adjusted to deliver a consistent sensation to the user.

[0031] In some embodiments, to determine the displacement current from the user's touch input to set of electrodes 204, haptic feedback controller 216 determines the displacement current into set of electrodes 204. By measuring the potential of set of electrodes 204 as, for example, the grounding path changes between the user and device 104, it may be determined whether device 104 is adapting to the grounding path changes.

[0032] Haptic feedback controller 216 may use current measurements from current sensing device 214 as input to provide a consistent haptic experience to the user. Haptic feedback controller 216 may control the haptic signal as a function of current measurements over time. Depending on various factors, such as how the user is grounded, different amounts of current flow into and out of set of electrodes 204. Haptic feedback controller 216 captures changes in the user's grounding and uses the current measurements to change the voltage signal generated by haptic voltage signal generator 206. This feedback may be useful both for a system that has explicit grounding to the user and for a system that does not. The perceived effect may be maintained irrespective of changes in device 104 to user grounding.

[0033] Haptic feedback system 205 may monitor the user's interaction with touch-sensitive screen 106. Monitoring the user's interaction with touch-sensitive screen 106 may include, for example, detecting the user's touch input or detecting a grounding path change from the user to device 104.

[0034] Haptic voltage signal generator 206 modifies the haptic signal based on the determined displacement current. The modified haptic signal may be applied to set of electrodes 204. Haptic feedback controller 216 dynamically controls haptic voltage signal generator 206 with respect to controlling and modifying the voltage generated by haptic voltage signal generator 206. Haptic feedback controller 216 modifies the voltage generated by haptic voltage signal generator 206 based on the displacement current, thus providing the user with a consistent haptic experience when the user is touching touch-sensitive screen 106.

[0035] Haptic feedback system 205 may control a user's haptic experience based on an observed displacement current from the device to the user and based on observing the user's interaction with the device (e.g., by measuring the finger's vibration against the device or the electrical properties of the system). Haptic feedback system 205 includes a feedback loop, where the amplitude of the haptic signal is modified until the desired displacement current is achieved.
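A minimal sketch of that feedback loop in C follows. The policy comes from paragraphs [0036]-[0039] below (raise the amplitude when the displacement current falls, lower it when the current rises, hold it when steady); the target, tolerance, gain, and the stubbed sensor model are invented for illustration and are not values from the disclosure.

#include <math.h>
#include <stdio.h>

/* Stub for current sensing device 214: models a grounding path that
 * passes roughly 0.4 uA per volt of drive. Purely illustrative. */
static double measure_displacement_current_ua(double amplitude)
{
    return 0.4 * amplitude;
}

int main(void)
{
    const double target_ua = 2.0;   /* desired sensation strength (assumed) */
    const double tol_ua    = 0.05;  /* dead band treated as "constant magnitude" */
    const double gain      = 0.5;   /* volts of correction per microamp of error */
    double amplitude = 1.0;         /* haptic signal amplitude, in volts */

    for (int tick = 0; tick < 20; ++tick) {
        double i_ua  = measure_displacement_current_ua(amplitude);
        double error = target_ua - i_ua;

        if (fabs(error) <= tol_ua)  /* current steady: hold the amplitude */
            continue;
        amplitude += gain * error;  /* current low: raise; current high: lower */
        printf("tick %2d: I = %.2f uA, amplitude = %.2f V\n", tick, i_ua, amplitude);
    }
    return 0;
}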
[0036] The displacement current may be used to determine a change of magnitude in the current. Consistent with some embodiments, haptic feedback controller 216 determines whether the displacement current indicates a change in current. In an example, haptic feedback system 205 may detect a change in a grounding path between a source of the haptic signal and the user (e.g., the touch input). When the change is detected, the amplitude of the haptic signal generated by haptic voltage signal generator 206 may be modified. The displacement current may be affected by the user's touch and/or a grounding path between a source of the haptic signal and the user. In some embodiments, when the displacement current is determined to indicate a change in current, haptic feedback controller 216 sends an indication to haptic voltage signal generator 206 to modify the amplitude of the haptic signal. The haptic signals may be implemented as a continuous-time control loop.

[0037] In an example, when the displacement current indicates a decrease in current magnitude, haptic feedback controller 216 sends an indication to haptic voltage signal generator 206 to increase the amplitude of the haptic signal. If the displacement current indicates a decrease in the current magnitude, then the user's tactile sensation has also decreased. To maintain the consistent haptic experience for the user, haptic voltage signal generator 206 responds to this change by increasing the amplitude of the signal to provide the user with a stronger sensation, thus improving the consistency of the user's tactile experience.

[0038] In another example, when the displacement current indicates an increase in current magnitude, haptic feedback controller 216 sends an indication to haptic voltage signal generator 206 to decrease the amplitude of the haptic signal. If the displacement current indicates an increase in the current magnitude, then the user's tactile sensation has also increased. To maintain the consistent haptic experience for the user, haptic voltage signal generator 206 responds to this change by decreasing the amplitude of the haptic signal to provide the user with a weaker sensation, thus improving the consistency of the user's tactile experience.

[0039] In another example, when the displacement current indicates a constant current magnitude, haptic feedback controller 216 sends an indication to haptic voltage signal generator 206 to maintain the amplitude of the haptic signal such that its amplitude remains approximately equal (or exactly equal) to its present amplitude. If the displacement current indicates a constant current magnitude, then the user's tactile sensation has remained consistent. To maintain this consistent haptic experience for the user, haptic voltage signal generator 206 maintains the amplitude of the haptic signal at approximately its current (or the same) value. Accordingly, the user's tactile sensation may remain relatively consistent.

[0040] In some embodiments, after haptic voltage signal generator 206 modifies the amplitude of the haptic signal based on a first displacement current, haptic feedback controller 216 determines a second displacement current into set of electrodes 204. The second displacement current is an effect of an amplitude of the modified haptic signal.
Determining the first displacement current may include determining the first displacement current at a first point in time, and determining the second displacement current may include determining the second displacement current at a second point in time subsequent to the first point in time.

[0041] Haptic voltage signal generator 206 may modify the amplitude of the haptic signal based on the determined second displacement current. When the second displacement current indicates a decrease in current magnitude, haptic voltage signal generator 206 may increase the amplitude of the haptic signal. Further, when the second displacement current indicates an increase in current magnitude, haptic voltage signal generator 206 may decrease the amplitude of the haptic signal. Additionally, when the second displacement current indicates a constant current magnitude, haptic voltage signal generator 206 may maintain the present (or approximately equal) amplitude of the haptic signal.

[0042] Haptic feedback system 205 includes a feedback loop, where the amplitude of the haptic signal may be continuously modified until the desired displacement current is achieved (e.g., when the displacement current indicates that the magnitude of the current is relatively consistent). In an example, haptic feedback controller 216 may continue to send indications to haptic voltage signal generator 206 to modify the haptic signal by varying its amplitude, and the modification may be based on additional displacement currents. When the desired displacement current is achieved, haptic feedback controller 216 may send an indication to haptic voltage signal generator 206 to continue to generate haptic signals at that particular amplitude.

[0043] Device 104 may continue to monitor the user's interaction with touch-sensitive screen 106. Monitoring the user's interaction with touch-sensitive screen 106 may include, for example, detecting the user's touch input or detecting a grounding path change from the user to device 104. Accordingly, the amplitude of the haptic signals generated by haptic voltage signal generator 206 may subsequently change (e.g., be increased or decreased). For example, haptic feedback controller 216 may determine a displacement current that indicates a change in the magnitude of the current and may send an indication to haptic voltage signal generator 206 to modify the amplitude of the haptic signal based on the determined displacement current, thus continuously providing a consistent haptic experience to the user.

[0044] In some embodiments, device 104 includes a micro-controller (not shown) that includes haptic feedback system 205. In some embodiments, device 104 is a mobile device. The mobile device may be, for example, a smartphone, tablet, opaque surface, or aid for the visually impaired.

[0045] FIG. 3 is a diagram illustrating a method 300 of providing tactile feedback to the user, consistent with some embodiments. Method 300 is not meant to be limiting and may be used in other applications.

[0046] Method 300 includes steps 310-340. In a step 310, a haptic signal is applied to a set of electrodes in a device. In an example, haptic voltage signal generator 206 applies a haptic signal to set of electrodes 204 in device 104. In a step 320, a touch input is detected at a touch-sensitive screen of the device. In an example, touch-sensitive screen 106 detects a touch input (e.g., a user's touch input).
In a step 330, a displacement current from the touch input to the set of electrodes is determined, where the displacement current is an effect of an amplitude of the haptic signal. In an example, haptic feedback controller 216 determines a displacement current from the touch input (e.g., a user's touch input) to set of electrodes 204, where the displacement current is an effect of an amplitude of the haptic signal. In a step 340, the amplitude of the haptic signal is modified based on the determined displacement current. In an example, haptic voltage signal generator 206 modifies the haptic signal based on the determined displacement current. The haptic signal may be implemented as a continuous-time control loop.

[0047] It is also understood that additional method steps may be performed before, during, or after steps 310-340 discussed above. It is also understood that one or more of the steps of method 300 described herein may be omitted, combined, or performed in a different sequence as desired.

[0048] FIG. 4 is a diagram illustrating a platform capable of providing haptic feedback to the user, consistent with some embodiments.

[0049] Device 104 may run a platform 400. Platform 400 includes a user interface 402 that is in communication with a control unit 404, e.g., control unit 404 accepts data from and controls user interface 402. User interface 402 includes display 406, which includes a means for displaying graphics, text, and images, such as an LCD or LPD display, and may include a means for detecting a touch of the display, such as touch sensors 408 (e.g., capacitive touch sensors).

[0050] User interface 402 may further include a keypad 410 or other input device through which the user can input information into the platform 400. If desired, keypad 410 may be obviated by integrating a virtual keypad into display 406. It should be understood that with some configurations of platform 400, portions of user interface 402 may be physically separated from control unit 404 and connected to control unit 404 via cables or wirelessly, for example, in a Bluetooth headset. Touch sensor 412 may be used as part of user interface 402 by detecting a touch input from a user via display 406.

[0051] Platform 400 may include means for applying a haptic signal to a set of electrodes. Platform 400 may further include a means for determining a displacement current from the user's touch input to the set of electrodes, the displacement current being an effect of an amplitude of the haptic signal. Control unit 404 accepts and processes data from user interface 402 and touch sensor 412 and controls the operation of the devices, including the generation and modification of haptic signals. Platform 400 may further include means for modifying the haptic signal based on the determined displacement current, and thus serves as a means for providing haptic feedback to the user.

[0052] Control unit 404 may be provided by one or more processors 420 and associated memory 422, hardware 424, software 426, and firmware 428. Control unit 404 includes a means for controlling display 406, means for controlling touch sensors 412, and means for controlling the haptic signals, illustrated as a display controller 430, touch sensor controller 432, and haptic feedback system 205, respectively.
Display controller 430, touch sensor controller 432, and haptic feedback system 205 may be implemented in processor 420, hardware 424, firmware 428, or software 426, e.g., computer-readable media stored in memory 422 and executed by processor 420, or a combination thereof. Display controller 430, touch sensor controller 432, and haptic feedback system 205 nevertheless are illustrated separately for clarity.

[0053] As discussed above and further emphasized here, FIGs. 1-4 are merely examples that should not unduly limit the scope of the claims. For example, although a haptic system including a display and a touch sensor is illustrated in FIG. 4, this is not intended to be limiting. It will be understood that a haptic system including only a set of electrodes (e.g., a layer of electrodes) and a layer of insulator on top of the set of electrodes is within the scope of the present disclosure. Further, the haptic feedback system may or may not use the user's touch input information to modify the haptic signal having a given amplitude. In an example, the haptic feedback system does not use the user's touch input information and may modify the haptic signal having the given amplitude based on reading the current from the set of electrodes.

[0054] It will also be understood as used herein that processor 420 can, but need not necessarily, include one or more microprocessors, embedded processors, controllers, application specific integrated circuits (ASICs), digital signal processors (DSPs), graphics processing units (GPUs), and the like. The term processor is intended to describe the functions implemented by the system rather than specific hardware. Moreover, as used herein the term "memory" refers to any type of computer storage medium, including long term, short term, or other memory associated with the platform, and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.

[0055] The methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in hardware 424, firmware 428, software 426, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.

[0056] For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in memory 422 and executed by the processor 420. Memory may be implemented within the processor unit or external to the processor unit.

[0057] For example, software 426 may include program codes stored in memory 422 and executed by processor 420 and may be used to run the processor and to control the operation of platform 400 as described herein.
A program code stored in a computer-readable medium, such as memory 422, may include program code to apply a haptic signal to a set of electrodes in a device, detect a user's touch input, determine a displacement current from the user's touch input to the set of electrodes, the displacement current being an effect of an amplitude of the haptic signal, and modify the haptic signal based on the determined displacement current. The program code stored in a computer-readable medium may additionally include program code to cause the processor to control any operation of platform 400 as described further below.

[0058] If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0059] One skilled in the art may readily devise other systems consistent with the disclosed embodiments, which are intended to be within the scope of this disclosure.
A computer system has multiple performance states. The computer system periodically determines utilization information for the computer system and adjusts the performance state according to the utilization information. If a performance increase is required, the computer system always goes to the maximum performance state. If a performance decrease is required, the computer system steps the performance state down to a next lower performance state.
What is claimed is:

1. A method of managing power consumption in a computing system having a plurality of performance states, including a maximum performance state and a plurality of other performance states that provide successively less performance capability for an integrated circuit, the method comprising:
determining utilization of the integrated circuit; and
each time the computing system determines that a higher performance state is required based on the determined utilization while in each of the other performance states, changing to a predetermined performance state, skipping all intermediate performance states between a current performance state and the predetermined performance state.

2. The method as recited in claim 1 wherein the predetermined performance state is the maximum performance state.

3. The method as recited in claim 1 wherein the predetermined performance state is a near maximum performance state.

4. The method as recited in claim 1 further comprising:
comparing the determined utilization to a threshold utilization value to determine if a higher performance state is required;
comparing the integrated circuit utilization to a second threshold utilization value; and
if the integrated circuit utilization is below the second threshold utilization value, always entering a next lower performance state as a next performance state.

5. The method as recited in claim 4 wherein the performance state is lowered by reducing at least one of the voltage and frequency.

6. The method as recited in claim 1 further comprising:
comparing the determined utilization to a threshold utilization value to determine if a higher performance state is required;
comparing the integrated circuit utilization to a second threshold utilization value; and
if the integrated circuit utilization is below the second threshold utilization value, entering a lower performance state as a next performance state, the lower performance state being determined according to integrated circuit utilization.

7. The method as recited in claim 1 wherein the performance state is reduced by reducing both voltage and clock frequency of the integrated circuit.

8. The method as recited in claim 1 wherein determining the utilization is done periodically.

9. The method as recited in claim 1 wherein the integrated circuit includes a central processing unit.

10. A computing system comprising:
an integrated circuit having multiple performance states;
means for determining utilization of the integrated circuit; and
means for changing, while in each of the performance states other than a maximum performance state, from a current performance state to the maximum performance state, skipping all intermediate performance states between the current performance state and the maximum performance state, each time the computing system determines that a higher performance is required based on the determined utilization.

11. The computing system as recited in claim 10 further comprising:
means for determining that the utilization is below a second threshold value and for always changing operation of the integrated circuit from the current performance state to a next lowest performance state in response to a determination that the utilization is below a second threshold utilization value.
12. The computing system as recited in claim 10 further comprising:
means for determining that the utilization is below a second threshold value and for changing operation of the integrated circuit from the current performance state to a lower performance state in response to a determination that the utilization is below a second threshold utilization value, the lower performance state being determined according to the integrated circuit utilization.
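Read together, the claims describe an asymmetric governor: jump straight to the maximum (or a predetermined near-maximum) state whenever utilization crosses the upper threshold, and step down one state at a time when it falls below the lower threshold. The C sketch below is one illustrative reading of that policy; the state table, threshold percentages, and sample utilization values are invented for the example and do not come from the claims.

#include <stdio.h>

enum { NUM_PSTATES = 4 };           /* index 0 = maximum performance state */

/* Hypothetical frequency/voltage table; lowering the state reduces both. */
static const struct { int mhz; int mv; } pstate[NUM_PSTATES] = {
    { 1000, 1400 }, { 800, 1300 }, { 600, 1200 }, { 300, 1050 },
};

static int next_pstate(int current, int utilization_pct)
{
    if (utilization_pct > 85)       /* first threshold exceeded */
        return 0;                   /* go to maximum, skipping all intermediates */
    if (utilization_pct < 40 && current < NUM_PSTATES - 1)
        return current + 1;         /* below second threshold: one step down */
    return current;                 /* otherwise keep the current state */
}

int main(void)
{
    int state = 2;
    int samples[] = { 30, 20, 90, 35, 25 };  /* periodic utilization readings */

    for (int i = 0; i < 5; ++i) {
        state = next_pstate(state, samples[i]);
        printf("util %2d%% -> P%d (%d MHz, %d mV)\n",
               samples[i], state, pstate[state].mhz, pstate[state].mv);
    }
    return 0;
}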
RELATED APPLICATION(S)

This application claims the benefit under 35 U.S.C. § 119(e) of provisional application No. 60/287,897, filed May 1, 2001, which is incorporated herein by reference in its entirety.

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to computer systems and more particularly to power management of such systems.

2. Description of the Related Art

Power consumption and associated performance and thermal issues are considerations for every computer system design. For example, a conventional notebook computer (also commonly referred to as a laptop or portable computer) has power and thermal constraints that cause it to operate at performance states below an equivalent desktop computer.

Many power saving techniques have been introduced to try to mitigate the impact of thermal and battery power constraints. The frequency of operation (clock frequency) of the processor and its operating voltage determine its power consumption. Since power consumption and therefore heat generation are roughly proportional to the processor's frequency of operation, scaling down the processor's frequency has been a common method of staying within appropriate power limitations. Microprocessors utilized in mobile applications, i.e., those used in battery powered systems, are particularly sensitive to power considerations and therefore generally require the lowest supply voltage that can achieve the rated clock speed. That is in part due to the small, densely packed system construction that limits the ability of the mobile computer system to safely dissipate the heat generated by computer operation.

A common power management technique called "throttling" prevents the processor from overheating by temporarily placing the processor in a stop grant state. During the stop grant state the processor does not execute operating system or application code and typically has its clocks gated off internally to reduce power consumption. Throttling is an industry standard method of reducing the effective frequency of processor operation and correspondingly reducing processor power consumption by using a clock control signal (e.g., the processor's STPCLK# input) to modulate the duty cycle of processor operation. A temperature sensor monitors the processor temperature to determine when throttling is needed. Throttling continuously stops and starts processor operation and reduces the effective speed of the processor, resulting in reduced power dissipation and thus lower processor temperature.

Referring to FIG. 1, one prior art system capable of implementing throttling is illustrated. Processor (CPU) 101 receives voltage 102 from voltage regulator 103. The voltage regulator is controlled by voltage identification (VID) signals 104 which are set by system jumper settings 105. A clock multiplier value 107 (bus frequency (BF)[2:0]), supplied from system jumper settings 105, is supplied to CPU 101. CPU 101 multiplies a received bus clock 109 by the multiplier value 107 to generate the core clocks for the processor.

CPU 101 receives a STPCLK# (the # sign indicates the signal is active low) input, which is used to temporarily suspend core clock operation and conserve power. An asserted STPCLK# signal results in the processor entering a stop grant state. In that state, execution of operating system (OS) and application code is stopped, and the core clocks are typically stopped, although some minimum logic including clock multiplier logic may still operate.
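As a worked illustration of the core-clock derivation above (the numeric values are assumptions for the example; the source gives none):

    f_{\text{core}} = f_{\text{bus}} \times \text{BF}, \qquad \text{e.g.,}\ 100\ \text{MHz} \times 10 = 1\ \text{GHz}

so a BF[2:0] multiplier value 107 of 10 applied to a hypothetical 100 MHz bus clock 109 would yield a 1 GHz core clock.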
In that state, execution of operating system (OS) and application code is stopped, and the core clocks are typically stopped, although some minimum logic, including clock multiplier logic, may still operate.

Appropriately monitoring and controlling the processor's operating parameters is important to optimizing performance and battery life. Power management in older personal computer systems was typically implemented using micro-controllers and/or proprietary use of the system management interrupt (SMI). Current x86 based computer systems utilize an industry supported power management approach described in the Advanced Configuration and Power Interface Specification (ACPI). ACPI is an operating system (OS) controlled power management scheme that uses features built into the Windows 9x and Windows NT or other compatible operating systems. It defines a standard interrupt (System Control Interrupt, or SCI) that handles all ACPI events. Devices generate system control interrupts to inform the OS about system events.

As part of that power management approach, ACPI specifies sleep and suspend states. Sleep states temporarily halt processor operation, and operation can be restored in a few milliseconds. A computer enters the sleep state when internal activity monitors indicate no processing is taking place. When a keystroke is entered, a mouse moves, or data is received via a modem, the processor wakes up. Suspend states shut down more of the subsystems (e.g., display or hard drive) and can take a few seconds for operation to be restored. Suspend states may copy the present context of the system (sufficient for the computer to resume processing the application(s) presently opened) into memory (suspend to RAM) or to the hard drive (suspend to disk), and may also power down peripherals.

For example, in a word processing application, a processor will do a brief burst of work after each letter is typed, then its operation is stopped until the next keystroke. Additionally, peripheral devices may be turned off to obtain more power savings. For example, the computer's hard drive may be suspended after a certain period of inactivity until it is needed again. If the system detects another period of inactivity, e.g., a few minutes, the display may be turned off. Such techniques are useful in conserving power, especially in battery-powered systems, and, in the case of the processor, in reducing the amount of heat that must be dissipated. It is also common practice to use a cooling fan to increase the amount of heat removed from the system, lower processor temperature, and prevent damage to the system.

While the ACPI environment provides a number of mechanisms to deal with thermal and power issues, it fails to provide a sophisticated power management capability that can satisfactorily reduce power consumption in computer systems. While power consumption issues are particularly important for small portable computers, they are important for all types of computers. For example, while battery life may not be a consideration for desktop computers, thermal considerations are still an important criterion. In particular, the hotter a desktop computer runs, the more likely fans are turned on to try to cool the processor, which results in fan noise or frequent cycling of the fans that may be objectionable to the computer user. In addition, saving power can have real economic benefits.

Further, traditional throttling techniques have limitations for certain types of applications.
More particularly, throttling has a time overhead associated with it that may disallow its use for some real-time applications (e.g., a soft modem). Thus, although throttling can achieve an "effective frequency", an effective frequency is not always as useful as an actual frequency. For example, assume legacy power management techniques are throttling a 1 GHz CPU down to an "effective speed" of 300 MHz. The latency (actual stopped time and switching time) involved in throttling can cause a CPU having an "effective speed" of 300 MHz to be unable to satisfactorily support a real-time application, while a processor actually running at 300 MHz could properly support the application. Thus, there is a difference between actual and effective frequencies for certain applications.

In view of the above considerations, it would be desirable to save power in computer systems, such as desktop systems or portable systems, without affecting the performance perceived by the user. In order to do that, it would be desirable for power management techniques to determine what performance states were required and adapt power levels to meet the performance requirements. Those and other improvements in power management are desirable to more effectively provide high performance in conjunction with effective power management.

SUMMARY OF THE INVENTION
Accordingly, in one embodiment, the invention provides a computer system that has multiple performance states. The computer system periodically determines the utilization information for the processor and adjusts the performance state according to the utilization information. If a performance increase is required, the computer system goes to the maximum performance state (or a near-maximum state) rather than the next higher state. If a performance decrease is required, the computer system steps the performance state down to the next lower performance state or to a level determined according to CPU utilization. In that way, user perception of system degradation due to performance state changes can be reduced.

In another embodiment, the invention provides a method of managing power consumption in a computing system having a plurality of performance states, including a maximum performance state and a plurality of other performance states that provide successively less performance capability for an integrated circuit. The method includes determining utilization of the integrated circuit, comparing the determined utilization to a threshold utilization value, and, if the determined utilization is above the threshold utilization value, entering a maximum or near-maximum performance state as the next performance state, skipping any performance states between the current performance state and the next performance state. The method may further include comparing the CPU utilization to a second threshold utilization value and, if the CPU utilization is below the second threshold utilization value, entering a lower performance state as the next performance state. The lower performance state may be the next lower performance state or a lower performance state determined according to CPU utilization.

BRIEF DESCRIPTION OF THE DRAWINGS
The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art, by referencing the accompanying drawings.
FIG. 1 shows a prior art computer system capable of using throttling to effectuate power savings.
FIG. 2 is a high level flow diagram of power management operation according to an embodiment of the invention.
FIG. 3 illustrates switching between performance states according to the power management approach described herein.
FIG. 4 illustrates exemplary statistics used to determine the utilization index.
FIG. 5 shows a processor that can adjust its operating voltage and frequency in accordance with processor utilization.
FIG. 6 shows the high level operation of switching performance states for the processor shown in FIG. 5.
The use of the same reference symbols in different drawings indicates similar or identical items.

DESCRIPTION OF THE PREFERRED EMBODIMENT(S)
A computer system according to one embodiment of the invention has a plurality of processor performance states, generally based on unique voltage/frequency pairs. A power management control function in the computer system periodically determines the utilization level of the processor, i.e., how much of the available processor resources are being utilized, and selects a next performance state that is appropriate for the utilization level.

Referring to FIG. 2, a flow diagram illustrates, at a high level, operation of an embodiment of a power management function utilized to provide the requisite power management control. The current utilization is periodically determined in 201. That current utilization is then compared to a high threshold level, e.g., 80% of processing resources, in 203. If the utilization level is above the high threshold level, indicating that processor resources are being utilized at a level above 80%, the power management control function increases the performance state of the processor in 205. In one embodiment, that can be accomplished by selecting a voltage/frequency pair that provides greater performance and then causing the processor to operate at the new voltage and frequency, as described further herein. If the current utilization is below the high threshold, then the current utilization is compared to the low threshold in 207. An exemplary low threshold level is 55%. If the current utilization is below that low threshold, the power management control function decreases the performance state of the processor in 209. As described further herein, that may be accomplished by selecting, e.g., a voltage/frequency pair providing lower performance and then causing the performance change to occur. The power management control function then returns to 201 to periodically determine the utilization level and again compare the current utilization level to the high and low threshold levels. In that way the power management can tailor the performance state of the processor to the actual requirements.

In a computer system with several possible processing performance states, if the management control function determines that more performance is necessary to meet performance requirements, one approach to providing increased performance is to increase the performance one step at a time until the current utilization is below the high threshold level. However, in a preferred embodiment, rather than increasing the performance state one step at a time, the power management control function selects the highest performance state regardless of the current performance state. The reason for always selecting the highest possible performance state when a higher performance state is needed is as follows. In computer systems, performance demands are often of a bursty nature.
When a higher performance state is required based on the current utilization level, stepping the performance state to the next higher level can result in degradation of performance that can be perceived by the user. That is especially true when the task that needs the increased performance requires a near real-time response, for instance, while decoding an audio or video file.

FIG. 3 illustrates that concept. Assume the processor has five performance states P1-P5, with P5 being the highest and P1 the lowest. Whenever the power management determines that a higher performance state is required when operating at any of the levels P1-P4, the power management selects the maximum performance state P5 as the next performance state. Thus, if the performance state is always taken straight to the maximum performance state when a performance increase is required, rather than stepping up to the maximum performance state, there is less of a chance that a user could notice any performance degradation. In effect, the power management control function anticipates peak loading by treating any indication of a required increase in performance as a burst requiring peak performance.

However, if a lower performance state is required, the next lower performance state is selected. Thus, if at performance state P5, P4 is selected as the next lower performance state. If the current performance state is P4, the next lower performance state selected is P3 when a performance decrease is effectuated by the power management control function. In that way, if the performance is still too high, successively lower performance states can be selected, and the chance that any degradation is detected by a system user is reduced. Thus, in a preferred embodiment, if the utilization information indicates that an increase in performance is necessary, the power management control function selects the maximum (or near-maximum) performance state, while a decrease in performance causes the power management control function to step to the next lower performance state.

In another embodiment, the lower performance state may be selected proportionally to the CPU utilization. For example, if CPU utilization is less than 20%, then the initial lower performance state may be two steps below the current performance state rather than just one. Assume that each step approximately halves the performance, and therefore approximately doubles utilization. Then, if CPU utilization is less than 20%, a two step drop would bring the utilization to between the upper and lower thresholds (55%-80%). Alternatively, as described above, the two step drop could be accomplished one step at a time.

Note that in another embodiment, the target performance state when a performance increase is needed may be other than the maximum performance state. For example, a performance state close to the maximum performance state may be sufficient to prevent noticeable performance degradation, and thus that slightly lower than maximum performance state can be selected as the target for all performance state increases.
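The asymmetric state-selection policy just described can be summarized in a short sketch. This is a minimal illustration, assuming five states numbered 1 (lowest) through 5 (highest) and the 80%/55% thresholds used above; the names and structure are invented for illustration, not taken from the specification.

```c
/* Sketch of the performance-state selection policy: jump to the
 * maximum state on any needed increase, step down (one or two
 * steps, based on utilization) on a decrease. */
#define P_MIN 1
#define P_MAX 5
#define HIGH_THRESHOLD 80 /* percent */
#define LOW_THRESHOLD  55 /* percent */

static int select_next_state(int current_state, int utilization)
{
    if (utilization > HIGH_THRESHOLD) {
        /* Anticipate a burst: go straight to the maximum (or
         * near-maximum) state, skipping intermediate states. */
        return P_MAX;
    }
    if (utilization < LOW_THRESHOLD) {
        /* Step down gently; a very low utilization (under 20%)
         * justifies dropping two steps, since each step roughly
         * halves performance and therefore doubles utilization. */
        int step = (utilization < 20) ? 2 : 1;
        int next = current_state - step;
        return (next < P_MIN) ? P_MIN : next;
    }
    return current_state; /* utilization within the target band */
}
```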
Whether such an embodiment can be implemented depends on a variety of factors, including the granularity of the performance levels provided by the system and whether the near-maximum performance state sufficiently minimizes performance degradation problems.

If the processor utilization is kept within the range of the high and low thresholds, then a user should experience a crisp, high performance system, while still getting the benefit of power savings for those applications, or those portions of applications, requiring less performance. That approach reduces power consumption, extends battery life, and reduces temperature, resulting in less need for cooling and thus less fan noise, while still maintaining high performance and thus maintaining a perception of fast response to the user. Note that running at a lower average CPU die temperature increases CPU reliability, and that a lower CPU temperature results in a lower system temperature, which increases system reliability.

In one embodiment, the thresholds are programmable, which allows the thresholds to be tailored for particular systems. For example, a particular system may have mission critical performance requirements and therefore would want to keep the high threshold relatively low and check utilization levels more frequently to ensure the performance requirements are met.

An important aspect of matching CPU performance to utilization is determining the utilization. As described in relation to FIG. 2, the power management control function periodically samples the utilization information. In a preferred embodiment, the power management control function is provided by power management software, which periodically extracts the utilization information by querying the operating system (OS).

Assume a computer system platform is running a multi-tasking operating system, as is typical in current systems. Multiple tasks in different states of execution are therefore utilizing the processing resources. A typical multi-tasking operating system tracks the time each task spends executing. That information may be collected by other tasks, such as the power management software (or, if power management is a component of the operating system, by that component of the operating system). In one embodiment, the power management software queries the operating system periodically for an enumeration of the tasks that are running on the operating system. In addition, the power management software obtains execution statistics for each of the enumerated tasks, including those tasks that are part of the operating system, in order to determine how much CPU time the various tasks have used. The power management software then uses that information to create an overall utilization index for comparison to the high and low thresholds. In addition to the amount of CPU time used by a task, each task also has a priority, which may also be utilized by the power management software in determining the utilization index, as described further herein.

Referring to FIG. 4, exemplary statistics that can be obtained for the utilization determination are illustrated. Assume the enumerated tasks, A, B, C, the power management task, and the idle task, are those shown in FIG. 4. The tasks may be operating system tasks or application tasks. In general, for each task, the operating system provides a cumulative total of how much CPU time the task has used since the task started.
As shown in FIG. 4, at measurement 1 task A has used A1 time, and at measurement 2 task A has used A2 time. The measurement period T is the time between measurements M1 and M2. The amount of time utilized over the measurement period T for task A is A2-A1. Similarly, task B's utilization time is B2-B1 and task C's is C2-C1. Generally, the power management software is not interested in measuring its own utilization. In addition, the power management software may determine not to include tasks below a certain priority. Thus, certain idle tasks, which the operating system runs when the CPU is otherwise idle, are not counted in the calculation of the utilization index. While the priorities shown in FIG. 4 are high, low, medium, and idle, in fact priorities may be specified with greater granularity, e.g., with a value between 0-31. The power management software may select which priorities should be included, e.g., those tasks with priorities greater than three. The power management software sums the task utilization numbers for those tasks it determines are relevant for calculation of the utilization index and divides that number by the elapsed time T between successive measurements. The utilization index is thus determined as:

Utilization Index = [(A2 - A1) + (B2 - B1) + (C2 - C1)] / T

where the sum includes only those tasks deemed relevant.

The power management software periodically obtains the CPU utilization information for the enumerated tasks. The utilization information obtained at one measurement time constitutes one utilization sample. In one preferred embodiment, multiple samples, e.g., 3, are used to calculate the utilization index, which is then compared to the high and low thresholds. Averaging utilization information allows the system to react more slowly to changes. That can be an advantage if utilization dips for one sample but then resumes for the next sample: using averaged utilization values means the system will not reduce performance states in response to brief changes in utilization. When a more instantaneous response to fluctuations in utilization is desired, fewer samples can be averaged; more samples can be averaged when the system should respond to fluctuations less quickly. In addition, the frequency of sampling can be increased or reduced with similar goals in mind. The operating system may also influence the frequency of sampling according to how often the OS has statistics available. For example, in a Windows 98 environment samples may be taken every 15 milliseconds, while in an NT environment samples may be taken, e.g., only every 100 milliseconds.

Note that the sampling frequency, as well as the number of samples to average, affects CPU utilization, since the process of sampling and averaging itself consumes CPU cycles. In addition, as explained more fully herein, changing the performance state entails stopping processor operations and therefore also impacts system performance. Thus, some systems may want to lengthen the sample period and increase the number of samples averaged to reduce the cost that power management exacts in terms of CPU utilization or performance state change latency.

Note that the process of obtaining task information and the task of enumerating those tasks may be separate. In fact, the process of enumerating tasks can consume so much time that enumeration is not executed each time utilization statistics are obtained. In one embodiment, enumeration of the tasks actually running occurs at approximately 1/8 the sample rate of the utilization information.
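As a rough illustration of the index calculation, the following sketch sums per-task CPU-time deltas for tasks above a priority cutoff and divides by the elapsed period. The structures and the alignment of tasks by array index are hypothetical stand-ins for whatever the operating system actually provides.

```c
/* Hypothetical sketch of the utilization-index calculation.
 * m1 and m2 hold cumulative CPU time per task at two successive
 * measurements, taken elapsed_time apart. */
struct task_sample {
    unsigned long cumulative_time; /* total CPU time used by the task */
    int priority;                  /* higher value = higher priority */
};

/* Returns utilization as a percentage (0-100). Tasks at or below
 * min_priority (e.g., idle tasks) are excluded, so that very low
 * priority work cannot push the index over the high threshold. */
static int utilization_index(const struct task_sample *m1,
                             const struct task_sample *m2,
                             int num_tasks, int min_priority,
                             unsigned long elapsed_time)
{
    unsigned long used = 0;
    for (int i = 0; i < num_tasks; i++) {
        if (m2[i].priority <= min_priority)
            continue; /* skip idle and other excluded tasks */
        used += m2[i].cumulative_time - m1[i].cumulative_time;
    }
    return (int)((100 * used) / elapsed_time);
}
```

Averaging several such samples before comparing against the thresholds, as described above, simply means keeping the last few return values and comparing their mean.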
Enumerating tasks at a slower rate than sampling can result in errors, in that the samples may miss some tasks that began after the last enumeration, or may try to obtain statistics for tasks that have already ended. Since the tasks that are running change relatively infrequently, the time saved by enumerating tasks at a rate slower than obtaining samples can be beneficial in reducing CPU consumption by the power management software without introducing an inordinate amount of potential error.

Another aspect of determining the utilization information is that various tasks may be excluded from the calculation of the CPU utilization index. For example, during a particular measurement period T, all higher priority tasks may be suspended for at least a portion of the period, thereby giving very low priority tasks an opportunity to execute. If the execution time spent by those very low priority tasks is included in the utilization index, there is a risk that the system performance state will be increased to account for the CPU execution time utilized by very low priority tasks. For example, inclusion of low priority tasks could cause the utilization index to rise to 82%, whereas without those tasks the utilization would be 77%. Assuming a high threshold of 80%, the inclusion of the low priority tasks would result in a performance state increase because the utilization index is above the high threshold. Thus, the inclusion of low priority tasks may be generally undesirable. Of course, system requirements and objectives may vary, and all tasks, or different tasks, may be considered in the determination of the utilization index in various systems.

In addition, other information may be utilized in combination with any or all of the above measurement statistics. One such piece of information is the mode in which the task is run. Typically, statistics can be obtained that provide not only cumulative execution time for a task but also how much of the task execution time was in user mode and how much was in kernel mode. For example, a task can run its code in user mode, make calls to operating system services, and be interrupted by a hardware interrupt. In such a scenario, it may be desirable for the power management software to disregard the CPU time spent in system mode, or interrupt mode, or both.

An exemplary environment where the approach of ignoring kernel time may be effectively utilized is as follows. Assume an embedded system that has a task that operates in user mode and depends on network data. If the task is awaiting a network packet and the task makes a call to the operating system to obtain the packet, the OS may sit waiting for a packet to arrive. The time period that is of particular interest in that situation is the user mode time utilized by the task. The OS mode time was spent waiting for a packet, an operation that does not require a performance increase.

Additional flexibility in calculating the utilization index can be provided by treating specific tasks differently. For example, those tasks belonging to a specific process or program, or even those tasks belonging to the operating system itself, can be ignored or always accounted for differently from other tasks. Thus, the CPU time spent in all modes (user mode or kernel mode) or in one specific mode may be disregarded for a specific task or group of tasks, or the task(s) may be included in or excluded from the determination of the utilization index regardless of task priority. That capability of discriminating based on task may be useful in several situations.
Some applications are badly written in terms of power management. For example, screen savers have been written that run at above idle priority. The ability to identify threads such as those and not incorporate them into the calculation of the utilization index would be beneficial. Another special case may be presented when an application, typically a real-time application, could fail because of the latency involved in performance state transitions. If such a task were identified, the power management software could stay in the current performance state until the task completed. In other scenarios, a task may always require a particular level of performance, e.g., the maximum, and when the power management software detects that task, it always changes to the maximum performance level regardless of the current utilization index.

Thus, the utilization information can be determined based on CPU utilization by the various tasks. The particular calculation of a CPU utilization index may utilize a programmable number of samples over a programmable sample interval. The calculation may choose to ignore certain tasks, such as those tasks that have low priority, and may treat user mode time differently than kernel mode time. Thus, the power management software can be adapted to a wide variety of applications. While most of the discussion has been directed to computer systems such as laptops or desktops, the power savings technique described herein can be applied to any electronic device in which power management as described herein can be effectively utilized.

In one embodiment, a user can select how the device operates. For example, a notebook user could selectably choose the notebook to operate at the maximum performance state, in automatic mode where the performance state is determined according to utilization, or in battery saver mode in which the lowest performance state is used.

The power control software in one embodiment is a driver running under the operating system. In a preferred embodiment, the software to implement the driver is actually in two parts. One part resides at the application level. That part queries the OS for information on CPU utilization by the various tasks running under the OS. Software at the application level also performs the sample averaging, compares the samples to the high and low threshold levels, and determines if a performance change is required. A second part of the power control software operates at a high privilege level (e.g., ring 0) and interacts directly with BIOS tables and hardware registers to determine actual run states in terms of VID/FID values (described further herein) and how many performance states exist, and performs the actual write operations to the VID/FID register to initiate the change in the voltage/frequency settings for the processor. The application level software can query the privileged level software as to how many performance states exist for the processor and the current state. In order to change states, the application level software gives the privileged level driver abstracted performance requests in terms of a performance state (e.g., level 3), rather than actual FID/VID levels.
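The split between the application-level policy code and the privileged driver might look like the following sketch. The request codes and function names are invented for illustration; the specification does not define a concrete interface, only the division of responsibilities.

```c
/* Hypothetical interface between the application-level policy part
 * and the privileged (ring 0) part of the power management driver.
 * The privileged part translates abstract state numbers into actual
 * VID/FID register writes; the application part never sees raw
 * FID/VID values. */
enum pm_request {
    PM_QUERY_NUM_STATES,    /* how many performance states exist */
    PM_QUERY_CURRENT_STATE, /* which state is in effect now */
    PM_SET_STATE            /* request an abstract state, e.g., 3 */
};

/* Implemented by the privileged driver. */
extern int pm_driver_request(enum pm_request req, int *value);

/* Application-level policy, with states numbered 1..num_states. */
static void apply_policy(int utilization)
{
    int num_states, current;
    pm_driver_request(PM_QUERY_NUM_STATES, &num_states);
    pm_driver_request(PM_QUERY_CURRENT_STATE, &current);

    int next = current;
    if (utilization > 80)
        next = num_states;          /* jump straight to the maximum */
    else if (utilization < 55 && current > 1)
        next = current - 1;         /* step down one state */

    if (next != current)
        pm_driver_request(PM_SET_STATE, &next);
}
```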
Separating the driver into two parts simplifies the development task and allows a large portion of the software (the application level driver) to work on multiple platforms, thus reducing development time and cost.

Many platforms for which the power management techniques described herein would be useful also employ other common power management frameworks used today in personal computers, e.g., the Advanced Configuration and Power Interface (ACPI) and the Advanced Power Management (APM) framework. These legacy power management frameworks are widely implemented, and because of their wide use they may be difficult to modify to incorporate the new power management techniques described herein. Therefore, it would be desirable to utilize the new power management capabilities in such a way that both the legacy power management schemes and the new power management capabilities can co-exist on the same computer without interfering with either the robustness or the effectiveness of the other.

Accordingly, in one embodiment, the power management software described herein does not involve any OS-BIOS communications. More specifically, the OS does not need to send APM commands to the BIOS for the purpose of the BIOS carrying them out. Such APM commands are used to cause changes in the power management state of the machine, request status, etc., and therefore the BIOS has extensive, machine specific assembly language routines to support them. In contrast, a preferred embodiment of the power management software described herein utilizes a device driver and a policy daemon (i.e., a "background task") to determine when and how to perform changes in the performance state of the machine. The changes in the performance state are done independently of the OS or of any operational code in the BIOS. At any given performance state (frequency/voltage combination), APM and ACPI work as they normally do to throttle the CPU clock. The legacy power management frameworks have no knowledge that the CPU frequency has changed due to a performance state change. They still handle idle periods (e.g., no activity for a prolonged period) by entering various sleep or suspend states, and handle thermal overload situations in the same manner. This approach allows platforms with the hardware to support the performance state changes described herein to utilize both the power management software for performance state changes and the legacy power management schemes for more conventional power management. In fact, under some scenarios no BIOS change is even required, and the performance state parameters may actually reside under the operating system, e.g., in the Windows registry or other appropriate persistent global data space.

In order to select performance states, a table is provided, e.g., in the BIOS, that specifies appropriate voltage/frequency pairs. The voltage/frequency combinations in that table are determined, typically during production, by running the CPU at a variety of voltages and frequencies and determining at which voltage/frequency pairs the CPU operates properly. The result of that effort is commonly referred to as a "Shmoo Plot". There will be a different Shmoo Plot for each process technology in which a CPU is implemented, and additionally for each speed grade within that process technology. Each Shmoo Plot is converted into a performance state table that is stored in BIOS for use by the power management software.
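A performance state table derived from such a Shmoo Plot might be represented as in the following sketch; the field names, the array bound, and the layout are illustrative assumptions, not the actual BIOS format.

```c
/* Hypothetical layout of a BIOS performance state table: one
 * voltage/frequency (VID/FID) pair per performance state, derived
 * from (and de-rated relative to) the part's Shmoo Plot. */
struct perf_state {
    unsigned char vid; /* voltage ID presented to the regulator */
    unsigned char fid; /* frequency ID (bus clock multiplier) */
};

struct perf_state_table {
    unsigned int  fsb_mhz;   /* front side bus speed the table assumes */
    unsigned int  cpuid;     /* CPU model/revision the table applies to */
    unsigned char start_vid; /* power-on VID of the part */
    unsigned char max_fid;   /* highest multiplier the part supports */
    int           num_states;
    struct perf_state states[8]; /* lowest to highest performance */
};
```

The four header fields correspond to the items the power management software examines (FSB speed, CPUID, Start VID, and Max FID) when choosing among the tables BIOS provides, as described next.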
Since BIOS typically supports multiple processors, there is generally more than one performance state table in BIOS, and the power management software has to determine the appropriate table for the processor in the platform. In one implementation the power management software looks at the CPUID, which includes the CPU model and silicon revision, as well as other system related factors. However, the CPUID is not a hard indicator of which performance state table in BIOS to use, since the CPUID may only identify the silicon design and not necessarily the process technology. Four items are examined to select the appropriate performance state table: (1) the front side bus (FSB) speed, (2) CPUID, (3) Start VID, and (4) Max FID. The FSB speed is important since frequency is typically expressed in multiples of the FSB speed. The last three variables are determined by CPU design, CPU fabrication process, and the speed grade of that CPU in the given hardware design and fabrication process.

In another embodiment, a Shmoo class register can be implemented in hardware that informs the power management software which performance state table to use. The power management software can read that register to determine which performance state table to use. In a preferred embodiment, the register is implemented in fuse technology and is configured at the time the silicon speed grade is determined. The fuses can be electrically programmed or laser programmed to identify the appropriate silicon speed grade. That value can be used directly to indicate the performance state table to be used by the power management software. That does not mean that the CPU shmoo data will be used directly on the system, since system-level factors can alter the actual voltage/frequency combinations that will run on a particular platform. Thus, the performance state tables are de-rated from the actual shmoo data to account for system-level factors.

Other applications may wish to control the frequency and voltage of the platform, but accessing those controls directly is undesirable, both for complexity reasons and, more importantly, because any action another application takes can be negated by the power management driver. Accordingly, a standard interface is provided to allow other applications to utilize the power management software. In one embodiment, the other applications use a signaling method based on Broadcast Windows Messages, a service built into the Windows operating system. The messaging technique may vary according to the operating system being used, and within each operating system multiple approaches can be used to allow an application to send a message to the power management software. For example, registry keys may be utilized by the application. Other modes of communicating with a driver may be used according to the specific operating system and software design utilized.

In a preferred embodiment, an application desiring to control the power management software could cause the power management software to (1) pause the automatic control sequence, (2) pause the automatic control sequence and go to the lowest performance state (power level), or (3) continue the automatic control sequence. Pausing the automatic control sequence allows a task to take control of the power management control registers (e.g., the VID/FID register) directly, without fear that the power management control software will interfere.
The second mode may be used to recover from overheating. The third mode may be used to continue normal operations. Other modes can be readily envisioned as well, e.g., a mode causing the power management software to change to the maximum performance state.

In order to effect changes to the performance state, the power management software has to cause the voltage and frequency used by the CPU to change. In one embodiment that can be accomplished as follows. Referring to FIG. 5, a processor is shown that can dynamically adjust its operating voltage and frequency to provide better thermal and power management in accordance with processor utilization. Processor 501 includes a programmable voltage ID (VID) field 503, a core clock frequency control field (frequency ID (FID)) 504, and a count field 505. Those fields may be located in one or more programmable registers. When the processor and/or system determines that a change to the operating voltage and/or frequency is desired to increase or decrease the performance state, the desired frequency and voltage control information are loaded into FID field 504 and VID field 503, respectively. An access to a register containing those fields, or an access to another register location, or an access to a particular field in one of those registers can be used as a trigger to indicate that the processor should enter a stop grant state in which execution of operating system and application code is stopped. The access may result from, e.g., execution of a processor read or write instruction, and in response to that access, power management control logic 507 supplies a stop signal 509 or other indication to indicate to CPU core logic 511 that the CPU core should stop execution of operating system and application code in order to enter a stop grant state.

Depending upon the processor implementation, stop signal 509 causes the processor to finish executing the current instruction, complete any current bus transactions and leave the host bus in an appropriate state, or take any other necessary action prior to stopping code execution. Once the processor has completed all necessary preparations to enter the stop grant state, which vary depending on processor implementation, CPU core logic 511 supplies an asserted stop grant signal 513 or other indication to indicate to power management control logic 507 that CPU core logic 511 has entered the internally generated stop grant state. Note that while an internally generated stop grant state is described, other embodiments may utilize an externally generated stop grant state.

During the stop grant state, the processor can transition the voltage and frequency to the new states specified in VID field 503 and clock frequency control field 504. In some processor implementations, the processor core clocks are stopped after the processor enters the stop grant state. In other processor implementations, the processor core clock frequency is reduced to a frequency that can safely tolerate the desired voltage changes. In one implementation, clock frequency control information is supplied as multiplier values for a clock that is supplied to processor 501.
Those of skill in the art will appreciate that many other approaches can be used to specify the core operating frequency. In either case, the voltage control information specified in VID field 503 is supplied to voltage regulator 515, which in turn supplies CPU core logic 511 with the new voltage during the stop grant state.

Because changing the voltage and frequency cannot be done instantaneously, the stop grant state needs to be maintained for a period of time to allow the new voltage and clock frequency to stabilize. In one embodiment, that time period is controlled through count circuit 517. Count circuit 517 begins counting once stop grant signal 513 is asserted, that is, once the stop grant state begins. Count circuit 517 is designed to count a sufficient amount of time for the voltage and frequency changes to stabilize. In one embodiment, as illustrated in FIG. 5, that time period is programmable through count register 505, which specifies the duration of the stop grant state. Once count circuit 517 has counted to the desired count value, power management control logic 507 causes stop signal 509 to be deasserted, which indicates to CPU core logic 511 that it should exit the stop grant state. On exiting the stop grant state, CPU core logic 511 resumes executing operating system and application code.

In some processor implementations, CPU core logic 511 may resume executing code at the new clock frequency immediately on exiting the stop grant state. In other implementations, for example when the CPU core logic executes at a reduced clock speed during the stop grant state, clock generation circuit 519 may increase the core clock speed in increments up to the newly specified operating frequency after exiting the stop grant state, and the CPU core may resume execution of OS and application code after the core clock speed is at the specified operating frequency. In one embodiment, it takes on the order of 100-200 microseconds to change to a new performance state.

Referring to FIG. 6, the high level operation of processor 501 in accordance with one embodiment of the invention is described. In 601, the processor (or system) determines there is a need to change operating frequency and voltage to enter a new performance state. The processor then writes desired voltage and frequency values to VID field 503 and FID field 504. The fields may be located in one or more model specific registers. In addition to writing fields 503 and 504, if necessary, the processor can write to count field 505 to specify the duration of the stop grant state. An access to a register containing those fields (or a read or write access to another register, or an instruction) may be used as an indication to begin the process of entering the stop grant state.

In one preferred embodiment, the VID/FID fields are located in a single register. Note that software in general prefers to do as few register I/O accesses as possible in order to get the desired result. In addition, software would prefer to build the contents of a control register using the register itself, as opposed to building the various control fields in memory and then transferring the fields to the control register. In a typical register, any I/O write to the register causes a change in the control state of the machine. That is, a write to the VID/FID register would initiate the stop grant state sequence. Thus, one could not build such a register bit-field by bit-field, since each write to a bit-field would result in a change to the machine control state.
It is potentially advantageous to modular software to have a register that does not begin a control sequence each time any one of its fields is accessed. If a different function is used to build each bit field, then a register in which an access to any field causes a control state change would require a shared memory buffer among all the functions, so that each piece of modular software could build its portion of the register contents before a single transfer. The shared memory buffer would be an additional overhead for each function.

In one embodiment, given a register that has several bit fields defined, one of the bit fields serves a dual purpose of both holding some useful control information and serving as the trigger to actually change the state of the underlying hardware. All other bit fields in the register can be read and written without causing the hardware to change state. That is, the FID/VID control register only causes a stop grant state when one of the FID or VID fields is written or otherwise accessed. A write to the other bit fields does not initiate a performance state change.

Referring again to FIG. 6, once that indication is received and the CPU core logic receives a request to enter the stop grant state in 605, the CPU takes all necessary actions to place itself in the stop grant state (e.g., completing instructions and/or bus cycles) and then asserts stop grant signal 513 to power management control logic 507 in 607 to indicate that the CPU has entered the stop grant state. The asserted stop grant signal 513 from the CPU core causes count circuit 517 to begin counting in 609. Count circuit 517 determines the duration of the stop grant state. Note that writing to count field 505 may be omitted under certain circumstances. For example, the count circuit may be hard coded to wait a sufficient time for the new voltage and frequency values to stabilize. Alternatively, the count field may maintain its previous value and thus only has to be written once. In any case, once in the stop grant state, the CPU clocks are stopped or reduced by clock generation circuit 519 in 611 to permit the desired voltage changes.

During the stop grant state, the new VID values are applied to voltage regulator 515, and the new clock frequency control values are supplied to clock generation circuit 519, in 613. Clock generation circuit 519 typically includes a phase locked loop (PLL), and the circuit takes some time to lock in to the new frequency. Even if the PLL is generating a stable new clock at the desired frequency, the CPU core is still getting either no clock or a divided down clock while the voltage stabilizes. After the count has expired, i.e., the waiting period in 615 is determined to be over, power management control logic 507 deasserts its stop signal and CPU core logic 511 resumes executing code in 617. Note that the latency involved in switching to a new performance state can be on the order of 200 microseconds.
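The dual-purpose trigger arrangement might be used roughly as in the following sketch. The field layout, the register address, and the choice of FID as the single designated trigger field are all assumptions made for illustration; the embodiment above says only that a write to the FID or VID field initiates the sequence while the other fields are side-effect free.

```c
#include <stdint.h>

/* Hypothetical memory-mapped view of the FID/VID control register.
 * Hardware begins the stop grant sequence only when the trigger
 * field (assumed here to be fid) is written; the other fields can
 * be read and written freely without changing the machine state. */
struct pm_control_reg {
    volatile uint8_t count; /* stop grant duration; no trigger */
    volatile uint8_t vid;   /* voltage ID; staged without trigger */
    volatile uint8_t fid;   /* frequency ID; dual-purpose trigger */
};

#define PM_CTRL ((struct pm_control_reg *)0xFED00000u) /* invented */

static void set_performance_state(uint8_t fid, uint8_t vid,
                                  uint8_t count)
{
    /* Non-trigger fields may be built up field by field, by
     * separate pieces of modular software, with no side effects
     * and no shared memory buffer. */
    PM_CTRL->count = count;
    PM_CTRL->vid   = vid;

    /* The trigger field comes last: this single write supplies the
     * new multiplier and initiates the stop grant sequence that
     * carries out the voltage/frequency change. */
    PM_CTRL->fid = fid;
}
```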
Note that changing both voltage and frequency to enter a new performance state can be particularly effective. Changes in the processor's core clock frequency have an approximately linear effect on the power dissipated by the processor; thus, a 20% reduction in clock frequency reduces the power dissipated by the processor by 20%. The range of change is significant, since the ratio of lowest frequency to highest frequency is usually greater than 2:1. Consequently, the processor's power may be changed by a similar ratio. Changes in the processor's core voltage, by contrast, have an approximately square-law effect; that is, the potential power savings is proportional to the square of the percentage of voltage reduction. Although the range of change of voltage is generally less than 50%, the square-law effect results in significant changes in the processor's power if the core voltage of the processor is reduced. For example, a 20% voltage reduction alone cuts power by roughly 36% (since 0.8 squared is 0.64), and combined with a 20% frequency reduction the power falls by nearly half (0.8 x 0.64 is approximately 0.51).

There is a risk that under certain conditions the power management software can get out of sync with the actual state of the machine. Certain operating systems, such as the Windows operating system, signal applications about changes in the power state of the platform, e.g., information as to whether the platform is operating on line voltage or battery power. However, those messages are not always received in the right order, received correctly, or received in a timely manner. That can be especially problematic when the platform transitions to a sleep or suspend state and subsequently experiences power state changes, e.g., when the platform is unplugged from AC line power. In addition, other applications may access the power management control registers (e.g., the VID/FID registers), causing the platform to enter a higher or lower performance state without informing the power management software.

Therefore, in order to avoid the possibility of the power management software becoming out of sync with the actual performance state of the platform, the power management software in one embodiment is self-correcting. In that embodiment, a separate resynchronization task periodically (e.g., every two seconds) determines the current state in hardware, which can be derived from the VID/FID register and the shmoo class table maintained in BIOS or elsewhere, as well as the performance state in which the power management software believes the platform is operating. The resynchronization task does a comparison, and if the comparison indicates a mismatch between the power management software's view and the actual performance state of the platform, corrective action is taken, such as reinitializing the power management software. In that way, if the power management software ever gets out of sync with the actual state of the machine, that lack of synchronization will be short lived.

As described herein, a computer dynamically adapts its performance state to its utilization level to provide improved power and thermal management. Note that the description of the invention set forth herein is illustrative and is not intended to limit the scope of the invention as set forth in the following claims. For instance, while this invention has been described in relation to computer systems such as desktops and a class of mobile computers referred to herein as notebooks (which may also be referred to as laptops or portable computers), the teachings herein may also be utilized in other computing devices, such as servers, workstations, and/or portable computing devices such as personal digital assistants (PDAs), which are handheld devices that typically combine computing, telephone/fax, and networking features, or in other small form factor computing and/or communication equipment, where the power management techniques described herein may prove useful. Other variations and modifications of the embodiments disclosed herein may be made based on the description set forth herein without departing from the scope and spirit of the invention as set forth in the following claims.
A method and system for enforcing access control to system resources and assets. Security attributes associated with devices that initiate transactions in the system are automatically generated and forwarded with transaction messages. The security attributes convey the access privileges assigned to each initiator. One or more security enforcement mechanisms are implemented in the system to evaluate the security attributes against access policy requirements for various system assets and resources, such as memory, registers, address ranges, etc. If the privileges identified by the security attributes indicate the access request is permitted, the transaction is allowed to proceed. The Security Attributes of the Initiator scheme provides a modular, consistent secure access enforcement scheme across system designs.
CLAIMS
What is claimed is:
1. A method comprising: in a computer system having one or more processing cores, a memory interconnect via which processing core and memory resources are accessed, and an Input/Output (IO) interconnect coupled to the memory interconnect providing an IO interface to one or more IO devices, enforcing a secure access mechanism under which a transaction initiated by an initiator device to access a target resource includes security attributes defining access privileges associated with the initiator device that are evaluated against an access policy defined for the target resource; and allowing the transaction to proceed if the security attributes indicate access to the target resource by the initiator device is permitted by the access policy.
2. The method of claim 1, further comprising employing system hardware to generate security attributes in response to initiation of transactions in the system.
3. The method of claim 1, wherein access policies are enforced via read and write policy registers.
4. The method of claim 1, further comprising employing a single trusted entity to control access to access policy data.
5. The method of claim 1, wherein the memory interconnect comprises a memory fabric employing memory coherence.
6. The method of claim 1, wherein the IO interconnect comprises an IO fabric.
7. The method of claim 1, wherein the system comprises a system on a chip (SoC) including a memory fabric comprising the memory interconnect and an IO fabric comprising the IO interconnect.
8. The method of claim 1, further comprising enforcing the secure access mechanism by assigning an access role to each initiator device in the system, each role defining access privileges associated with the role, wherein the security attributes provided with transactions initiated by a given initiator include information identifying the role associated with the initiator device.
9. The method of claim 1, wherein at least one of the memory interconnect and the IO interconnect employs a first protocol and the system includes an interface component employing a second protocol, the method further comprising mapping security attributes between the first and second protocols.
10. A system on a chip (SoC), comprising: a memory fabric, including a memory interface to which memory may be operatively coupled; one or more processing cores operatively coupled to the memory fabric; an Input/Output (IO) fabric, operatively coupled to the memory fabric via an interface, the IO fabric including interfaces to a plurality of IO devices; and a secure access mechanism under which resource access requests associated with transactions initiated by an initiator device to access a target resource in the SoC include security attributes defining access privileges associated with the initiator device that are evaluated against an access policy defined for the target resource to determine whether the transaction is permitted, wherein the initiator devices include devices that can initiate transactions via the SoC, including internal devices in the SoC and IO devices operatively coupled to an interface of the SoC, and wherein target resources include memory and register resources included in the SoC, memory coupled to the memory fabric and IO fabric, and memory and register resources accessible via the IO devices.
11. The SoC of claim 10, further comprising read and write policy registers configured to store read and write permission data that is used to determine whether a transaction involving read or write access to a target resource is allowed, based on security attributes associated with the transaction.
12. The SoC of claim 11, further comprising a control policy register configured to store data identifying a trusted entity that is allowed to configure the read and write policy registers.
13. The SoC of claim 10, further comprising hardware-based mechanisms to generate security attributes for each transaction initiated by an initiator device involving accessing a target resource via the SoC.
14. The SoC of claim 10, wherein the memory fabric and IO fabric employ a first protocol, the SoC further comprising: a bus coupled to one of the memory fabric or IO fabric via a corresponding interface or bridge, the bus employing a second protocol; and a mapping mechanism configured to map security attribute information between the first and second protocols.
15. The SoC of claim 10, wherein the secure access mechanism employs a role-based access scheme under which access privileges are associated with corresponding roles and each initiator device is assigned an access role, and further wherein the access role associated with an initiator device may be identified by security attributes associated with a transaction initiated by that device.
16. A computer system comprising: a system on a chip (SoC), including: a memory fabric, including a memory interface and a core interface; one or more processing cores operatively coupled to the memory fabric via the core interface; an Input/Output (IO) fabric, operatively coupled to the memory fabric via an interface; and memory, operatively coupled to the memory fabric via the memory interface; and a plurality of IO devices, operatively coupled to the IO fabric, wherein the SoC further includes a secure access mechanism under which resource access requests associated with transactions initiated by an initiator device to access a target resource in the SoC include security attributes defining access privileges associated with the initiator device that are evaluated against an access policy defined for the target resource to determine whether the transaction is permitted, wherein the initiator devices comprise devices that can initiate transactions via the SoC, including devices in the SoC and IO devices interfaced to the IO fabric, and wherein target resources include memory and register resources included in the SoC, memory coupled to the memory fabric and IO fabric, and memory and register resources accessible via the IO devices.
17. The computer system of claim 16, wherein the memory fabric and IO fabric employ a first protocol, the system further comprising: a bus employing a second protocol, operatively coupled to the IO fabric via a bridge; and a mapper in the bridge that maps security attribute data between the first and second protocols.
18. The computer system of claim 16, further comprising hardware-based mechanisms to generate security attributes for each transaction initiated by an initiator device in the computer system.
19. The system of claim 16, further comprising one or more read and write policy registers configured to store data used to enforce access policies.
20. The system of claim 16, wherein security attributes are automatically generated by system hardware and are immutable.
METHOD FOR ENFORCING RESOURCE ACCESS CONTROL IN COMPUTER SYSTEMS

FIELD OF THE INVENTION
The field of invention relates generally to computer systems and, more specifically but not exclusively, to methods for enforcing resource access control in computer systems, including systems on a chip.

BACKGROUND INFORMATION
Security issues relating to computer systems have become an ever increasing problem. Viruses, Trojans, malware, and the like are common threats that are well-known to most computer users. The level of threat is so pervasive that an entire industry has been created to address these problems via the use of security-related software and services, such as antivirus, antispyware, and firewall software. Most security attacks are targeted at the software level and are designed to access various operating system or file resources. For example, a virus may gain access to a computer system's files via download of an executable program containing the virus' code in a hidden manner. To prevent this type of attack, antivirus software may be used to "scan" downloaded files looking for known or suspicious code. As a result of security threats, many users employ security software.

Although less common, security attacks can also be made at the hardware level. However, there is no equivalent of security software to prevent access to system-level hardware resources and assets, such as configuration registers, range registers, and the like. As a result, system architects design in various hardware- and firmware-based security measures for controlling access to important system resources. This is typically done on a per-system basis, leading to replication of design, debug, and validation work and to inconsistent management of security across system designs.

BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified:
Figure 1 is a system schematic diagram illustrating an exemplary System on a Chip architecture and corresponding communication paths associated with transactions initiated by devices in the system;
Figure 2 shows an overview of a secure access mechanism employing policy registers in accordance with one embodiment of the invention;
Figure 3 shows an exemplary set of read and write policy registers in accordance with one embodiment of the invention;
Figure 4 shows an exemplary control policy register in accordance with one embodiment of the invention;
Figure 5 shows the SoC of Figure 1, further including a bus implementing a proprietary protocol and a mapper for mapping security attributes between the proprietary protocol and a protocol employed by the SoC fabrics; and
Figure 6 shows an exemplary transaction and the associated secure access enforcement mechanism facilities using the SoC of Figure 1, according to one embodiment of the invention.

DETAILED DESCRIPTION
Embodiments of methods and apparatus for enforcing resource access control are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention.
One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

An architecture corresponding to an exemplary system on a chip (SoC) 100 is shown in Figure 1. SoC 100 includes one or more processor cores 102, each including a local cache 104 coupled to a memory fabric 106. SoC 100 also includes one or more accelerators 108 coupled to memory 110, which in turn is coupled to memory fabric 106 via core interface 112. Memory fabric 106 also includes a memory interface 114 and a south-facing interface 116. Memory interface 114 facilitates communication with dynamic random access memory (DRAM) 118. South-facing interface 116 provides an interconnect between the memory fabric and an IO fabric 120. IO fabric 120 supports input/output (I/O) communications with various IO devices, illustrated by devices 122 and 124, each coupled to respective memory 126 and 128. IO fabric 120 also provides an interface between Static Random Access Memory (SRAM) 130 and the rest of the system components.

During operation of SoC 100, various system components may access SoC assets held or provided by other components/devices. For example, processor 102 may access each of DRAM 118, accelerator 108, memory 110, device 124, memory 128, and SRAM 130, as depicted by respective communication paths 132, 134, 136, 138, 140, and 142. Similarly, various IO devices may access other assets, such as devices and memory resources, as depicted by communication paths 144, 146, 148, 150, and 152.

The processor cores, accelerators, and devices interact with each other to process workloads handled by SoC 100. Interaction is facilitated, in part, by accessing memory and storage resources and/or registers associated with the cores, accelerators, and devices, as well as common memory resources such as DRAM 118, SRAM 130, etc. Components that initiate such system resource access requests are referred to herein as "initiators." As can be seen in the architecture of Figure 1, some of the initiators, such as processor core 102 and accelerator 108, are internal components that are built into SoC 100, while other initiators, such as IO devices 122 and 124, may be internal or external to the SoC, depending on their particular function. Also external to the SoC are software and firmware entities that may attempt to access internal or external resources through internal or external initiators in the SoC. As a result, workloads of varying degrees of trustworthiness may be executing at the same time. The SoC 100 includes data and hardware assets, such as configuration registers, range registers, etc., that must be protected against unauthorized access.
Currently, controlling access to these data and hardware assets is handled in an ad-hoc and fragmentary manner for each SoC by the particular architect of the SoC. Previously, there has been no comprehensive support in the SoC fabrics and interfaces to unambiguously determine the privileges of an initiator. Recent advances in SoC architectures have introduced memory and IO fabrics that support coherency across both internal (e.g., via memory fabric 106) and external (e.g., via IO fabric 120) memory resources. This is facilitated, in part, through a memory access and coherency framework. In some embodiments, this framework is utilized to define a uniform access control architecture that may be implemented across SoC architectures to support secure access to resources in a consistent manner. In one embodiment, memory fabric 106 and IO fabric 120 employ Intel® QuickPath Interconnect (QPI) frameworks. In general, each of the memory fabric and the IO fabric comprises interconnects with corresponding control logic for facilitating transactions between devices and resources connected to the interconnects.

In one embodiment, security attributes are assigned to subjects/initiators and used to determine the access rights (i.e., read, write, no access, etc.) of the initiators. These Security Attributes of the Initiator or SAI represent immutable properties of the initiator used for making access decisions. In one embodiment these security attributes are generated by SoC hardware and must accompany every transaction. In one embodiment read and write access policy registers are employed for implementing policies. Additionally, in one embodiment a control policy register is employed that determines what entity or entities can configure the read and write policy registers.

Figure 2 shows an overview of an exemplary implementation of an SAI-based security scheme. Under this example, initiators I0, I1, ... In are shown accessing objects O0, O1, ... On. Access control for accessing objects that are coupled to a memory fabric 106 (i.e., fabric-based access) is facilitated via memory fabric read and write policy registers 200 and 202. Similarly, access control for accessing external targets (i.e., target-based access), such as IO devices, is facilitated via read and write policy registers 204 and 206. In the example of Figure 2, subject S0 desires to perform a read access to an object O0 (not shown) coupled to memory fabric 106. Each of the initiators I0, I1, ... In is assigned a set of security attributes SA, which define the access rights of each initiator as enforced by the SAI security scheme via associated policy registers. Information conveying the set of security attributes SA applicable to a subject is forwarded with each access message initiated by the subject, as described below in further detail. The policy registers store security attribute data for securely controlling access to corresponding objects. If the security attributes of an initiator subject match the security attributes required to access an object, the transaction is allowed to proceed. Conversely, if an initiator subject does not have the proper security attributes (as identified via its SAI information forwarded with its access messages), the transaction will be denied, with a corresponding message being returned to the initiator subject.
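To make the enforcement flow concrete, the following is a minimal C model of the fabric-side check, assuming (purely for illustration) that an SAI small enough to index a 32-bit policy register accompanies each transaction; the type names, widths, and field choices here are this sketch's assumptions, not details from any particular SoC implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of a read/write policy register pair: bit i of each
 * register records whether the initiator carrying SAI i may perform that
 * kind of access against the guarded target. */
typedef enum { ACCESS_READ, ACCESS_WRITE } access_type_t;

typedef struct {
    uint32_t read_policy;   /* bit i == 1: SAI i may read the target  */
    uint32_t write_policy;  /* bit i == 1: SAI i may write the target */
} policy_regs_t;

/* The SAI forwarded with the transaction serves as an index into the
 * applicable policy register; a 1 allows the access and a 0 denies it,
 * in which case a denial message would be returned to the initiator. */
static bool access_allowed(const policy_regs_t *p, uint8_t sai,
                           access_type_t type)
{
    uint32_t policy = (type == ACCESS_READ) ? p->read_policy
                                            : p->write_policy;
    return ((policy >> (sai & 0x1Fu)) & 1u) != 0;
}
```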
Access Control Architecture

As discussed above, the term Security Attributes of the Initiator (SAI) is defined to represent the immutable properties of a subject or initiator used for making access decisions. In one embodiment, these attributes are generated by hardware entities and accompany each transaction initiated by a corresponding subject or initiator. Unlike source IDs, SAIs do not get transformed at bridges; they persist until the point of policy enforcement. Policy registers are employed for defining the policies for read and write access to an asset and for restricting the entity that can configure or update these policies. In one embodiment, the access control architecture is comprised of the following building blocks: SAI, SAI Generator, SAI Mapper, Read Policy Registers, Write Policy Registers and Control Policy Registers. Additionally, in one embodiment wrappers are used to enforce SAI for external ports to ensure that their accesses are appropriately characterized.

SAI

Security Attributes of the Initiator or SAI represents the immutable properties of the initiators (and subjects) which are inspected to determine access to targets in an SoC platform. In one embodiment, these properties include a role, device mode and system mode. An initiator may have any combination of these properties. A role is assigned to a group of subjects/initiators with similar privileges. Roles are static and are assigned by the SoC architect. In one embodiment, the mapping of roles to subjects/initiators can be any of the following:

R[0..n] -> S[0..n]: Each subject/initiator may have its own unique role.
R[0] -> S[0..n]: Multiple subjects/initiators may be grouped under the same role.
R[0..n] -> S[0]: Multiple roles may be assigned to the same subject/initiator.

The Device mode is dynamic and captures the current internal mode of a device. For example, the mode could be a secure or normal mode. The System mode is dynamic and indicates the mode driven by a processor core. In one embodiment, the processor cores are IA cores, based on Intel 32- or 64-bit architecture (known in the industry as IA). For example, the system mode may be in SMM (System Management Mode) or secure mode, etc. Additionally, for multi-threaded initiators, a context attribute for indicating the current thread is defined; these attributes would accompany the SAI.

SAI Generator

SAI is an encoding that is generated by SoC hardware by a function whose input parameters include Role, Device Mode and System Mode. The interpretation of an SAI is specific to each SoC, and defined by the SoC architect. As an example implementation, under a 7-bit SAI encoding, bit 6 set to 1 could indicate an access by a processor core. If bit 6 is set to 0, then bits 5-0 could be used for encoding device accesses. For example, 1000001b represents IA core access and 0010000b represents a device access. Of course, this is merely exemplary, as the number of bits and format of the SAI encoding may be configured by the architect.
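A minimal C sketch of generator logic matching the 7-bit example above; the field layout chosen here (role in bits 3-0, device mode in bit 4, system mode in bit 5, core/device select in bit 6) is an assumption made purely for illustration, since the actual function and encoding are SoC-specific and defined by the architect.

```c
#include <stdint.h>

/* Hypothetical field layout for a 7-bit SAI, following the example in the
 * text: bit 6 = 1 for processor-core accesses, bit 6 = 0 for device
 * accesses. The sub-field positions for role, device mode, and system
 * mode are assumptions for this sketch. */
#define SAI_CORE_BIT  (1u << 6)

typedef struct {
    uint8_t role;        /* static role assigned by the SoC architect */
    uint8_t device_mode; /* dynamic: e.g., 1 = secure, 0 = normal     */
    uint8_t system_mode; /* dynamic: e.g., 1 = SMM/secure mode        */
} initiator_state_t;

/* Generate the 7-bit SAI that must accompany every transaction. With
 * role = 1 and is_core = 1 this yields 1000001b, the IA-core example
 * from the text; role = 0 with device_mode = 1 yields 0010000b. */
static uint8_t sai_generate(const initiator_state_t *s, int is_core)
{
    uint8_t sai = 0;
    if (is_core)
        sai |= SAI_CORE_BIT;
    sai |= (uint8_t)(s->role & 0x0Fu);          /* bits 3-0 (assumed) */
    sai |= (uint8_t)((s->device_mode & 1u) << 4); /* bit 4 (assumed)  */
    sai |= (uint8_t)((s->system_mode & 1u) << 5); /* bit 5 (assumed)  */
    return sai & 0x7Fu;
}
```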
SAI Mapper

The I/O devices in some SoCs are connected to non-vendor (i.e., not the vendor of the SoC) or legacy vendor fabrics. For example, some SoCs may incorporate OCP (Open Core Protocol), AMBA (Advanced Microcontroller Bus Architecture), IOSF (Intel On-Chip System Fabric) or other proprietary bus protocols. SAI Mappers are responsible for mapping the security attributes or SAIs that accompany transactions generated by agents in an SoC vendor's standard fabrics to security attributes that can be interpreted in the SoC-specific device domain (e.g., the OCP domain). Similarly, for upstream transactions generated by devices in non-vendor fabrics, the security attributes generated by the devices have to be mapped to SAIs that can be interpreted in the memory/coherency and IOSF domains. Typically these mappers may be implemented in the bridges that map one fabric protocol to another. In some embodiments, these mappers are implemented securely in hardware and cannot be manipulated. An exemplary implementation of an SAI mapper is shown in Figure 5. In this example, a non-vendor or legacy vendor bus 500, such as an OCP, AMBA or IOSF bus, is coupled to IO fabric 120 via a bridge 502. One or more devices 504 with memory 506 are coupled to bus 500, wherein access to these devices is in accordance with the protocol implemented by bus 500. Meanwhile, a different protocol is implemented for transactions to access assets and resources connected to memory fabric 106 and IO fabric 120 in SoC 100. To facilitate transactions between devices connected to bus 500 and SoC 100, bridge 502 employs an SAI mapper 508 to map SAI data between the two protocols.

Read and Write Policy Registers

The Read and Write Policy Registers contain the read and write permissions that are defined for each initiator by the SoC architect. The SAI accompanying the transaction serves as an index to the policy register. As an example, in one embodiment a 32-bit read and write policy register is defined in the memory fabric. A corresponding pair of read and write policy registers 300 and 302 are shown in Figure 3, wherein 1's indicate access is allowed and 0's indicate access is denied. In general, the SAI width is n bits. The value of n may change from one generation to another and/or differ between products. In one embodiment the encoding space is 2^(n-1), where one of the n bits is used to differentiate core vs. device encodings. Use of a 32-bit register is merely exemplary, as the actual encodings will generally be specific to a product. SAI assignment to an initiator is flexible and depends on the particular product. For example, there could be one SAI per initiator, multiple SAIs per initiator, or multiple initiators grouped under one SAI. The foregoing example employing a bit vector using a 32-bit register is merely one technique for effecting read and write permissions. Other techniques may also be readily employed, including schemes employing longer or shorter bit vectors, schemes including a hierarchy of permission rules implemented using one or more registers or analogous storage mechanisms, and various other permission logic that may be implemented via hardware, microcode, firmware, etc.

Control Policy Register

The contents of the Control Policy Register define the trusted entity that is allowed to configure the Read and Write Policy Registers. The Control Policy Register is a self-referential register; the SAI specified in the Control Policy Register is allowed to modify the read and write policy registers as well as overwrite the contents of the Control Policy Register. By allowing a single trusted entity to configure the control policy register, the implication is that access to the policy registers is locked to all other agents. The entity specified by the SAI in the Control Policy Register may choose to extend the set of agents that can configure the Policy Registers beyond the initial value loaded at power-on/reset, or the trusted entity may write 0s into the control policy register, thus locking it until the next system reset/power-on. This provides flexibility for the SoC architect to implement locking down the policy registers until the next reset or to allow the policy to be updated by a trusted entity during runtime.
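A compact C model of that self-referential gating, under the same illustrative assumptions as the earlier sketches (bit-vector registers indexed by SAI; all names and widths are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model: the control policy register holds a bit vector of
 * SAIs trusted to reconfigure policy. Writing all zeros locks the policy
 * registers until the next reset/power-on. */
typedef struct {
    uint32_t read_policy;
    uint32_t write_policy;
    uint32_t control_policy; /* bit i == 1: SAI i may configure policy */
} policy_block_t;

static bool sai_trusted(const policy_block_t *b, uint8_t sai)
{
    return ((b->control_policy >> (sai & 0x1Fu)) & 1u) != 0;
}

/* Attempt a policy update on behalf of the initiator identified by `sai`.
 * Returns false (denied) unless the SAI is named in the control policy
 * register; a permitted update may also rewrite the control register
 * itself, extending or locking the set of trusted configurators. */
static bool policy_update(policy_block_t *b, uint8_t sai,
                          uint32_t rd, uint32_t wr, uint32_t ctl)
{
    if (!sai_trusted(b, sai))
        return false;          /* locked to all other agents          */
    b->read_policy = rd;
    b->write_policy = wr;
    b->control_policy = ctl;   /* ctl == 0 locks until the next reset */
    return true;
}
```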
An exemplary 32-bit Control Policy Register 400 is shown in Figure 4.

Figure 6 depicts an example of securely enforcing device accesses to memory. Under this example, device 122 initiates a transaction (e.g., read or write) to access DRAM 118. At an I/O bridge 152, an appropriate SAI is generated via the bridge hardware; this SAI will be forwarded with the transaction message across interfaces until reaching an applicable security enforcement entity, which in this case comprises policy registers 156 in memory fabric 106. At policy registers 156, the SAI will be inspected and evaluated against the applicable policy register in accordance with the type of transaction, e.g., read or write.

The SAI secure access enforcement scheme disclosed herein provides many advantages over current approaches. It defines uniform access control building blocks such as SAI generators, SAI mappers, policy registers, etc. that can be employed consistently across SoC designs. It applies to SoC fabrics in a uniform manner. These benefits are achieved by associating a persistent attribute, the SAI, with each transaction. By forwarding SAI data within existing formats of transaction messages, support for adding access security measures can be achieved within existing interconnect frameworks, such as QPI. An SoC can use the SAI information to enforce access control on transactions generated by all initiators that target SoC assets such as memory, uncore registers, I/O devices, etc. SAIs can be used to allow exclusive access to memory regions to specific I/O devices, or exclusive access to SoC assets when the processor runs in specific modes. The access control architecture is a powerful new paradigm that allows evaluation of all access control decisions within a consistent and modular framework. By carrying the SAI information persistently across interconnects, we simplify design, debug and validation of access control assertions, since the initiator security role is immediately available across all micro-architectural structures that buffer transactions.

The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the drawings. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
A bandgap voltage reference circuit (1) produces a bandgap voltage reference (Vref) on an output terminal (3) relative to a common ground voltage terminal (4). The circuit (1) develops a PTAT voltage across a primary resistor (r3) which is reflected and gained up across an output resistor (r4) and summed with a CTAT voltage to produce the voltage reference (Vref). A first circuit comprising a PTAT voltage cell (15) having first and second transistor stacks of first and second transistors (Q1, Q2) and (Q3, Q4) operated at different current densities develops a PTAT voltage (2ΔVbe) across a first resistor (r1). The PTAT voltage developed across the first resistor (r1) is applied to an inverting input of a first op-amp (A1), the output of which is coupled to a first end (9) of the primary resistor (r3). A first voltage level relative to the ground terminal (4) is applied to the first end (9) of the primary resistor (r3) through a feedback loop of the first op-amp (A1) having a second resistor (r2) and a third transistor (Q5), similar to the first transistors (Q1, Q2). A second end (11) of the primary resistor (r3) is held at a second voltage level of one first base-emitter voltage relative to the ground terminal (4) by a second op-amp (A2) so that a PTAT voltage is developed across the primary resistor (r3) by the difference of the first voltage level and the second voltage level. The PTAT voltage developed across the primary resistor (r3) is reflected and gained up across the output resistor (r4) in a negative feedback loop (20) of the second op-amp (A2) and is summed with the first base-emitter voltage derived from the first transistor (Q2) to produce the bandgap voltage reference (Vref) on the output terminal (3), which is given by the equation:

Vref = Vbe(1) + 2ΔVbe(1 + r2/r1)(r4/r3)

FIG. 5 to accompany the abstract.
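Read as a temperature balance, the reference equation sums one CTAT base-emitter voltage with a gained-up PTAT difference term; restated in LaTeX (the CTAT/PTAT labelling follows the standard bandgap analysis recapped in the background section below, not additional circuit detail):

```latex
\[
V_{\mathrm{ref}}
  = \underbrace{V_{be}(1)}_{\text{CTAT, falls with }T}
  + \underbrace{2\,\Delta V_{be}\left(1 + \frac{r_2}{r_1}\right)\frac{r_4}{r_3}}_{\text{PTAT, rises with }T}
\]
```

The resistor ratios r2/r1 and r4/r3 set the PTAT gain so that the two temperature coefficients cancel at the reference temperature.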
The invention claimed is:

1. A PTAT voltage generating circuit comprising: a primary impedance element across which a PTAT voltage is developed, a first circuit for generating a first voltage level for applying to a first end of the primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, the first circuit comprising a first transistor stack having at least one first transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and a second circuit for generating a second voltage level for applying to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the second circuit co-operating with the first circuit and with the primary impedance element so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage.

2. A PTAT voltage generating circuit as claimed in claim 1 in which the first impedance element is coupled between one of the inverting and non-inverting inputs of the first op-amp and one of the first and second transistor stacks, and the other of the inverting and non-inverting inputs of the first op-amp is coupled to the other one of the first and second transistor stacks.

3. A PTAT voltage generating circuit as claimed in claim 2 in which a second impedance element is coupled to the one of the inverting and non-inverting inputs of the first op-amp to which the first impedance element is coupled for setting the closed loop gain of the first op-amp, and the voltage difference developed across the first impedance element is reflected onto the second impedance element, the second impedance element being coupled to the first end of the primary impedance element for applying the first voltage level to the first end of the primary impedance element.

4. A PTAT voltage generating circuit as claimed in claim 3 in which the first op-amp co-operates with the second transistor stack for combining at least one of the second base-emitter voltages with the voltage developed across the second impedance element for producing the first voltage level.

5. A PTAT voltage generating circuit as claimed in claim 3 in which the second impedance element is coupled to the first end of the primary impedance element through the base-emitter of at least one third transistor, each third transistor developing a first base-emitter voltage for combining with the voltage developed across the second impedance element for producing the first voltage level.

6. A PTAT voltage generating circuit as claimed in claim 5 in which the number of first base-emitter voltages developed in the first voltage level by the third transistors is equal to the number P of first base-emitter voltages in the second voltage level.

7. A PTAT generating circuit as claimed in claim 4 in which the number of first base-emitter voltages developed in the first transistor stack is greater than the number of second base-emitter voltages developed in the second transistor stack, the difference between the number of first base-emitter voltages developed in the first transistor stack and the number of second base-emitter voltages developed in the second transistor stack is equal to the number P of first base-emitter voltages provided in the second voltage level.

8. A PTAT generating circuit as claimed in claim 3 in which the value of the first base-emitter voltages in the first voltage level derived from the first transistor stack is equal to the product of the number of first base-emitter voltages developed in the first transistor stack by the ratio of the impedance of the second impedance element to the impedance of the first impedance element, and the value of the second base-emitter voltages in the first voltage level derived from the second transistor stack is equal to the sum of the number of second base-emitter voltages developed in the second transistor stack plus the product of the number of second base-emitter voltages developed in the second transistor stack by the ratio of the impedance of the second impedance element to the impedance of the first impedance element.

9. A PTAT voltage generating circuit as claimed in claim 8 in which the M second base-emitter voltages from which the first voltage level is produced are derived from the second transistor stack.

10. A PTAT voltage generating circuit as claimed in claim 1 in which the P first base-emitter voltages from which the second voltage level is produced are applied to one of the inverting and non-inverting inputs of the second op-amp, and the second end of the primary impedance element is coupled to the other of the inverting and non-inverting inputs of the second op-amp, so that as the second op-amp operates to maintain the voltages on the respective inverting and non-inverting inputs thereof similar, the second voltage level is applied to the second end of the primary impedance element.

11. A PTAT generating circuit as claimed in claim 10 in which an output impedance element co-operates with the primary impedance element for setting the closed loop gain of the second op-amp, the voltage developed across the primary impedance element being reflected across the output impedance element by the ratio of the impedance of the output impedance element to the impedance of the primary impedance element for providing an output voltage comprising a PTAT voltage across the output impedance element.

12. A PTAT voltage generating circuit as claimed in claim 11 in which the first end of the primary impedance element is coupled to the output of one of the first and second op-amps, and the output impedance element is coupled between the one of the inverting and non-inverting inputs of the second op-amp to which the primary impedance element is coupled and the output of the one of the first and second op-amps to which the primary impedance element is not coupled.

13. A PTAT voltage generating circuit as claimed in claim 12 in which the one of the primary impedance element and the output impedance element which is coupled to the output of the second op-amp is coupled to one of the inverting and non-inverting inputs of the second op-amp to provide negative feedback from the output of the second op-amp.

14. A PTAT voltage generating circuit as claimed in claim 12 in which the first end of the primary impedance element is coupled to the output of the first op-amp.

15. A PTAT voltage generating circuit as claimed in claim 12 in which the first end of the primary impedance element is coupled to the output of the second op-amp.

16. A PTAT voltage generating circuit as claimed in claim 1 in which the number of second base-emitter voltages developed in the second transistor stack is at least two second base-emitter voltages.

17. A PTAT voltage generating circuit as claimed in claim 1 in which the number of first base-emitter voltages developed in the first transistor stack is equal to or greater than the number of second base-emitter voltages developed in the second transistor stack.

18. A PTAT voltage generating circuit as claimed in claim 1 in which the first current density at which the first transistors are operated is greater than the second current density at which the second transistors are operated.

19. A PTAT voltage generating circuit as claimed in claim 1 in which the first and second voltage levels are referenced to a common ground reference voltage of the PTAT voltage generating circuit.

20. A PTAT voltage generating circuit as claimed in claim 1 in which each first and second transistor is provided by a bipolar substrate transistor.

21. A PTAT voltage generating circuit as claimed in claim 1 in which each impedance element is a resistive impedance element.

22. A PTAT voltage generating circuit as claimed in claim 1 in which the circuit is implemented in a CMOS process.

23. A bandgap voltage reference circuit for producing a bandgap voltage reference, the bandgap voltage reference circuit comprising the PTAT voltage generating circuit as claimed in claim 1 for generating a PTAT voltage for summing with a CTAT voltage, and a means for summing the PTAT voltage with the CTAT voltage for providing the bandgap voltage reference.

24. A bandgap voltage reference circuit for producing a bandgap voltage reference, the bandgap voltage reference circuit comprising: a CTAT voltage source for developing a CTAT voltage, a PTAT voltage source for developing a PTAT voltage for summing with the CTAT voltage, the PTAT voltage source comprising: a primary impedance element across which a PTAT voltage is developed, a first circuit for generating a first voltage level for applying to a first end of the primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, the first circuit comprising a first transistor stack having at least one first transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and a second circuit for generating a second voltage level for applying to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the second circuit co-operating with the first circuit and with the primary impedance element so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage, and a means for summing the PTAT voltage with the CTAT voltage.

25. A bandgap voltage reference circuit as claimed in claim 24 in which the first impedance element is coupled between one of the inverting and non-inverting inputs of the first op-amp and one of the first and second transistor stacks, and the other of the inverting and non-inverting inputs of the first op-amp is coupled to the other one of the first and second transistor stacks.

26. A bandgap voltage reference circuit as claimed in claim 25 in which a second impedance element is coupled to the one of the inverting and non-inverting inputs of the first op-amp to which the first impedance element is coupled for setting the closed loop gain of the first op-amp, and the voltage difference developed across the first impedance element is reflected onto the second impedance element, the second impedance element being coupled to the first end of the primary impedance element for applying the first voltage level to the first end of the primary impedance element.

27. A bandgap voltage reference circuit as claimed in claim 26 in which the first op-amp co-operates with the second transistor stack for combining at least one of the second base-emitter voltages with the voltage developed across the second impedance element for producing the first voltage level.

28. A bandgap voltage reference circuit as claimed in claim 26 in which the second impedance element is coupled to the first end of the primary impedance element through the base-emitter of at least one third transistor, each third transistor developing a first base-emitter voltage for combining with the voltage developed across the second impedance element for producing the first voltage level.

29. A bandgap voltage reference circuit as claimed in claim 28 in which the number of first base-emitter voltages developed in the first voltage level by the third transistors is equal to the number P of first base-emitter voltages in the second voltage level.

30. A bandgap voltage reference circuit as claimed in claim 27 in which the number of first base-emitter voltages developed in the first transistor stack is greater than the number of second base-emitter voltages developed in the second transistor stack, the difference between the number of first base-emitter voltages developed in the first transistor stack and the number of second base-emitter voltages developed in the second transistor stack is equal to the number P of first base-emitter voltages provided in the second voltage level.

31. A bandgap voltage reference circuit as claimed in claim 27 in which the value of the first base-emitter voltages in the first voltage level derived from the first transistor stack is equal to the product of the number of first base-emitter voltages developed in the first transistor stack by the ratio of the impedance of the second impedance element to the impedance of the first impedance element, and the value of the second base-emitter voltages in the first voltage level derived from the second transistor stack is equal to the sum of the number of second base-emitter voltages developed in the second transistor stack plus the product of the number of second base-emitter voltages developed in the second transistor stack by the ratio of the impedance of the second impedance element to the impedance of the first impedance element.

32. A bandgap voltage reference circuit as claimed in claim 31 in which the M second base-emitter voltages from which the first voltage level is produced are derived from the second transistor stack.

33. A bandgap voltage reference circuit as claimed in claim 24 in which the P first base-emitter voltages from which the second voltage level is produced are applied to one of the inverting and non-inverting inputs of the second op-amp, and the second end of the primary impedance element is coupled to the other of the inverting and non-inverting inputs of the second op-amp, so that as the second op-amp operates to maintain the voltages on the respective inverting and non-inverting inputs thereof similar, the second voltage level is applied to the second end of the primary impedance element.

34. A bandgap voltage reference circuit as claimed in claim 33 in which an output impedance element is provided for co-operating with the primary impedance element so that the voltage developed across the primary impedance element is reflected onto the output impedance element by the ratio of the impedance of the output impedance element to the impedance of the primary impedance element for providing the PTAT voltage on the output impedance element for summing with the CTAT voltage.

35. A bandgap voltage reference circuit as claimed in claim 34 in which the output impedance element co-operates with the primary impedance element for setting the closed loop gain of the second op-amp.

36. A bandgap voltage reference circuit as claimed in claim 34 in which the P first base-emitter voltages of the second voltage level form the CTAT voltage, and the second op-amp co-operates with the output impedance element for forming the summing means for summing the CTAT voltage provided by the P first base-emitter voltages with the PTAT voltage developed across the output impedance element for providing the bandgap voltage reference.

37. A bandgap voltage reference circuit as claimed in claim 36 in which the first and second voltage levels are referenced to a common ground reference voltage of the bandgap voltage reference circuit, and the bandgap voltage reference is derived from the end of the output impedance element which is coupled to the output of one of the first and second op-amps, and is referenced to the common ground voltage.

38. A bandgap voltage reference circuit as claimed in claim 34 in which the first end of the primary impedance element is coupled to the output of one of the first and second op-amps, and the output impedance element is coupled between the one of the inverting and non-inverting inputs of the second op-amp to which the primary impedance element is coupled and the output of the one of the first and second op-amps to which the primary impedance element is not coupled.

39. A bandgap voltage reference circuit as claimed in claim 38 in which the one of the primary impedance element and the output impedance element which is coupled to the output of the second op-amp is coupled to one of the inverting and non-inverting inputs of the second op-amp to provide negative feedback from the output of the second op-amp.

40. A bandgap voltage reference circuit as claimed in claim 38 in which the first end of the primary impedance element is coupled to the output of the first op-amp.

41. A bandgap voltage reference circuit as claimed in claim 38 in which the first end of the primary impedance element is coupled to the output of the second op-amp.

42. A bandgap voltage reference circuit as claimed in claim 24 in which the number of second base-emitter voltages developed in the second transistor stack is at least two second base-emitter voltages.

43. A bandgap voltage reference circuit as claimed in claim 24 in which the number of first base-emitter voltages developed in the first transistor stack is equal to or greater than the number of second base-emitter voltages developed in the second transistor stack.

44. A bandgap voltage reference circuit as claimed in claim 24 in which the first current density at which the first transistors are operated is greater than the second current density at which the second transistors are operated.

45. A bandgap voltage reference circuit as claimed in claim 24 in which the first and second voltage levels are referenced to a common ground reference voltage of the PTAT voltage generating circuit.

46. A bandgap voltage reference circuit as claimed in claim 24 in which each first and second transistor is provided by a bipolar substrate transistor.

47. A bandgap voltage reference circuit as claimed in claim 24 in which each impedance element is a resistive impedance element.

48. A bandgap voltage reference circuit as claimed in claim 24 in which the emitters of the first and second transistors of the respective first and second transistor stacks are forward biased with a PTAT current.

49. A bandgap voltage reference circuit as claimed in claim 48 in which the bandgap voltage reference is provided with TlnT temperature curvature correction.

50. A bandgap voltage reference circuit as claimed in claim 49 in which the forward biasing current of at least one of the second transistors of the second transistor stack comprises a CTAT current component for providing the TlnT temperature curvature correction of the bandgap voltage reference.

51. A method for generating a PTAT voltage across a primary impedance element, the method comprising the steps of: applying a first voltage level to a first end of the primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, the first voltage level being produced by a first circuit comprising a first transistor stack having at least one first transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and applying a second voltage level to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second voltage level being produced by a second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the first and second voltage levels being applied to the respective first and second ends of the primary impedance element by the first and second circuits so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage.

52. A method for generating a bandgap voltage reference comprising the steps of: providing a CTAT voltage from a CTAT voltage source, providing a PTAT voltage for summing with the CTAT voltage, the PTAT voltage being provided by applying a first voltage level to a first end of a primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, the first voltage level being produced by a first circuit comprising a first transistor stack having at least one first transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and applying a second voltage level to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second voltage level being produced by a second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the first and second voltage levels being applied to the respective first and second ends of the primary impedance element by the first and second circuits so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage, and summing the PTAT voltage developed across the primary impedance element with the CTAT voltage.
FIELD OF THE INVENTION

The present invention relates to a PTAT voltage generating circuit for producing a PTAT voltage, and the invention also relates to a method for producing a PTAT voltage. The invention further relates to a bandgap voltage reference circuit for producing a bandgap voltage reference, and to a method for producing a bandgap voltage reference. In particular, the invention relates to a PTAT voltage generating circuit and to a method for generating a PTAT voltage, which is suitable for operating in relatively low supply voltage environments, and in which the effect of op-amp voltage offsets is minimised. Additionally, the invention relates to a bandgap voltage reference circuit and a method for producing a bandgap voltage reference which is suitable for operating in relatively low supply voltage environments, and in which the effect of op-amp voltage offsets in the bandgap voltage reference is minimised.

BACKGROUND TO THE INVENTION

Bandgap voltage reference circuits operate on the principle of adding two voltages having equal and opposite temperature coefficients to produce a bandgap voltage reference. This is typically achieved by adding the base-emitter junction voltage of a forward biased transistor which is complementary to absolute temperature (CTAT), and thus decreases with absolute temperature, to a voltage which is proportional to absolute temperature (PTAT), and thus increases with absolute temperature. Typically, the PTAT voltage is developed by amplifying the voltage difference of the base-emitter voltages of two forward biased transistors operating at different current densities.

In FIG. 1 a typical prior art CMOS bandgap voltage reference circuit is illustrated. A CTAT voltage is derived from the base-emitter voltage of a first substrate bipolar transistor Q1, the temperature dependent base-emitter voltage of which is given by the following equation:

Vbe(T) = VG0 - (VG0 - Vbe(T0))·(T/T0) - σ·(K·T/q)·ln(T/T0) + (K·T/q)·ln(Ic(T)/Ic(T0))

where
Vbe(Q1) is the temperature dependent base-emitter voltage of the first bipolar transistor Q1,
VG0 is the bandgap energy voltage, assumed to be about 1.205 volts for silicon,
T is the operating absolute temperature,
T0 is the reference absolute temperature, generally the middle point in the temperature range,
Vbe(T0) is the base-emitter voltage of the first transistor Q1 at the reference temperature T0,
K is Boltzmann's constant,
q is the electron charge,
Ic(T) is the collector current in the first bipolar transistor Q1 at temperature T,
Ic(T0) is the collector current in the first bipolar transistor Q1 at the reference temperature T0,
σ is the saturation current temperature exponent of the first bipolar transistor Q1.

A PTAT voltage which is derived from the difference of the base-emitter voltages of the first transistor Q1, and a second substrate bipolar transistor Q2, is developed across a first resistor r1 and is scaled onto a second resistor r2. The scaled PTAT voltage across the second resistor r2 is summed with the CTAT voltage of the first transistor Q1 to provide the bandgap voltage reference Vref across an output terminal 100 and a ground terminal 101.

The bases of the first and second transistors Q1 and Q2 are coupled to the ground terminal 101, and thus are held at a common base voltage, namely, ground. The emitter area of the second transistor Q2 is n2 times the emitter area of the first transistor Q1, and the first transistor Q1 is operated at a higher current density than the second transistor Q2.
An operational amplifier (op-amp) A1 holds its respective inverting input Inn and its non-inverting input Inp at substantially the same voltage, and thus the difference in the base-emitter voltages of the first and second transistors Q1 and Q2, which is a PTAT voltage, is developed across the first resistor r1. As a result, the current flowing through the first resistor r1 is a PTAT current Ip. The PTAT current Ip flowing through the resistor r1 is drawn through a pMOS transistor M2 of a current mirror circuit, which also comprises pMOS transistors M1 and M3. By providing the pMOS transistor M1 as a diode connected transistor with the same aspect ratio (W/L) as the pMOS transistor M2, and by providing the pMOS transistor M3 with an aspect ratio n1 times larger than the aspect ratio of the pMOS transistors M1 and M2, the current flowing through the second resistor r2 which forward biases the first transistor Q1 is a PTAT current of value n1·Ip. Accordingly, the difference in base-emitter voltages of the first and second transistors Q1 and Q2 developed across the first resistor r1 is:

Vr1 = ΔVbe = (K·T/q)·ln(n1·n2)

where
Vr1 is the voltage developed across the resistor r1 at temperature T,
ΔVbe is the difference in the base-emitter voltages of the first and second transistors Q1 and Q2,
n1 is the aspect ratio of the pMOS transistor M3 to the pMOS transistor M1,
n2 is the ratio of the emitter area of the second transistor Q2 to the emitter area of the first transistor Q1.

The scaled value of the difference in the base-emitter voltages developed across the resistor r2 is given by the equation:

Vr2 = n1·(r2/r1)·ΔVbe = n1·(r2/r1)·(K·T/q)·ln(n1·n2)

where
r1 is the resistance value of the resistor r1 and
r2 is the resistance value of the resistor r2.

Thus, the bandgap voltage reference Vref relative to ground is given by the equation:

Vref = Vbe(Q1) + n1·(r2/r1)·(K·T/q)·ln(n1·n2)

Bandgap voltage reference circuits have been well known in the art since the early 1970s, as is evidenced by the IEEE publications of Robert Widlar (IEEE Journal of Solid State Circuits Vol. SC-6 No. 1, February 1971) and A. Paul Brokaw (IEEE Journal of Solid State Circuits Vol. SC-9 No. 6, December 1974). A detailed discussion on bandgap voltage reference circuits including examples of prior art bandgap voltage reference circuits is provided in co-pending U.S. patent application Ser. No. 10/375,593 of Stefan Marinca, which was filed on Feb. 27, 2003, the contents of which are incorporated herein by reference. Bandgap voltage reference circuits are described in, for example, U.S. Pat. No. 4,808,908 of Lewis, et al. and U.S. Pat. No. 5,352,973 of Audy.

Typically, the CTAT base-emitter voltage of a bipolar transistor operating at room temperature is of the order of 0.7 volts, and the difference in the base-emitter voltages ΔVbe of two transistors operating at room temperature at different current densities is of the order of 100 millivolts or less. Thus, in order to balance the CTAT base-emitter voltage of a bipolar transistor, the PTAT voltage developed by the difference in the base-emitter voltages ΔVbe must be amplified by a gain factor of the order of five in order to provide a PTAT voltage of the order of 0.5 volts for summing with the CTAT voltage.
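As a quick numeric sanity check of these magnitudes, the short C program below evaluates the FIG. 1 relationship Vref = Vbe(Q1) + n1·(r2/r1)·(K·T/q)·ln(n1·n2) at room temperature. The values chosen for n1, n2, r2/r1 and Vbe are illustrative assumptions for this sketch, not component values from the patent; they are picked so that ΔVbe is about 100 mV and the gained-up PTAT term is about 0.5 V, matching the figures quoted above.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double k = 1.380649e-23;   /* Boltzmann's constant, J/K       */
    const double q = 1.602177e-19;   /* electron charge, C              */
    const double T = 300.0;          /* room temperature, K             */

    /* Illustrative (assumed) circuit ratios, not values from the patent. */
    const double n1 = 8.0;           /* mirror aspect-ratio multiple     */
    const double n2 = 6.0;           /* emitter-area ratio of Q2 to Q1   */
    const double r2_over_r1 = 0.625; /* chosen so n1*(r2/r1) is about 5  */
    const double Vbe = 0.7;          /* typical base-emitter voltage, V  */

    double Vt    = k * T / q;               /* thermal voltage, ~25.9 mV */
    double dVbe  = Vt * log(n1 * n2);       /* ~100 mV for n1*n2 = 48    */
    double Vptat = n1 * r2_over_r1 * dVbe;  /* gained-up PTAT, ~0.5 V    */
    double Vref  = Vbe + Vptat;             /* ~1.2 V bandgap reference  */

    printf("dVbe  = %.1f mV\n", dVbe * 1e3);
    printf("Vptat = %.3f V\n", Vptat);
    printf("Vref  = %.3f V\n", Vref);
    return 0;
}
```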
Accordingly, the PTAT voltage developed across the resistor r1 of the prior art bandgap circuit of FIG. 1 must be amplified by a factor of five to produce the PTAT voltage developed across the resistor r2 for summing with the CTAT base-emitter voltage of the transistor Q1. With the PTAT voltage so amplified, the bandgap voltage reference circuit of FIG. 1 produces a bandgap voltage reference of approximately 1.25 volts with a temperature curvature error TlnT of approximately 2.5 millivolts over a typical industrial temperature range of from -40° C. to +85° C. Correction of the voltage reference to remove the TlnT temperature curvature, which is described in U.S. Pat. No. 5,352,973 of Audy, typically results in the bandgap voltage reference being reduced to approximately 1.16 volts.

Due to process variations in CMOS processes, the bandgap voltage reference of bandgap voltage reference circuits varies from lot to lot, from wafer to wafer within the same lot, and indeed even from part to part from the same wafer. The variation in the bandgap voltage reference from wafer to wafer of the same lot is due largely to voltage offsets in the op-amp and in the current mirror circuit. Voltage offsets due to current mirror offsets can be reduced by replacing the MOS transistors of the current mirror circuit with resistors, as is illustrated in the prior art bandgap voltage reference circuit of FIG. 2.

The bandgap voltage reference Vref of the prior art bandgap voltage reference circuit of FIG. 2 is produced at the output of the op-amp A1 on a terminal 100, and is provided relative to ground 101. However, in the bandgap voltage reference circuit of FIG. 2, the input voltage offset of the op-amp and the input noise of the op-amp are amplified into the bandgap voltage reference by the closed loop gain G of the op-amp A1, which is given by the following equation: [mathematical formula - see original document]. In CMOS processes, op-amp input voltage offsets are typically of the order of millivolts, and where the PTAT base-emitter voltage difference ΔVbe is amplified by a factor of the order of five, the op-amp input voltage offset appears in the amplified PTAT voltage as a voltage error of more than 6 millivolts. The bandgap voltage reference of the circuit of FIG. 2 is of the order of 1.25 volts, and thus the voltage error resulting from op-amp input voltage offset is approximately 6 millivolts in the 1.25 volts bandgap voltage reference.

Bandgap voltage reference circuits have been provided to reduce the sensitivity of the bandgap voltage reference to op-amp voltage offsets, and one such prior art bandgap voltage reference circuit is illustrated in FIG. 3. The prior art bandgap voltage reference circuit of FIG. 3 comprises stacked first bipolar transistors Q1 and Q3, and stacked second bipolar transistors Q2 and Q4 of larger emitter areas than those of the first transistors Q1 and Q3. The stack of first transistors is operated at a higher current density than the stack of second transistors to produce a base-emitter voltage difference which is a PTAT voltage, and is developed across the resistor r1. In this case the PTAT voltage developed across the resistor r1 is 2ΔVbe, and is gained up across the resistor r4, and summed with the CTAT voltages developed by the two transistors Q1 and Q3 to produce the bandgap voltage reference Vref between the output of the op-amp A1 on a terminal 100 and ground 101. The forward biasing emitter currents for the first and second transistors Q1 to Q4 are generated directly from the bandgap voltage reference through the resistors r2, r3, r4 and r5.
However, the resistors r2, r4 and r5 could be replaced by a MOS current mirror device, if the error due to MOS transistors in the bandgap voltage reference could be tolerated.

Since the CTAT voltage of the bandgap voltage reference circuit of FIG. 3 is provided by the base-emitter voltages of two transistors, namely, the transistors Q1 and Q3, the CTAT voltage is approximately 1.4 volts. Additionally, since the PTAT voltage developed across the resistor r1 results from the difference in the base-emitter voltages of the two pairs of transistors operating at different current densities, the PTAT voltage developed across the resistor r1 is approximately 200 millivolts. To balance the CTAT voltage of 1.4 volts, the PTAT voltage developed across the resistor r1 must be amplified by a factor of five and developed across the resistor r4, in order to produce a PTAT voltage of approximately 1 volt for summing with the CTAT voltage. Thus, the bandgap voltage reference produced by the prior art bandgap voltage reference circuit of FIG. 3 is approximately 2.5 volts, and is greater than the bandgap voltage reference produced by the circuits of FIGS. 1 and 2. However, since the PTAT voltage is amplified by a factor of five, the input voltage offset of the op-amp of the prior art bandgap voltage reference circuit of FIG. 3 is also amplified by a factor of five. Assuming a similar input voltage offset for the op-amp of the circuit of FIG. 3 as that for the op-amp of the circuit of FIG. 2, the absolute value of the voltage error resulting from the op-amp voltage offset which is reflected into the bandgap voltage reference of the circuit of FIG. 3 is similar, at approximately 6 millivolts. However, the relative value of the op-amp voltage offset in the bandgap voltage reference is reduced to 6 millivolts in 2.5 volts, as opposed to the relative value of the op-amp voltage offset of 6 millivolts in the bandgap voltage reference of 1.25 volts in the prior art circuits of FIGS. 1 and 2. Accordingly, the relative contribution of the voltage offset of the op-amp in the bandgap voltage reference is reduced in the bandgap voltage reference circuit of FIG. 3, and thus the sensitivity of the bandgap voltage reference to such op-amp voltage offset is similarly reduced.

U.S. Pat. No. 6,614,209 of Gregoire discloses a bandgap voltage reference circuit which avoids the need to amplify the PTAT voltage, or at least minimises the gain by which the PTAT voltage must be amplified. By providing the PTAT voltage without amplification, or if amplification is required, by minimising the gain, the effect of op-amp voltage offset in the bandgap voltage reference is minimised. Gregoire couples a plurality of PTAT voltage cells in series so that the PTAT voltages developed by the respective cells are summed together, and the summed PTAT voltages are then summed with a CTAT voltage developed across the base-emitter of a bipolar transistor. Each PTAT voltage cell of the bandgap voltage reference circuit of Gregoire comprises an op-amp and two stacks of bipolar transistors, one of which is coupled to the inverting input of the corresponding op-amp, and the other of which is coupled to the non-inverting input of the op-amp. One of the stacks in each PTAT voltage cell of Gregoire comprises two transistors, while the other comprises three transistors.
U.S. Pat. No. 6,614,209 of Gregoire discloses a bandgap voltage reference circuit which avoids the need to amplify the PTAT voltage, or at least minimises the gain by which the PTAT voltage must be amplified. By providing the PTAT voltage without amplification, or if amplification is required, by minimising the gain, the effect of op-amp voltage offset in the bandgap voltage reference is minimised. Gregoire couples a plurality of PTAT voltage cells in series so that the PTAT voltages developed by the respective cells are summed together, and the summed PTAT voltages are then summed with a CTAT voltage developed across the base-emitter of a bipolar transistor. Each PTAT voltage cell of the bandgap voltage reference circuit of Gregoire comprises an op-amp and two stacks of bipolar transistors, one of which is coupled to the inverting input of the corresponding op-amp, and the other of which is coupled to the non-inverting input of the op-amp. One of the stacks in each PTAT voltage cell of Gregoire comprises two transistors, while the other comprises three transistors. The third transistor is provided for complementing a non-PTAT voltage component which would otherwise arise in the sum of the PTAT voltages.

However, the bandgap voltage reference circuit of Gregoire suffers from a serious disadvantage in that a relatively high supply voltage is required to power the op-amps, and in particular, the op-amp of the last PTAT voltage cell in the series. Even with only two PTAT voltage cells, the voltages on the inverting and non-inverting inputs of the op-amp in the last PTAT voltage cell in the series will be the equivalent of three base-emitter voltages of bipolar transistors plus three base-emitter voltage differences [Delta]Vbe. At a temperature of -40[deg.] C. the base-emitter voltage of each transistor is of the order of 0.8 volts, and each base-emitter voltage difference [Delta]Vbe is of the order of 50 millivolts. As a result, the common input voltage of the op-amp of the second PTAT voltage cell is approximately 2.55 volts at -40[deg.] C. This will thus require a supply voltage of at least 2.8 volts for the current mirrors supplying the forward biasing currents to the uppermost bipolar transistors of the second PTAT voltage cell. Accordingly, the bandgap voltage reference circuit of Gregoire, in general, is unsuitable for implementing in circuits with low supply voltages, such as low voltage CMOS circuits, where the supply voltage is typically limited to 2.5 volts to 2.7 volts.

In low voltage CMOS circuits, op-amps provided with pMOS input pairs require a supply voltage approximately 0.8 volts higher than the common input voltage of the op-amp. Accordingly, if the op-amp in the last PTAT voltage cell of Gregoire were provided with a pMOS input pair, a supply voltage of more than 3.35 volts would be required. The supply voltage required by the op-amp in the last PTAT voltage cell could be reduced by providing the op-amp with an nMOS input pair, which would require a supply voltage of approximately 2.75 volts. However, even with an nMOS input pair, the op-amp in the last of the series of PTAT voltage cells of the bandgap voltage reference circuit of Gregoire would still be unable to operate within the supply voltage of 2.5 volts to 2.7 volts of low voltage CMOS processes.

However, a disadvantage of using an op-amp with an nMOS input pair, as opposed to a pMOS input pair, is that the low frequency 1/f noise for frequencies below 10 Hz increases as the frequency decreases, and in general is approximately five times greater in an op-amp with an nMOS input pair than in an op-amp with a pMOS input pair. Thus, in order to minimise noise from the op-amp, and in turn op-amp voltage offset, being reflected into the bandgap voltage reference, it is preferable to use op-amps with pMOS input pairs. However, as discussed above, this imposes a further limitation on the available headroom within which the op-amp can operate.
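The headroom figures quoted above for a two-cell implementation of Gregoire can be reproduced as follows (a minimal sketch; the 0.25 volt current mirror allowance is inferred from the 2.8 volt figure in the text and should be treated as an assumption):

```python
# Common input voltage of the last op-amp in a two-cell Gregoire chain at -40 C.
vbe = 0.8      # base-emitter voltage per bipolar transistor at -40 C, volts
dvbe = 0.05    # base-emitter voltage difference per cell at -40 C, volts

v_common = 3 * vbe + 3 * dvbe                                   # 2.55 V
print(f"common input voltage:        {v_common:.2f} V")
print(f"supply with mirror headroom: {v_common + 0.25:.2f} V")  # ~2.8 V
print(f"supply with pMOS input pair: {v_common + 0.8:.2f} V")   # ~3.35 V
# Both exceed the 2.5 V to 2.7 V supply budget of low voltage CMOS processes.
```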
Accordingly, there is a need for a bandgap voltage reference circuit for producing a bandgap voltage reference which is suitable for operating in relatively low supply voltage environments, and in which the effect of op-amp voltage offsets is minimised.

The present invention is directed towards providing such a bandgap voltage reference circuit, and the invention is also directed towards providing a method for producing a bandgap voltage reference from a relatively low supply voltage, and with the effect of op-amp voltage offsets in the bandgap voltage reference minimised. The invention is also directed towards providing a PTAT voltage generating circuit for generating a PTAT voltage, which is suitable for operating in a relatively low supply voltage environment, and in which the effect of op-amp voltage offsets in the PTAT voltage is minimised.

SUMMARY OF THE INVENTION

According to the invention there is provided a PTAT voltage generating circuit comprising:

a primary impedance element across which a PTAT voltage is developed,

a first circuit for generating a first voltage level for applying to a first end of the primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, the first circuit comprising a first transistor stack having at least one first transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and

a second circuit for generating a second voltage level for applying to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the second circuit co-operating with the first circuit and with the primary impedance element so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage.

In one embodiment of the invention the first circuit comprises a first transistor stack having at least one first transistor for providing at least one of the first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the second base-emitter voltages, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks for developing a voltage difference of the first and second base-emitter voltages developed in the respective first and second transistor stacks across the first impedance element from which a part of the first voltage level is derived.

Preferably, the first impedance element is coupled between one of the inverting and non-inverting inputs of the first op-amp and one of the first and second transistor stacks, and the other of the inverting and non-inverting inputs of the first op-amp is coupled to the other one of the first and second transistor stacks.

Advantageously, a second impedance element is coupled to the one of the inverting and non-inverting inputs of the first op-amp to
which the first impedance element is coupled for setting the closed loop gain of the first op-amp, and the voltage difference developed across the first impedance element is reflected onto the second impedance element, the second impedance element being coupled to the first end of the primary impedance element for applying the first voltage level to the first end of the primary impedance element.

In one embodiment of the invention the first op-amp co-operates with the second transistor stack for combining at least one of the second base-emitter voltages with the voltage developed across the second impedance element for producing the first voltage level.

In another embodiment of the invention the second impedance element is coupled to the first end of the primary impedance element through the base-emitter of at least one third transistor, each third transistor developing a first base-emitter voltage for combining with the voltage developed across the second impedance element for producing the first voltage level.

In one embodiment of the invention the number of first base-emitter voltages developed in the first voltage level by the third transistors is equal to the number P of first base-emitter voltages in the second voltage level.

In another embodiment of the invention the number of first base-emitter voltages developed in the first transistor stack is greater than the number of second base-emitter voltages developed in the second transistor stack, the difference between the number of first base-emitter voltages developed in the first transistor stack and the number of second base-emitter voltages developed in the second transistor stack being equal to the number P of first base-emitter voltages provided in the second voltage level.

Preferably, the value of the first base-emitter voltages in the first voltage level derived from the first transistor stack is equal to the product of the number of first base-emitter voltages developed in the first transistor stack by the ratio of the impedance of the second impedance element to the impedance of the first impedance element, and the value of the second base-emitter voltages in the first voltage level derived from the second transistor stack is equal to the sum of the number of second base-emitter voltages developed in the second transistor stack plus the product of the number of second base-emitter voltages developed in the second transistor stack by the ratio of the impedance of the second impedance element to the impedance of the first impedance element.

Advantageously, the M second base-emitter voltages from which the first voltage level is produced are derived from the second transistor stack.

In another embodiment of the invention the P first base-emitter voltages from which the second voltage level is produced are applied to one of the inverting and non-inverting inputs of the second op-amp, and the second end of the primary impedance element is coupled to the other of the inverting and non-inverting inputs of the second op-amp, so that as the second op-amp operates to maintain the voltages on the respective inverting and non-inverting inputs thereof similar, the second voltage level is applied to the second end of the primary impedance element.

Preferably, each first base-emitter voltage of the P first base-emitter voltages of the second voltage level is derived from a corresponding one of the first base-emitter voltages developed in the first transistor stack.

In one embodiment of the invention an output impedance element co-operates with the
primary impedance element for setting the closed loop gain of the second op-amp, the voltage developed across the primary impedance element being reflected across the output impedance element by the ratio of the impedance of the output impedance element to the impedance of the primary impedance element for providing an output voltage comprising a PTAT voltage across the output impedance element.

In another embodiment of the invention the first end of the primary impedance element is coupled to the output of one of the first and second op-amps, and the output impedance element is coupled between the one of the inverting and non-inverting inputs of the second op-amp to which the primary impedance element is coupled and the output of the one of the first and second op-amps to which the primary impedance element is not coupled.

In a further embodiment of the invention the one of the primary impedance element and the output impedance element which is coupled to the output of the second op-amp is coupled to one of the inverting and non-inverting inputs of the second op-amp to provide negative feedback from the output of the second op-amp.

In one embodiment of the invention the first end of the primary impedance element is coupled to the output of the first op-amp. In an alternative embodiment of the invention the first end of the primary impedance element is coupled to the output of the second op-amp.

Preferably, the number of second base-emitter voltages developed in the second transistor stack is at least two second base-emitter voltages.

Advantageously, the number of first base-emitter voltages developed in the first transistor stack is equal to or greater than the number of second base-emitter voltages developed in the second transistor stack.

In one embodiment of the invention the first current density at which the first transistors are operated is greater than the second current density at which the second transistors are operated.

Preferably, the first and second voltage levels are referenced to a common ground reference voltage of the PTAT voltage generating circuit.

Advantageously, each first and second transistor is provided by a bipolar substrate transistor.

Ideally, each impedance element is a resistive impedance element.

In one embodiment of the invention the circuit is implemented in a CMOS process.

The invention also provides a bandgap voltage reference circuit for producing a bandgap voltage reference, the bandgap voltage reference circuit comprising the PTAT voltage generating circuit according to the invention for generating a PTAT voltage for summing with a CTAT voltage, and a means for summing the PTAT voltage with the CTAT voltage for providing the bandgap voltage reference.

Additionally, the invention provides a bandgap voltage reference circuit for producing a bandgap voltage reference, the bandgap voltage reference circuit comprising:

a CTAT voltage source for developing a CTAT voltage,

a PTAT voltage source for developing a PTAT voltage for summing with the CTAT voltage, the PTAT voltage source comprising:

a primary impedance element across which a PTAT voltage is developed,

a first circuit for generating a first voltage level for applying to a first end of the primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, the first circuit comprising a first transistor stack having at least one first
transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and

a second circuit for generating a second voltage level for applying to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the second circuit co-operating with the first circuit and with the primary impedance element so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage, and

a means for summing the PTAT voltage with the CTAT voltage.

In one embodiment of the invention an output impedance element is provided for co-operating with the primary impedance element so that the voltage developed across the primary impedance element is reflected onto the output impedance element by the ratio of the impedance of the output impedance element to the impedance of the primary impedance element for providing the PTAT voltage on the output impedance element for summing with the CTAT voltage.

In another embodiment of the invention the output impedance element co-operates with the primary impedance element for setting the closed loop gain of the second op-amp.

Preferably, the P first base-emitter voltages of the second voltage level form the CTAT voltage, and the second op-amp co-operates with the output impedance element for forming the summing means for summing the CTAT voltage provided by the P first base-emitter voltages with the PTAT voltage developed across the output impedance element for providing the bandgap voltage reference.

Advantageously, the first and second voltage levels are referenced to a common ground reference voltage of the bandgap voltage reference circuit, and the bandgap voltage reference is derived from the end of the output impedance element which is coupled to the output of one of the first and second op-amps, and is referenced to the common ground voltage.

In one embodiment of the invention the emitters of the first and second transistors of the respective first and second transistor stacks are forward biased with a PTAT current.

Preferably, the bandgap voltage reference is provided with TlnT temperature curvature correction.

Advantageously, the forward biasing current of at least one of the second transistors of the second transistor stack comprises a CTAT current component for providing the TlnT temperature curvature correction of the bandgap voltage reference.

Further, the invention provides a method for generating a PTAT voltage across a primary impedance element, the method
comprising the steps of:

applying a first voltage level to a first end of the primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, and the first voltage level being produced by a first circuit comprising a first transistor stack having at least one first transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and

applying a second voltage level to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second voltage level being produced by a second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the first and second voltage levels being applied to the respective first and second ends of the primary impedance element by the first and second circuits so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage.

Additionally, the invention provides a method for generating a bandgap voltage reference comprising the steps of:

providing a CTAT voltage from a CTAT voltage source,

providing a PTAT voltage for summing with the CTAT voltage, the PTAT voltage being provided by applying a first voltage level to a first end of a primary impedance element, the first voltage level being provided as a function of the difference of N first base-emitter voltages and M second base-emitter voltages, N and M being integer values greater than zero and being of values different to each other, the first voltage level being produced by a first circuit comprising a first transistor stack having at least one first transistor for providing at least one of the N first base-emitter voltages, and a second transistor stack having at least one second transistor for providing at least one of the M second base-emitter voltages, each first transistor being operated at a first current density, and each second transistor being operated at a second current density, the second current density being different to the first current density, a first impedance element and a first op-amp configured to operate in a closed loop mode co-operating with the first and second transistor stacks so that a voltage difference of the first and second base-emitter voltages in the respective first and second transistor stacks is developed across the first impedance element for providing at least a part of the first voltage level, and

applying a second
voltage level to a second end of the primary impedance element, the second voltage level being provided as a function of P of said N first base-emitter voltages, where P is an integer value greater than zero, the second voltage level being produced by a second circuit comprising a second op-amp configured to operate in a closed loop mode and co-operating with the first transistor stack for producing the P of said N first base-emitter voltages, the first and second voltage levels being applied to the respective first and second ends of the primary impedance element by the first and second circuits so that the voltage developed across the primary impedance element by the difference of the first and second voltage levels comprises said PTAT voltage, and

summing the PTAT voltage developed across the primary impedance element with the CTAT voltage.

ADVANTAGES OF THE INVENTION

The advantages of the invention are many. The bandgap voltage reference circuit according to the invention is particularly suitable for operating with relatively low supply voltages, and is thus particularly suitable for use in low voltage environments, such as low voltage CMOS environments where the supply voltage is limited to 2.5 to 2.7 volts. Additionally, and of particular importance, the bandgap voltage reference circuit according to the invention produces a bandgap voltage reference with the sensitivity to offsets and noise, and in particular to op-amp voltage offsets, minimised.

Providing the bandgap voltage reference circuit in the form of a first circuit and a second circuit, which develop respective first and second voltage levels, which are applied to the first and second ends, respectively, of the primary impedance element, so that the first and second voltage levels co-operate for developing a voltage across the primary impedance element as a PTAT voltage, provides the particular advantage that where the first and second circuits comprise first and second op-amps, respectively, the common input voltage to the respective first and second op-amps can be minimised. This in turn allows the bandgap voltage reference circuits according to the invention to be operated at relatively low supply voltages. Additionally, by providing the bandgap voltage reference circuit with the first and second circuits with respective first and second op-amps, the PTAT voltage which is developed across the output impedance element from the primary impedance element is gained up by a significantly greater gain factor than the gain factors by which the input voltage offsets of the respective first and second op-amps are gained up and reflected in the PTAT voltage developed across the output impedance element. Accordingly, the sensitivity of the bandgap voltage reference to op-amp voltage offsets is minimised, and is significantly reduced over prior art bandgap voltage reference circuits.

Furthermore, by virtue of the fact that the common input voltages of the respective first and second op-amps can be maintained relatively low, the first and second op-amps can be provided with pMOS input pairs even when the bandgap voltage reference circuits according to the invention are operating in low voltage CMOS environments.
The fact that the first and second op-amps can be provided with pMOS input pairs minimises the noise in the bandgap voltage reference produced by the bandgap voltage reference circuit. The advantages which are achieved from the bandgap voltage reference circuits according to the invention are also obtained from the PTAT voltage generating circuits according to the invention.

The invention and its many advantages will be readily apparent to those skilled in the art from the following description of some preferred embodiments thereof, which are given by way of example only, with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a circuit diagram of a prior art bandgap voltage reference circuit,

FIG. 2 is a circuit diagram of another prior art bandgap voltage reference circuit,

FIG. 3 is a circuit diagram of a further prior art bandgap voltage reference circuit,

FIG. 4 is a block representation of a bandgap voltage reference circuit according to the invention,

FIG. 5 is a circuit diagram of the bandgap voltage reference circuit of FIG. 4,

FIGS. 6(a) to (c) illustrate waveforms of voltages and currents developed in the bandgap voltage reference circuit of FIG. 4,

FIG. 7 is a circuit diagram of a bandgap voltage reference circuit according to another embodiment of the invention,

FIGS. 8(a) to (c) illustrate waveforms of voltages and currents developed in the bandgap voltage reference circuit of FIG. 7,

FIG. 9 is a circuit diagram of a bandgap voltage reference circuit according to a still further embodiment of the invention,

FIG. 10 illustrates waveforms of voltage references developed in a simulation of the bandgap voltage reference circuit of FIG. 2 for the purpose of comparative analysis with the bandgap voltage reference circuit of FIG. 9, and

FIG. 11 illustrates waveforms of voltage references developed in a simulation of the bandgap voltage reference circuit of FIG. 9.

DETAILED DESCRIPTION OF SOME PREFERRED EMBODIMENTS OF THE INVENTION

Referring to the drawings and initially to FIGS. 4 and 5, there is illustrated a bandgap voltage reference circuit according to the invention, indicated generally by the reference numeral 1, for producing a bandgap voltage reference Vref on a voltage reference output terminal 3, which is referenced to a common ground voltage terminal 4 of the bandgap voltage reference circuit 1. The bandgap voltage reference circuit 1 is suitable for operating with a relatively low supply voltage Vdd, and produces the bandgap voltage reference Vref with the effect of op-amp voltage offsets in the bandgap voltage reference Vref minimised. The bandgap voltage reference circuit 1 comprises a PTAT voltage generating circuit, which is also according to the invention and indicated generally by the reference numeral 5, for producing a PTAT voltage across a primary impedance element, namely, a primary resistor r3, which is in turn reflected across an output impedance element, namely, an output resistor r4 to produce an output PTAT voltage.
The output PTAT voltage, which is developed across the output resistor r4, is summed with a CTAT voltage, as will be described below, for producing the bandgap voltage reference Vref on the output terminal 3.

The PTAT voltage generating circuit 5 comprises a first circuit 8 for developing a first voltage level relative to the common ground voltage terminal 4 for applying to a first end 9 of the primary resistor r3, and a second circuit 10 for developing a second voltage level relative to the common ground voltage terminal 4 for applying to a second end 11 of the primary resistor r3. The first voltage level is derived from the voltage difference between N first base-emitter voltages and M second base-emitter voltages, as will be described below, and the second voltage level is derived from P first base-emitter voltages, as will also be described below, so that the voltage developed across the primary resistor r3 resulting from the difference of the first voltage level and the second voltage level is a PTAT voltage.

The first circuit 8 comprises a PTAT voltage generating cell 15 comprising a first transistor stack 13 having two first substrate bipolar transistors Q1 and Q2, and a second transistor stack 14 having two second substrate bipolar transistors Q3 and Q4. The emitter area of each of the first transistors Q1 and Q2 is assumed to be unit area, and the emitter area of each of the second transistors Q3 and Q4 is n times the emitter area of one of the first transistors Q1 and Q2. Identical PTAT currents I1, I2, I3 and I4 supplied from a current mirror circuit 17, as will be described below, forward bias the first and second transistors Q1 to Q4, respectively, for operating the first transistors Q1 and Q2 at a first current density for producing first base-emitter voltages, and for operating the second transistors Q3 and Q4 at a second current density which is less than the first current density for producing second base-emitter voltages. In this embodiment of the invention the P first base-emitter voltages of the second voltage level which is provided by the second circuit 10 are derived from the first transistor stack 13, and some of the N first base-emitter voltages of the first voltage level provided by the first circuit 8 are derived from the first transistor stack 13. The M second base-emitter voltages of the first voltage level provided by the first circuit 8 are derived from the second transistor stack 14.

The first circuit 8 also comprises a first op-amp A1, the non-inverting input of which is coupled to the emitter of the uppermost second transistor Q3 of the second transistor stack 14, and the inverting input of which is coupled through a first impedance element, namely, a first resistor r1, to the emitter of the uppermost first transistor Q1 of the first transistor stack 13. Thus, as the first op-amp A1 operates to maintain the voltage on its inverting input similar to the voltage on its non-inverting input, a PTAT voltage 2[Delta]Vbe, provided by the difference of the first base-emitter voltages of the first transistors Q1 and Q2 and the second base-emitter voltages of the second transistors Q3 and Q4, is developed across the first resistor r1.
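Under the standard ideal-junction model, which is an assumption of this sketch rather than a statement from the description, each base-emitter voltage difference is [Delta]Vbe = (kT/q)ln(n) when identical currents flow in transistors whose emitter areas differ by the factor n. The following minimal Python sketch (the area ratio n = 24 is an assumed example value) illustrates that the resulting 2[Delta]Vbe across the first resistor r1 is proportional to absolute temperature:

```python
# 2*dVbe across r1 under the ideal-diode model: dVbe = (kT/q) * ln(n).
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
Q_E = 1.602177e-19   # electron charge, C
n = 24.0             # assumed emitter area ratio of Q3, Q4 to Q1, Q2

for t_celsius in (-40.0, 25.0, 85.0):
    v_t = K_B * (t_celsius + 273.15) / Q_E          # thermal voltage kT/q
    print(f"{t_celsius:+6.1f} C: 2*dVbe = {2 * v_t * math.log(n) * 1e3:.1f} mV")
# The voltage grows linearly with absolute temperature, i.e. it is PTAT.
```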
The output of the first op-amp A1 is coupled to the first end 9 of the primary resistor r3. A feedback loop 18 comprising a second impedance element, namely, a second resistor r2, and a third substrate bipolar transistor Q5 is coupled between the output and the inverting input of the first op-amp A1 for operating the first op-amp A1 in a closed loop mode. The second resistor r2 co-operates with the first resistor r1 for setting the closed loop gain of the first op-amp A1. The second resistor r2 is coupled through the emitter and base of the third transistor Q5 to the output of the first op-amp A1, and also to the first end 9 of the primary resistor r3. The third transistor Q5 is identical to each of the first transistors Q1 and Q2, and is of unity emitter area similar to the emitter areas of the first transistors Q1 and Q2. The third transistor Q5 is forward biased by a PTAT current through the second resistor r2, which operates the third transistor Q5 substantially at the first current density, thereby developing one first base-emitter voltage, which provides one of the N first base-emitter voltages of the first voltage level provided by the first circuit 8. The value of the first voltage level which is applied to the first end 9 of the primary resistor r3, and its derivation, will be described in detail below.

The second circuit 10 comprises a second op-amp A2, the non-inverting input of which is coupled to the second end 11 of the primary resistor r3. In this embodiment of the invention the number P of first base-emitter voltages of the second voltage level, which is applied by the second circuit 10 to the second end 11 of the primary resistor r3, is one first base-emitter voltage, which is derived from the first transistor Q2 of the first transistor stack 13. The first base-emitter voltage of the first transistor Q2 is applied to the inverting input of the second op-amp A2. A negative feedback loop 20, which comprises the output resistor r4 and a pMOS transistor M1 of the current mirror circuit 17, operates the second op-amp A2 in a closed loop mode. Feedback signals through the gate and drain of the pMOS transistor M1 are inverted, thereby providing negative feedback through the feedback loop 20. The output resistor r4 and the primary resistor r3 co-operate to set the closed loop gain of the second op-amp A2. As the second op-amp A2 operates to maintain the voltage on its non-inverting input similar to the voltage on its inverting input, the second op-amp A2 applies the second voltage level to the second end 11 of the primary resistor r3, which in this embodiment of the invention is the one first base-emitter voltage derived from the first transistor Q2 of the first transistor stack 13. The PTAT voltage developed across the primary resistor r3 is gained up and reflected across the output resistor r4 in the ratio of the resistance of the output resistor r4 to the resistance of the primary resistor r3, as will be described below, to provide the output PTAT voltage.

The first base-emitter voltage derived from the first transistor Q2 of the first transistor stack 13, which is applied to the inverting input of the second op-amp A2, is a CTAT voltage, and also provides the CTAT voltage to be summed with the output PTAT voltage to produce the bandgap voltage reference on the output terminal 3. Since the second op-amp A2 operates to maintain its non-inverting input at the same voltage as its inverting input, the voltage on the non-inverting input of the second op-amp is likewise the first base-emitter CTAT voltage.
The bandgap voltage reference on the output terminal 3 relative to the common ground voltage terminal 4 is thus the summation of the CTAT first base-emitter voltage applied to the inverting input of the second op-amp A2 and the output PTAT voltage developed across the output resistor r4, and is substantially temperature stable for a specific ratio of the resistances of the first and second resistors r1 and r2, and for a specific ratio of the primary and output resistors r3 and r4.

Since the voltage developed across the output resistor r4 is a PTAT voltage, the current flowing through the output resistor r4 is a PTAT current. Thus, a PTAT current is pulled through the pMOS transistor M1 of the current mirror circuit 17, which is thus reflected in the pMOS transistors M2 to M5 which provide the PTAT forward biasing currents I1, I2, I3 and I4 to the first transistors Q1 and Q2, and the second transistors Q3 and Q4, respectively. The pMOS transistors M2 to M5 are scaled relative to the pMOS transistor M1 so that the forward biasing PTAT currents I1, I2, I3 and I4 are identical to each other.

In this embodiment of the invention the collectors of the first and second transistors Q1, Q2, Q3 and Q4, and the third transistor Q5, are held at ground, and the bases of the lowermost first and second transistors Q2 and Q4 of the first and second transistor stacks 13 and 14, respectively, are coupled to the common ground voltage terminal 4.

The PTAT voltage which is developed across the primary resistor r3, and which is in turn reflected onto the output resistor r4 to provide the output PTAT voltage, and its derivation, will now be described in detail with reference to the following equations.

The voltage developed across the first resistor r1 is the difference 2[Delta]Vbe of the first and second base-emitter voltages developed by the first transistors Q1 and Q2, and the second transistors Q3 and Q4, respectively. Since the base-emitter voltage difference 2[Delta]Vbe developed across the first resistor r1 is a PTAT voltage, the current Ir1 flowing through the first resistor r1 is similarly a PTAT current, and is thus given by the equation:

\[ I_{r1} = \frac{V_{r1}}{r_1} = \frac{2\Delta V_{be}}{r_1} \]

where Vr1 is the voltage developed across the first resistor r1, and r1 is the resistance value of the first resistor r1.

The inverting and non-inverting inputs of the first op-amp A1 are high impedance inputs, and thus the current flowing through the second resistor r2 is equal to the current flowing through the first resistor r1, namely, the PTAT current Ir1.
The first op-amp A1 operates to keep its inverting input at the same voltage as its non-inverting input, and accordingly, the voltage V25 on the node 25 between the second resistor r2 and the emitter of the third transistor Q5, relative to the common ground voltage terminal 4, is equal to the difference of the two second base-emitter voltages developed in the second transistor stack 14 and the PTAT voltage developed across the second resistor r2, and is given by the equation:

\[ V_{25} = 2V_{be(n)} - \frac{r_2}{r_1}\,2\Delta V_{be} \qquad (7) \]

where Vbe(n) is the second base-emitter voltage of each of the second transistors Q3 and Q4, and r2 is the resistance value of the second resistor r2.

From equation (7) the voltage VO1 on the output of the first op-amp A1, which is the first voltage level applied to the first end 9 of the primary resistor r3, is given by the equation:

\[ V_{O1} = 2V_{be(n)} - \frac{r_2}{r_1}\,2\Delta V_{be} - V_{be(1)} \qquad (8) \]

where Vbe(1) is the first base-emitter voltage of the third transistor Q5, which is assumed to be the same as the first base-emitter voltage Vbe(1) developed by each of the first transistors Q1 and Q2.

Since the second op-amp A2 operates to maintain the voltage on its non-inverting input at the same voltage as its inverting input, the voltage on the non-inverting input of the second op-amp A2 relative to the common ground voltage terminal 4 is Vbe(1), namely, the first base-emitter voltage which is derived from the lowermost first transistor Q2 of the first transistor stack. This is the second voltage level which is applied to the second end 11 of the primary resistor r3. Therefore, the voltage developed across the primary resistor r3, namely, the voltage Vr3, is given by the equation:

\[ V_{r3} = V_{be(1)} - V_{O1} \qquad (9) \]

Substituting in equation (9) for VO1 from equation (8) gives:

\[ V_{r3} = V_{be(1)} - \left( 2V_{be(n)} - \frac{r_2}{r_1}\,2\Delta V_{be} - V_{be(1)} \right) \qquad (10) \]

Equation (10) can be rewritten as:

\[ V_{r3} = 2\Delta V_{be} + \frac{r_2}{r_1}\,2\Delta V_{be} \qquad (11) \]

which in turn can be rewritten as:

\[ V_{r3} = 2\Delta V_{be}\left( 1 + \frac{r_2}{r_1} \right) \]

Accordingly, in this embodiment of the invention the voltage developed across the primary resistor r3 is a pure PTAT voltage which comprises two components. The second component of equation (11), namely,

\[ \frac{r_2}{r_1}\,2\Delta V_{be} \]

is part of the first voltage level, and is the PTAT voltage scaled up from the PTAT voltage developed across the first resistor r1 by the resistance ratio of the second to the first resistors, namely, \( \frac{r_2}{r_1} \).

The first component from equation (11), namely, 2[Delta]Vbe, is a PTAT voltage, and is provided by the first and second voltage levels. One of the first base-emitter voltages is the first base-emitter voltage of the second voltage level provided by the second circuit 10, and is derived from the first transistor Q2 of the first transistor stack 13. The first base-emitter voltage from the first transistor Q2 is applied to the inverting input of the second op-amp A2 relative to the ground reference voltage on the ground reference terminal 4. The other first base-emitter voltage is provided by one of the first base-emitter voltages of the first voltage level from the first circuit 8, and is derived from the third transistor Q5 of the first circuit 8.
The two second base-emitter voltages of the first component 2[Delta]Vbe of equation (11) are provided by the first voltage level from the first circuit 8, and are derived from the two second transistors Q3 and Q4 of the second transistor stack 14, due to the fact that the two second base-emitter voltages of the two second transistors Q3 and Q4, as well as contributing to the development of the PTAT voltage developed across the first resistor r1, also raise the voltage on the non-inverting input of the first op-amp A1 above the common ground reference of the common ground terminal 4 by the value of the two second base-emitter voltages, which in turn are applied directly to the first end 9 of the primary resistor r3. Accordingly, the first voltage level which is applied to the first end 9 of the primary resistor r3 by the first circuit 8 is as follows:

\[ V_{O1} = \left( 2 + 2\frac{r_2}{r_1} \right) V_{be(n)} - \left( 1 + 2\frac{r_2}{r_1} \right) V_{be(1)} \]

The second voltage level from the second circuit 10 is Vbe(1).

Accordingly, in this embodiment of the invention the number N of first base-emitter voltages of the first voltage level is \( 1 + 2\frac{r_2}{r_1} \) first base-emitter voltages, and the number M of second base-emitter voltages in the first voltage level is \( 2 + 2\frac{r_2}{r_1} \) second base-emitter voltages. The number P of first base-emitter voltages of the second voltage level is one first base-emitter voltage. If the resistances of the first and second resistors are, for example, similar, in order to provide the ratio \( \frac{r_2}{r_1} \) to be one, then in this embodiment of the invention the number N of first base-emitter voltages in the first voltage level would be three first base-emitter voltages, and the number M of second base-emitter voltages in the first voltage level would be four, while the number P of first base-emitter voltages in the second voltage level would be one.

On the other hand, if the resistances of the first and second resistors r1 and r2 were selected to provide a resistance ratio \( \frac{r_2}{r_1} \) equal to, for example, four, then the number N of first base-emitter voltages in the first voltage level would be nine, and the number M of second base-emitter voltages in the first voltage level would be ten, while the number P of first base-emitter voltages in the second voltage level would still be one.

The current flowing through the primary resistor r3 is given by the equation:

\[ I_{r3} = \frac{V_{r3}}{r_3} \qquad (12) \]

where r3 is the resistance value of the primary resistor r3.

Thus, substituting for Vr3 in equation (12) from equation (11) gives:

\[ I_{r3} = \frac{2\Delta V_{be}\left( 1 + \frac{r_2}{r_1} \right)}{r_3} \qquad (13) \]

The inverting and non-inverting inputs of the second op-amp A2 are high impedance inputs, and thus the current flowing through the output resistor r4 is equal to the current flowing through the primary resistor r3, namely, Ir3. Accordingly, the voltage Vr4 developed across the output resistor r4 is given by the equation:

\[ V_{r4} = I_{r3}\,r_4 \qquad (14) \]

where r4 is the resistance value of the output resistor r4.

Substituting for the current Ir3 from equation (13) in equation (14) gives the voltage developed across the output resistor r4, namely, Vr4, as:

\[ V_{r4} = \frac{2\Delta V_{be}\left( 1 + \frac{r_2}{r_1} \right)}{r_3}\,r_4 \qquad (15) \]

which can be rewritten as:

\[ V_{r4} = \frac{r_4}{r_3}\,2\Delta V_{be}\left( 1 + \frac{r_2}{r_1} \right) \qquad (16) \]

Accordingly, the voltage developed across the output resistor r4 is reflected from the primary resistor r3 and is gained up by the ratio of the resistance r4 of the output resistor r4 to the resistance r3 of the primary resistor r3. Thus, the voltage Vr4 developed across the output resistor r4 is a pure PTAT voltage.
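The bookkeeping of equations (8) to (11) and the stated N and M counts can be checked numerically (a minimal sketch; the Vbe(1) and [Delta]Vbe magnitudes are assumed example values):

```python
# Check Vr3 = 2*dVbe*(1 + r2/r1) and the N, M, P counts against the text.
dvbe = 0.1        # assumed base-emitter voltage difference, volts
vbe1 = 0.65       # assumed first base-emitter voltage, volts
vbe_n = vbe1 - dvbe

for ratio in (1.0, 4.0):                                # ratio r2/r1
    vo1 = 2 * vbe_n - ratio * 2 * dvbe - vbe1           # equation (8)
    vr3 = vbe1 - vo1                                    # equation (9)
    assert abs(vr3 - 2 * dvbe * (1 + ratio)) < 1e-12    # equation (11)
    n_count = 1 + 2 * ratio                             # N first base-emitter voltages
    m_count = 2 + 2 * ratio                             # M second base-emitter voltages
    print(f"r2/r1 = {ratio}: N = {n_count:.0f}, M = {m_count:.0f}, P = 1")
# r2/r1 = 1.0: N = 3, M = 4, P = 1   (as stated in the text)
# r2/r1 = 4.0: N = 9, M = 10, P = 1
```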
Since the first base-emitter voltage derived from the first transistor Q2 is applied to the inverting input of the second op-amp, the inverting input of the second op-amp is at a voltage equal to one base-emitter voltage above the common ground voltage of the common ground terminal 4, which is thus a CTAT voltage. Accordingly, as the second op-amp A2 operates to maintain the voltage on its non-inverting input similar to the first base-emitter CTAT voltage on its inverting input, the first base-emitter CTAT voltage on the inverting input of the second op-amp A2 is summed with the output PTAT voltage developed across the output resistor r4 to provide the bandgap voltage reference Vref on the output terminal 3 relative to the common ground voltage terminal 4, which is given by the equation:

\[ V_{ref} = V_{be(1)} + \frac{r_4}{r_3}\,2\Delta V_{be}\left( 1 + \frac{r_2}{r_1} \right) \qquad (17) \]

The sensitivity of the gained up PTAT voltage developed across the output resistor r4, and in turn of the bandgap voltage reference on the output terminal 3, to input voltage offsets in the respective first and second op-amps A1 and A2 is minimised, and is significantly reduced relative to the sensitivity to op-amp voltage offsets of the bandgap voltage reference produced by prior art bandgap voltage reference circuits. A comparison of the effect of voltage offsets of the first and second op-amps A1 and A2 on the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIG. 4, with the effect of voltage offsets of the op-amp A1 on the bandgap voltage reference produced by the prior art bandgap voltage reference circuit of FIG. 2, gives an indication of the significant reduction of the effect of op-amp voltage offsets of the first and second op-amps A1 and A2 on the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIG. 4.

If it is assumed that the op-amp A1 of the prior art bandgap voltage reference circuit of FIG. 2, and the first and second op-amps A1 and A2 of the bandgap voltage reference circuit 1 of FIG. 4, have the same input voltage offset, namely, Voff, and if the ratio \( \frac{r_2}{r_1} \) of the resistances of the first and second resistors r1 and r2 of the prior art bandgap voltage reference circuit of FIG. 2 is four, in order to provide a closed loop gain of five for the op-amp A1, then the voltage offset Voff of the op-amp A1 of the prior art circuit of FIG. 2 is reflected into the bandgap voltage reference of the prior art bandgap voltage reference circuit of FIG. 2 as:

\[ V_{ref(off)} = \left( 1 + \frac{r_2}{r_1} \right) V_{off} = 5V_{off} \qquad (18) \]

Thus, in the prior art bandgap voltage reference circuit of FIG. 2 the input voltage offset of the op-amp A1 is amplified by a factor of five when it appears in the bandgap voltage reference of the prior art bandgap voltage reference circuit of FIG. 2.

In the bandgap voltage reference circuit 1 of FIG. 4, the input voltage offsets of each of the first and second op-amps A1 and A2 make a contribution to the bandgap voltage reference Vref.
The voltage offset Voff of the first op-amp A1 is reflected into the bandgap voltage reference Vref produced by the bandgap voltage reference circuit 1 of FIG. 4 as follows:

\[ V_{21(off)} = \frac{r_4}{r_3}\left( 1 + \frac{r_2}{r_1} \right) V_{off} \qquad (19) \]

where V21(off) is the value of the voltage offset of the first op-amp A1 which appears in the bandgap voltage reference Vref.

The input voltage offset Voff of the second op-amp A2 is reflected into the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIG. 4 as:

\[ V_{22(off)} = \left( 1 + \frac{r_4}{r_3} \right) V_{off} \qquad (20) \]

where V22(off) is the value of the voltage offset resulting from the second op-amp A2 which appears in the bandgap voltage reference Vref.

If the voltage offsets V21(off) and V22(off) appearing in the bandgap voltage reference of the bandgap voltage reference circuit 1 of FIG. 4 resulting from the first and second op-amps A1 and A2 are to be equal, then from equations (19) and (20), the following equation must hold:

\[ \frac{r_4}{r_3}\left( 1 + \frac{r_2}{r_1} \right) = 1 + \frac{r_4}{r_3} \qquad (21) \]

Since one base-emitter voltage difference [Delta]Vbe is equal to approximately 100 millivolts, and since the value of the output PTAT voltage developed across the output resistor r4, which is to be added to the first base-emitter CTAT voltage on the inverting input of the second op-amp A2, should be of the order of 400 millivolts, from equation (17), by setting the resistances of the first and second resistors r1 and r2 equal to each other, and also by setting the resistances of the primary and output resistors r3 and r4 equal to each other, the PTAT voltage developed across the output resistor r4 is equal to four base-emitter voltage differences, namely, 4[Delta]Vbe, which is approximately 400 millivolts.

The compound voltage offset of the op-amps A1 and A2 reflected into the voltage reference Vref produced by the bandgap voltage reference circuit 1 of FIG. 4 is:

\[ V_{ref(off)} = \sqrt{V_{21(off)}^2 + V_{22(off)}^2} = 2\sqrt{2}\,V_{off} \qquad (22) \]

since, with r1 = r2 and r3 = r4, V21(off) = 2Voff and V22(off) = 2Voff.

From equations (18) and (22) it can be shown that the effect of op-amp voltage offsets in the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIG. 4 is reduced by a factor of approximately 1.77 over the effect of the op-amp voltage offset in the bandgap voltage reference produced by the prior art bandgap voltage reference circuit of FIG. 2.

In order to confirm the significant improvement achieved by the bandgap voltage reference circuit according to the invention over the prior art bandgap voltage reference circuit of FIG. 2 in reducing the effect of op-amp voltage offsets in the bandgap voltage reference, computer simulations of the prior art circuit of FIG. 2 and of the bandgap voltage reference circuit of FIG. 4 were made. The computer simulations were made in similar conditions. The bipolar transistors of the bandgap voltage reference circuit of FIG. 2 were of the same size as the corresponding bipolar transistors of the bandgap voltage reference circuit 1 of FIG. 4. Similarly, the bipolar transistors of the respective bandgap voltage reference circuits of FIGS. 2 and 4 were forward biased with similar forward biasing currents. The op-amps of the respective bandgap reference circuits of FIGS. 2 and 4 were assumed to have similar input voltage offsets, each of 1 millivolt.

In the bandgap voltage reference circuit of FIG. 2, the 1 millivolt input voltage offset of the op-amp translated into a 4.5 millivolt offset in the bandgap voltage reference. However, in the bandgap voltage reference circuit 1 of FIG. 4 the 1 millivolt input voltage offset of the first op-amp A1 translated into a 1.67 millivolt offset in the bandgap voltage reference, and the 1 millivolt input voltage offset of the second op-amp A2 translated into a 2 millivolt offset in the bandgap voltage reference. The corresponding compound effect of the voltage offsets of the first and second op-amps A1 and A2 in the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIG. 4 was 2.6 millivolts. Accordingly, the effect of the op-amp voltage offset in the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIG. 4 was reduced by a factor of approximately 1.76 from the effect of the op-amp voltage offset in the bandgap voltage reference produced by the prior art bandgap voltage reference circuit of FIG. 2. The factor of 1.76 is very close to the theoretical value of 1.77 computed from equation (22).
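Equations (19) to (22), the quoted reduction factor and the compound value of the simulated offsets can all be checked numerically (a minimal sketch using the values stated above):

```python
import math

# Theory: with r1 = r2 and r3 = r4, each op-amp contributes 2*Voff.
v_off = 1e-3                                 # 1 mV input offset, volts
v21 = 1.0 * (1 + 1.0) * v_off                # equation (19) with r4/r3 = r2/r1 = 1
v22 = (1 + 1.0) * v_off                      # equation (20) with r4/r3 = 1
compound = math.hypot(v21, v22)              # equation (22): 2*sqrt(2)*Voff
print(f"theoretical reduction factor: {5 * v_off / compound:.2f}")   # ~1.77

# Simulation figures quoted in the text: 1.67 mV and 2 mV reflected offsets.
sim = math.hypot(1.67e-3, 2e-3)
print(f"simulated compound offset: {sim * 1e3:.1f} mV")              # ~2.6 mV
```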
Referring now to FIGS. 6(a) to 6(c), the results of the simulation of the bandgap voltage reference circuit 1 of FIGS. 4 and 5 according to the invention are illustrated. FIG. 6(a) illustrates a waveform A which represents the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIGS. 4 and 5 over a normal industrial temperature range of -40[deg.] C. to +85[deg.] C. However, in this simulation, correction for the TlnT temperature curvature was made in the bandgap voltage reference. The temperature curvature correction was achieved by including a CTAT current component in the forward biasing PTAT currents I3 and I4 of the second transistors Q3 and Q4. Such TlnT temperature curvature correction is described in detail in co-pending U.S. patent application Ser. No. 10/375,593 of Stefan Marinca. The temperature in [deg.] C. is plotted on the X-axis of FIGS. 6(a) to 6(c), while the voltage in volts is plotted on the Y-axis of FIG. 6(a). As can be seen, the bandgap voltage reference is produced with a negligible residual temperature curvature deviation, which is approximately 7 microvolts. This temperature deviation in the bandgap voltage reference over the industrial temperature range of -40[deg.] C. to +85[deg.] C. corresponds to a temperature coefficient of approximately 0.05 parts per million per [deg.] C. The bandgap voltage reference represented by the waveform A was produced with the bipolar transistors properly forward biased. However, the bandgap voltage reference was produced in the simulation on the assumption that non-ideal factors, such as process dependent second and third order factors, which would otherwise affect the bandgap voltage reference, were absent.

FIGS. 6(b) and 6(c) illustrate waveforms B, C and D. The waveform B represents each of the forward biasing currents I1 and I2 with which the first transistors Q1 and Q2, respectively, were forward biased over the temperature range of -40[deg.] C. to +85[deg.] C. The waveform C represents the PTAT current Ir1, which forward biased the third transistor Q5 over the temperature range of -40[deg.] C. to +85[deg.] C., while the waveform D represents each of the forward biasing currents I3 and I4 with which the second transistors Q3 and Q4, respectively, were forward biased over the temperature range of -40[deg.] C. to +85[deg.] C. In FIGS. 6(b) and 6(c) the temperature is plotted in [deg.] C. on the X-axis and the current in microamps is plotted on the Y-axis. As can be seen, the forward biasing currents I1 and I2 increased linearly from approximately 14 microamps to 21 microamps over the temperature range of -40[deg.] C. to +85[deg.] C., while the current Ir1 which forward biased the third transistor Q5 increased linearly from approximately 12.5 microamps to approximately 20.5 microamps over the temperature range of -40[deg.] C. to +85[deg.] C. The forward biasing currents I3 and I4 increased linearly from approximately 5.2 microamps to 6.3 microamps over the temperature range of -40[deg.] C. to +85[deg.] C.
Referring now to FIG. 7, there is illustrated a bandgap voltage reference circuit according to another embodiment of the invention, indicated generally by the reference numeral 40. The bandgap voltage reference circuit 40 is substantially similar to the bandgap voltage reference circuit 1, and similar components are identified by the same reference numerals and letters. The main difference between the bandgap voltage reference circuit 40 and the bandgap voltage reference circuit 1 is that the third transistor Q5 has been omitted from the feedback loop 18 of the first op-amp A1. However, in this embodiment of the invention the first transistor stack 13 is provided with one first transistor more than the number of second transistors in the second transistor stack 14. The extra first transistor is identified as the first transistor Q6, and develops a first base-emitter voltage. Accordingly, in this embodiment of the invention three first base-emitter voltages are developed in the first transistor stack 13, while two second base-emitter voltages are developed in the second transistor stack 14.

As in the bandgap voltage reference circuit 1 of FIG. 4, the first transistors Q1, Q2 and Q6 and the second transistors Q3 and Q4 are substrate bipolar transistors, and the emitter areas of the second transistors Q3 and Q4 are similar, and are each n times the area of each of the first transistors Q1, Q2 and Q6, which are each assumed to be of unit emitter area. The first transistor Q6 is forward biased by a PTAT current I5, which is of similar value to the PTAT forward biasing currents I1 to I4, which are similar to each other. The forward biasing current I5 is derived from the current mirror circuit 17 through a pMOS transistor M6.

The base-emitter voltage difference developed across the first resistor r1 is Vr1, and in this embodiment of the invention is given by the equation:

\[ V_{r1} = 3V_{be(1)} - 2V_{be(n)} \qquad (23) \]

Equation (23) can be rewritten as:

\[ V_{r1} = V_{be(1)} + 2\Delta V_{be} \qquad (24) \]
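The rewriting of equation (23) as equation (24) follows from the definition of the base-emitter voltage difference, [Delta]Vbe = Vbe(1) - Vbe(n):

\[ V_{r1} = 3V_{be(1)} - 2V_{be(n)} = V_{be(1)} + 2\left( V_{be(1)} - V_{be(n)} \right) = V_{be(1)} + 2\Delta V_{be} \]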
Therefore, the voltage Vr3 developed across the primary resistor r3 is given by the equation:

Vr3=Vbe(1)-VO1=Vbe(1)-2Vbe(n)+(r2/r1)(Vbe(1)+2ΔVbe) (26)

Equation (26) can be rewritten as:

Vr3=2ΔVbe(1+(r2/r1))+((r2/r1)-1)Vbe(1) (27)

Thus, the current Ir3 through the primary resistor r3 is given by the equation:

Ir3=Vr3/r3 (28)

The inverting and non-inverting inputs of the op-amp A2 are high impedance inputs, and thus the current Ir4 flowing through the output resistor r4 is the same as the current Ir3 flowing through the primary resistor r3, as has already been described with reference to the bandgap voltage reference circuit 1 of FIG. 4. Therefore, the output PTAT voltage Vr4 developed across the output resistor r4 is given by the equation:

Vr4=(r4/r3)Vr3 (29)

which is the voltage developed across the primary resistor r3 gained up by the ratio of the resistance r4 of the output resistor r4 to the resistance r3 of the primary resistor r3, and reflected onto the output resistor r4.

The bandgap voltage reference Vref is given by the equation:

Vref=Vbe(1)+Vr4 (30)

Substituting for Vr4 in equation (30) from equation (29) gives:

Vref=Vbe(1)+(r4/r3)[2ΔVbe(1+(r2/r1))+((r2/r1)-1)Vbe(1)] (31)

In this embodiment of the invention the voltage developed across the primary resistor r3 has a non-PTAT component along with the PTAT component. The non-PTAT component is given by the term ((r2/r1)-1)Vbe(1). However, the PTAT component of the voltage developed across the primary resistor r3 of the bandgap voltage reference circuit 40 is identical to the PTAT component developed across the primary resistor r3 of the bandgap voltage reference circuit 1. In this case, the M second base-emitter voltages of the first voltage level are derived from the two second transistors Q3 and Q4 of the second transistor stack 14, and the P first base-emitter voltages of the second voltage level are derived from the first transistor Q2 of the first transistor stack 13. However, all the N first base-emitter voltages of the first voltage level are derived from the three first transistors, namely, the transistors Q1, Q2 and Q6 of the first transistor stack 13.

The value of the bandgap voltage reference Vref produced by the bandgap voltage reference circuit 40 of FIG. 7 is more flexible than that of the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIGS. 4 and 5. In the bandgap voltage reference circuit 40 of FIG. 7, the bandgap voltage reference Vref can be scaled lower than 1.25 volts or 1.17 volts, and can be scaled down to a voltage reference value of 1.024 volts. Thus, the bandgap voltage reference circuit 40 of FIG. 7 is particularly suitable for use in digital to analogue converters and in analogue to digital converters, since the number 1024 is equal to 2^10, and by representing the value 1024 by 1.024 volts, one Least Significant Bit (LSB) can be represented by 1 millivolt.

FIGS. 8(a) to 8(c) illustrate waveforms of three simulations of the bandgap voltage reference circuit 40 of FIG. 7. The waveforms are illustrated over the operating temperature range of -40°C to +85°C, which is plotted on the X-axis of FIGS. 8(a) to 8(c). In FIG. 8(a) the waveforms E, F and G represent the three bandgap voltage references produced in the three simulations. The voltage of FIG. 8(a) is plotted on the Y-axis in volts. Waveform H of FIG.
8(b) represents each of the emitter currents I1, I2 and I5 with which the first transistors Q1, Q2 and Q6 were forward biased. Waveform J of FIG. 8(c) represents each of the emitter currents I3 and I4 with which the second transistors Q3 and Q4 were forward biased. The currents are plotted in FIGS. 8(b) and 8(c) on the Y-axis in microamps. In the simulation of the bandgap voltage reference circuit 40 the emitter areas of the first transistors Q1, Q2 and Q6 were identical to each other, and the emitter areas of the second transistors Q3 and Q4 were also identical to each other, and were each n times the emitter areas of the first transistors.

The forward biasing emitter currents I1, I2 and I5 with which the first transistors Q1, Q2 and Q6 were forward biased each increased linearly over the temperature range of -40°C to +85°C from approximately 11.5 microamps to 21 microamps, see waveform H. The forward biasing emitter currents I3 and I4 with which the second transistors Q3 and Q4 were forward biased each increased over the temperature range of -40°C to +85°C from approximately 6.025 microamps to 6.375 microamps, see waveform J of FIG. 8(c).

In the first simulation the first and second op-amps A1 and A2 were assumed to have no input voltage offsets, and the simulation produced a bandgap voltage reference Vref on the output terminal 3 represented by the waveform E of FIG. 8(a). As can be seen, the bandgap voltage reference Vref had a value of approximately 1.0245 volts. In the second simulation the first op-amp A1 was assumed to have an input voltage offset of approximately 1 millivolt, and the second op-amp A2 was assumed to have no input voltage offset. The simulation produced a bandgap voltage reference represented by the waveform F of FIG. 8(a) with a voltage of approximately 1.023 volts. In the third simulation the second op-amp A2 was assumed to have an input voltage offset of approximately 1 millivolt, and the first op-amp A1 was assumed to have no input voltage offset. The simulation produced a bandgap voltage reference Vref represented by the waveform G of FIG. 8(a) of approximately 1.026 volts. Thus, the 1 millivolt voltage offset of the first op-amp A1 was reflected as a 1.26 millivolt offset into the bandgap voltage reference Vref, and the 1 millivolt voltage offset of the second op-amp A2 was reflected as a 1.69 millivolt offset into the bandgap voltage reference Vref. The corresponding compound voltage offset which would have been reflected into the bandgap voltage reference if the first and second op-amps A1 and A2 were each assumed to have a 1 millivolt offset would be approximately 2.1 millivolts.

Referring now to FIG. 9, there is illustrated a bandgap voltage reference circuit according to another embodiment of the invention, indicated generally by the reference numeral 60. The bandgap voltage reference circuit 60 is somewhat similar to the bandgap voltage reference circuit 1 of FIGS. 4 and 5, and similar components are identified by the same reference numerals and letters. The main difference between the bandgap voltage reference circuit 60 and the bandgap voltage reference circuit 1 of FIGS. 4 and 5 is in the arrangement and configuration of the first and second op-amps A1 and A2, respectively, and the fact that in this embodiment of the invention the primary resistor r3 is located in the feedback loop 20 of the second op-amp A2, and the output resistor r4 is coupled between the output of the first op-amp A1 and the inverting input of the second op-amp A2.
Thus, the bandgap voltage reference is produced on the output terminal 3, which is coupled to a node 61 between the output resistor r4 and the output of the first op-amp A1, and is referenced to the common ground terminal 4.

The PTAT voltage cell 15 is identical to the PTAT voltage cell 15 of the bandgap voltage reference circuit 1 of FIGS. 4 and 5, and comprises a first transistor stack 13 having two first transistors Q1 and Q2 and a second transistor stack 14 having two second transistors Q3 and Q4, which are identical to the first and second transistors Q1 and Q2, and Q3 and Q4, respectively, of the PTAT voltage cell 15 of the bandgap voltage reference circuit 1. The first and second transistors Q1 to Q4 are forward biased by the forward biasing PTAT currents I1 to I4, which are similar to the forward biasing PTAT currents I1 to I4 of the bandgap voltage reference circuit 1. The forward biasing currents I1 to I4 are derived from a current mirror circuit 17, which may derive a PTAT current from the bandgap voltage reference circuit 60 or from an external source.

The first resistor r1, across which the base-emitter voltage difference 2ΔVbe of the first and second base-emitter voltages developed by the first and second transistors Q1 to Q4 is developed, is coupled to the non-inverting input of the first op-amp A1, and the inverting input of the first op-amp A1 is coupled to the uppermost second transistor Q3 of the second transistor stack 14. The first end 9 of the primary resistor r3 is coupled to the output of the second op-amp A2, and the first voltage level relative to the common ground voltage terminal 4 is applied to the first end 9 of the primary resistor r3 through the second resistor r2 and a third transistor Q5, which is similar to the third transistor Q5 of the bandgap voltage reference circuit 1 of FIGS. 4 and 5, and develops a first base-emitter voltage.

The second voltage level relative to the common ground voltage terminal 4, which in this embodiment of the invention is also one first base-emitter voltage derived from the first transistor Q2 of the first transistor stack 13, is applied to the non-inverting input of the second op-amp A2, and in turn is applied to the second end 11 of the primary resistor r3, since the second op-amp A2 operates to maintain its inverting input at the same voltage as its non-inverting input. The inverting and non-inverting inputs of the second op-amp A2 are high impedance inputs, and thus the current flowing through the output resistor r4 is the same as the current flowing through the primary resistor r3. Therefore, the voltage developed across the primary resistor r3 is reflected across the output resistor r4, and gained up by the ratio of the resistance r4 of the output resistor r4 to the resistance r3 of the primary resistor r3, to form the output PTAT voltage across the output resistor r4.
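Since the inverting and non-inverting inputs of the second op-amp A2 draw no current, the current entering the primary resistor r3 must also flow through the output resistor r4. As a compact restatement of the gain-up mechanism that the following paragraphs derive in full:

\[
I_{r4} = I_{r3} = \frac{V_{r3}}{r_3} \quad\Longrightarrow\quad V_{r4} = I_{r4}\, r_4 = \frac{r_4}{r_3}\, V_{r3}
\]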
The PTAT voltage developed across the output resistor r4 is in turn summed with the first base-emitter CTAT voltage, which is derived from the first transistor Q2, and which is applied to the non-inverting input of the second op-amp A2, to provide the bandgap voltage reference on the output terminal 3 referenced to the common ground terminal 4.

The following is an explanation of how the PTAT voltage is developed across the primary resistor r3, and in turn is gained up and reflected across the output resistor r4 to provide the output PTAT voltage for summing with the CTAT voltage to produce the bandgap voltage reference.

The PTAT voltage Vr1 developed across the first resistor r1 of the bandgap voltage reference circuit 60 is given by the equation:

Vr1=2ΔVbe (32)

The current Ir1 through the first resistor r1 is a PTAT current, and is given by the equation:

Ir1=2ΔVbe/r1 (33)

For the same reason as described with reference to the bandgap voltage reference circuit 1 of FIGS. 4 and 5, the current Ir2 through the second resistor r2 is equal to the current Ir1. Accordingly, the voltage Vr2 developed across the second resistor r2 is equal to:

Vr2=(r2/r1)2ΔVbe (34)

The voltage VO2 at the output of the second op-amp A2, which is the first voltage level and is applied to the first end 9 of the primary resistor r3, is given by the following equation:

VO2=2Vbe(n)-(r2/r1)2ΔVbe-Vbe(1) (35)

The second voltage level, as discussed above, which is applied to the second end 11 of the primary resistor r3, is the first base-emitter voltage derived from the first transistor Q2. Accordingly, the voltage developed across the primary resistor r3 is given by the following equation:

Vr3=Vbe(1)-VO2 (36)

Substituting for VO2 in equation (36) from equation (35) gives:

Vr3=2Vbe(1)-2Vbe(n)+(r2/r1)2ΔVbe (37)

Equation (37) can be rewritten as follows:

Vr3=2ΔVbe(1+(r2/r1)) (38)

Therefore, the voltage developed across the primary resistor r3 is a pure PTAT voltage, which is similar to the PTAT voltage developed across the primary resistor r3 of the bandgap voltage reference circuit 1 of FIGS. 4 and 5.

The current flowing through the primary resistor r3 is given by the equation:

Ir3=2ΔVbe(1+(r2/r1))/r3 (39)

Since for reasons explained above the current Ir4 flowing through the output resistor r4 is the same as the current Ir3 flowing through the primary resistor r3, the output voltage Vr4 developed across the output resistor r4 is given by the following equation:

Vr4=(r4/r3)2ΔVbe(1+(r2/r1)) (40)

Accordingly, in this embodiment of the invention the PTAT voltage developed across the primary resistor r3 is reflected onto the output resistor r4 and is gained up by the ratio of the resistance r4 of the output resistor r4 to the resistance r3 of the primary resistor r3, and is thus a pure PTAT voltage similar to the output PTAT voltage developed across the output resistor r4 of the bandgap voltage reference circuit 1 of FIGS. 4 and 5.

The bandgap voltage reference Vref on the output terminal 3 is given by the following equation:

Vref=Vbe(1)+Vr4 (41)

Substituting for Vr4 from equation (40) in equation (41) gives:

Vref=Vbe(1)+(r4/r3)2ΔVbe(1+(r2/r1)) (42)

which is similar to the bandgap voltage reference produced by the bandgap voltage reference circuit 1 of FIGS.
4 and 5.

Accordingly, in this embodiment of the invention the N first base-emitter voltages of the first voltage level are derived from the first transistors Q1 and Q2 in the first transistor stack 13 and the third transistor Q5. The M second base-emitter voltages of the first voltage level are derived from the second transistors Q3 and Q4 of the second transistor stack 14. The P first base-emitter voltages of the second voltage level are derived from the first transistor Q2 of the first transistor stack 13.

Referring now to FIGS. 10 and 11, in order to compare the sensitivity of the bandgap voltage reference Vref produced by the bandgap voltage reference circuit 60 of FIG. 9 to input voltage offsets of the first and second op-amps A1 and A2 with the sensitivity to op-amp input voltage offset of the bandgap voltage reference produced by the prior art bandgap voltage reference circuit of FIG. 2, two simulations of the prior art bandgap voltage reference circuit of FIG. 2 were made, and three simulations of the bandgap voltage reference circuit 60 of FIG. 9 were made. In the first simulation of the prior art bandgap voltage reference circuit of FIG. 2, the op-amp was assumed to have no input voltage offset, and in the second simulation the op-amp was assumed to have an input voltage offset of 1 millivolt. Waveforms K and L of FIG. 10 represent the voltage reference produced by the two simulations of the prior art bandgap voltage reference circuit of FIG. 2 over the operating temperature range of -40°C to +85°C. The temperature is plotted on the X-axis in °C, and the voltage is plotted on the Y-axis in volts. The waveform K represents the bandgap voltage reference with the op-amp having no input voltage offset. The waveform L represents the bandgap voltage reference with the op-amp having an input voltage offset error of 1 millivolt. As can be seen, the 1 millivolt input voltage offset error of the op-amp is reflected as 5 millivolts into the bandgap voltage reference of the waveform L.

In the first simulation of the bandgap voltage reference circuit 60 of FIG. 9 the first and second op-amps A1 and A2 were assumed to have no input voltage offset. In the second simulation of the bandgap voltage reference circuit 60, the first op-amp A1 was assumed to have a 1 millivolt input voltage offset, and the second op-amp A2 was assumed to have no input voltage offset. In the third simulation of the bandgap voltage reference circuit 60 the second op-amp A2 was assumed to have a 1 millivolt input voltage offset, and the first op-amp A1 was assumed to have no input voltage offset. The waveforms M, N and P of FIG. 11 represent the bandgap voltage references produced by the three simulations of the bandgap voltage reference circuit 60 over the operating temperature range of -40°C to +85°C. In FIG. 11 the temperature is plotted on the X-axis in °C, and the voltage is plotted on the Y-axis in volts. The waveform M represents the bandgap voltage reference with the first and second op-amps of the bandgap voltage reference circuit 60 having no input voltage offsets. The waveform N represents the bandgap voltage reference with the first op-amp A1 having a 1 millivolt input voltage offset, while the waveform P represents the bandgap voltage reference with the second op-amp A2 having a 1 millivolt input voltage offset. As can be seen from FIG.
11, the 1 millivolt input voltage offset of the first op-amp A1 is reflected into the bandgap voltage reference as 1.9 millivolts, while the 1 millivolt input voltage offset of the second op-amp A2 is reflected into the bandgap voltage reference as 1.7 millivolts. Accordingly, the compound offset voltage of the 1 millivolt input voltage offsets of the first and second op-amps A1 and A2, respectively, in the bandgap voltage reference circuit 60 of FIG. 9 is given by the equation:

V2(off)=√(1.9^2+1.7^2)=2.55 mV

Therefore, the bandgap voltage reference produced by the bandgap voltage reference circuit 60 of FIG. 9 is approximately two times less sensitive to the input voltage offsets of the op-amps A1 and A2 than the bandgap voltage reference produced by the prior art bandgap voltage reference circuit of FIG. 2 is to the input voltage offset of its op-amp.

Additionally, the bandgap voltage reference circuits 1, 40 and 60 of FIGS. 4, 5, 7 and 9, respectively, can comfortably operate with a supply voltage of the order of 2.5 volts to 2.7 volts, and are thus particularly suitable for implementation in low voltage CMOS environments. The common input voltage on the inverting and non-inverting inputs of the first op-amps A1 of the circuits 1, 40 and 60 is two second base-emitter voltages above the common ground terminal 4. In other words, at -40°C the common input voltage on the first op-amps A1 of the circuits 1, 40 and 60 is approximately 1.6 volts above the common ground terminal 4. Accordingly, the first op-amps A1 can be provided with pMOS input pairs, since the supply voltage required by pMOS input pairs is approximately 0.8 volts above the common input voltage. At a common input voltage of 1.6 volts, allowing for the additional 0.8 volts by which the supply voltage of pMOS input pairs must be above the common input voltage, a supply voltage of 2.4 volts would be required for the first op-amps A1 of the bandgap voltage reference circuits 1, 40 and 60, which is well within the supply voltage of 2.5 volts to 2.7 volts of low voltage CMOS environments. Additionally, since the second op-amps of the bandgap voltage reference circuits 1, 40 and 60 of FIGS. 4, 5, 7 and 9 operate with a common input voltage of one first base-emitter voltage above the common ground terminal 4, the second op-amps A2 can also be provided with pMOS input pairs and operate well within the supply voltage of 2.5 volts to 2.7 volts of low voltage CMOS environments.

While the bandgap voltage reference circuits 1 and 60 of FIGS. 4, 5 and 9 have been described as comprising a first transistor stack and a second transistor stack of two first transistors and two second transistors, respectively, and while the bandgap voltage reference circuit 40 of FIG. 7 has been described as comprising three first transistors in the first transistor stack and two second transistors in the second transistor stack, the PTAT voltage cells may be provided with any number of first and second transistors in the respective first and second transistor stacks, from one first transistor and one second transistor upwards. However, the more transistors which are stacked in the respective first and second transistor stacks, the greater will be the PTAT voltage ultimately developed. At the same time, the headroom required by the bandgap voltage reference circuit increases for each transistor included in a transistor stack.
Thus, for low voltage applications, such as low voltage CMOS applications, two second transistors in a second transistor stack, and two or three first transistors in a first transistor stack, is optimum. Additionally, the second voltage level which is applied to the second end of the primary resistor r3 must be at least one first base-emitter voltage, but may be more than one first base-emitter voltage. However, the greater the number of first base-emitter voltages provided in the second voltage level, the higher will be the headroom required by the bandgap voltage reference circuit.

If the second voltage level were provided by two base-emitter voltages in the bandgap voltage reference circuit 1 of FIG. 4 and the bandgap voltage reference circuit 60 of FIG. 9, the third transistor which is coupled between the second resistor and the first end of the primary resistor r3 would not be required, and thus could be omitted.

Where the numbers of first and second transistors in the respective first and second transistor stacks of the PTAT cell are similar, the number of third transistors coupling the second resistor r2 to the first end of the primary resistor r3 will depend on the number of first base-emitter voltages in the second voltage level applied to the second end of the primary resistor r3, and on the number of second transistors in the second transistor stack. In order to provide a pure PTAT voltage across the primary resistor r3, the sum of the number of first base-emitter voltages in the second voltage level plus the number of first base-emitter voltages provided in the feedback loop through which the second resistor r2 is coupled to the first end of the primary resistor r3 should be equal to the number of second transistors in the second transistor stack of the PTAT voltage cell.

Where all N first base-emitter voltages of the first voltage level are derived from the first transistor stack, the number of first base-emitter voltages developed in the first transistor stack should be greater than the number of second base-emitter voltages developed in the second transistor stack by an amount equal to the number P of first base-emitter voltages provided in the second voltage level.

Additionally, while the first base-emitter voltages of the second voltage level have been described as being derived from the first base-emitter voltages developed by the first transistors, the first base-emitter voltages of the second voltage level may be derived from any other suitable transistor or transistors capable of providing base-emitter voltages corresponding to the first base-emitter voltages of the first transistors in the first transistor stack. Where more than one first base-emitter voltage is required in the feedback loop of the first op-amp for coupling the second resistor to the first end of the primary resistor, the first base-emitter voltages may be obtained from any suitable number of transistors. Needless to say, the first and second base-emitter voltages of the PTAT cell may likewise be obtained from any suitable number of first and second transistors.

It is envisaged that each single transistor may be implemented as a plurality of transistors, the base-emitters of which would be connected in parallel.
For example, where the bandgap voltage reference circuit is implemented in a CMOS process, each transistor may be implemented as a plurality of bipolar substrate transistors each of unit area, and the area of each of the first and second transistors would be determined by the number of bipolar substrate transistors of unit area connected with their respective base-emitters in parallel. Similarly, where the bandgap voltage reference circuits according to the invention are implemented in a CMOS process, the third transistors could also typically be provided by a plurality of bipolar substrate transistors of unit area, and each third transistor would be provided by the appropriate number of transistors of unit area connected with their base-emitters in parallel to provide the appropriate emitter area.

In general, where the bandgap voltage reference circuits according to the invention are implemented in a CMOS process, the transistors will be bipolar substrate transistors, and the collectors of the transistors will be held at ground, although the collectors of the transistors may be held at a reference voltage other than ground.

Additionally, while the CTAT voltage which is added to the output PTAT voltage developed across the output resistor r4 has been derived from one of the first transistors in the first transistor stack in the bandgap voltage reference circuits of FIGS. 4, 5, 7 and 9, it will be appreciated that the CTAT voltage to be summed with the output PTAT voltage developed across the output resistor r4 may be derived from any other suitable transistor.

While the bandgap voltage reference circuits 1, 40 and 60 have been described as producing a bandgap voltage reference without TlnT temperature curvature correction, it is envisaged that the bandgap voltage reference circuits according to the invention may include TlnT temperature curvature correction to produce a bandgap voltage reference with TlnT temperature curvature correction. It is envisaged that TlnT temperature curvature correction could be provided by forward biasing one or both of the second transistors Q3 and Q4 with a forward biasing current comprising a PTAT current component and a CTAT current component. The introduction of a CTAT current component into the PTAT forward biasing current or currents of either one or both of the second transistors Q3 and Q4 would cause the base-emitter CTAT voltages developed by the relevant second transistors to be developed with a curvature complementary to an uncorrected CTAT voltage with TlnT temperature curvature, and the complementary TlnT temperature curvature would be reflected in the PTAT voltage developed across the first resistor r1. Accordingly, when the amplified PTAT voltage with the complementary TlnT temperature correction is summed with an uncorrected CTAT base-emitter voltage, the complementary TlnT temperature curvature cancels out the TlnT temperature curvature of the CTAT voltage.

While in the embodiment of the invention described with reference to FIGS.
4 and 5 the currents I1, I2, I3 and I4 which are provided for biasing the first transistors Q1 and Q2 and the second transistors Q3 and Q4, respectively, have been described as identical currents, it will be readily apparent to those skilled in the art that, while the currents I1 and I2 should preferably be identical to each other, and the currents I3 and I4 should likewise preferably be identical to each other, the currents I1 and I2 could be greater than the currents I3 and I4. Making the currents I1 and I2 greater than the currents I3 and I4 further increases the ratio of the current densities at which the first transistors Q1 and Q2 operate relative to the current densities at which the second transistors Q3 and Q4 operate, and thereby further increases the value of the base-emitter voltage difference ΔVbe. Similarly, in the embodiment of the invention described with reference to FIG. 7, the currents I1, I2 and I5, while being identical to each other, could be greater than the currents I3 and I4, which likewise would be identical to each other. Similar comments apply to the embodiment of the invention described with reference to FIG. 9 as apply to the embodiment of the invention described with reference to FIGS. 4 and 5 insofar as the currents I1, I2, I3 and I4 are concerned.

While a number of preferred bandgap voltage reference circuits have been described, the invention is not to be considered as limited to such circuits; the invention is limited only by the scope of the claims.
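As a numerical illustration of the reference equation derived for the bandgap voltage reference circuits 1 and 60 (equation (42)), the short Python sketch below evaluates Vref = Vbe(1) + (r4/r3)(1 + r2/r1)·2ΔVbe. All component values in the sketch (Vbe(1), the emitter-area ratio n, and the resistor ratios) are illustrative assumptions chosen only to land near a 1.25 volt reference; they are not values taken from the circuits described above.

import math

BOLTZMANN = 1.380649e-23           # Boltzmann constant, J/K
ELECTRON_CHARGE = 1.602176634e-19  # electron charge, C

def delta_vbe(temp_k: float, n: float) -> float:
    """Base-emitter voltage difference of two bipolar transistors operated
    at a 1:n current-density ratio: dVbe = (kT/q) * ln(n), a PTAT voltage."""
    return (BOLTZMANN * temp_k / ELECTRON_CHARGE) * math.log(n)

def vref(temp_k: float, vbe1: float, n: float,
         r2_over_r1: float, r4_over_r3: float) -> float:
    """Bandgap output per equation (42):
    Vref = Vbe(1) + (r4/r3) * (1 + r2/r1) * 2*dVbe."""
    return vbe1 + r4_over_r3 * (1.0 + r2_over_r1) * 2.0 * delta_vbe(temp_k, n)

# Assumed values: Vbe(1) ~ 0.65 V at 300 K, n = 24, r2/r1 = 0.83, r4/r3 = 2.0.
print(f"Vref ~ {vref(300.0, 0.65, 24.0, 0.83, 2.0):.3f} V")  # ~1.251 V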
Methods and systems for facilitating improved power consumption control of a plurality of processing cores are disclosed. The methods improve power consumption control by performing power throttling based on a determined excess power consumption. The methods include the steps of: monitoring, using at least one event count component in the respective processing core, a plurality of distributed events; calculating an accumulated weighted sum of the distributed events from the event count component; determining an excess power consumption by comparing the accumulated weighted sum with a threshold power value; and adjusting power consumption of the respective processing core based on the determined excess power consumption.
CLAIMS

What is claimed is:

1. A processing system having at least one execution unit, each execution unit comprising: at least one first event count component configured to: monitor a plurality of distributed events in the execution unit, and calculate an accumulated weighted sum of the distributed events; a first master accumulation component coupled with the first event count component, the first master accumulation component configured to: determine an excess power consumption by comparing the accumulated weighted sum with a threshold power value, and adjust power consumption of the respective execution unit based on the determined excess power consumption.

2. The processing system of claim 1, wherein: the threshold power value is selectable from one of a short-term power usage threshold or a long-term power usage threshold, the first master accumulation component further comprises a first logic component configured to compare the accumulated weighted sum with the short-term power usage threshold and a second logic component configured to compare the accumulated weighted sum with the long-term power usage threshold, and the first logic component is configured to compare the accumulated weighted sum with the short-term power usage threshold simultaneously with the second logic component comparing the accumulated weighted sum with the long-term power usage threshold.

3. The processing system of claim 1, further comprising at least one processor engine coupled with the at least one execution unit, each processor engine comprising: a first slave interface operably coupled with the first master accumulation component, the first slave interface configured to receive the accumulated weighted sum from the first master accumulation component; at least one second event count component configured to: receive from the first slave interface the accumulated weighted sum, monitor a plurality of distributed events in the processor engine, and calculate a second accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component; and a second master accumulation component configured to: determine an aggregated power consumption for the execution unit and the processor engine based on the second accumulated weighted sum, determine the excess power consumption by comparing the aggregated power consumption with the threshold power value, and adjust power consumption of the respective execution unit or the processor engine based on the determined excess power consumption.

4. The processing system of claim 3, wherein the second master accumulation component further comprises a first logic component configured to compare the accumulated weighted sum with the short-term power usage threshold and a second logic component configured to compare the accumulated weighted sum with the long-term power usage threshold.

5.
The processing system of claim 3, further comprising a plurality of processor engines and a cache memory coupled with the plurality of processor engines, the cache memory comprising: a second slave interface operably coupled with the second master accumulation component, the second slave interface configured to receive the aggregated power consumption from the second master accumulation component; at least one third event count component configured to: receive from the second slave interface the aggregated power consumption, monitor a plurality of distributed events in the cache memory, and calculate a third accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component and the aggregated power consumption from the second master accumulation component; and a third master accumulation component configured to: determine a second aggregated power consumption for the cache memory based on the third accumulated weighted sum, determine the excess power consumption by comparing the second aggregated power consumption with the threshold power value, and adjust power consumption of the respective execution unit, the processor engine, or the cache memory based on the determined excess power consumption.

6. The processing system of claim 5, wherein the third master accumulation component further comprises a first logic component configured to compare the accumulated weighted sum with the short-term power usage threshold and a second logic component configured to compare the accumulated weighted sum with the long-term power usage threshold.

7. The processing system of claim 5, further comprising: an arbiter coupled with the third master accumulation component and configured to adjust the power consumption by sending a power throttling signal to one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the determined excess power consumption.

8. The processing system of claim 7, wherein the power throttling signal either (a) causes a reduction in instructions per cycle (IPC) of the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein, or (b) is a pulse-width modulation (PWM) throttle signal sent to the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein.

9. The processing system of claim 5, further comprising a plurality of first event count components, second event count components, and/or third event count components, wherein each respective set of event count components is interconnected via at least one ring bus.

10. A memory controller operatively coupled with a cache memory configured to be shared by a plurality of processing cores, each processing core comprising an execution unit and a processor engine, the memory controller configured to: cause the respective processing core to: monitor a plurality of distributed events in the execution unit and calculate an accumulated weighted sum of the distributed events using at least one first event count component, and determine an excess power consumption by comparing the accumulated weighted sum with a threshold power value and adjust power consumption of the execution unit based on the determined excess power consumption using a first master accumulation component.

11.
The memory controller of claim 10, wherein the threshold power value is selectable from one of a short-term power usage threshold or a long-term power usage threshold, and the memory controller is further configured to cause the respective processing core to simultaneously compare the accumulated weighted sum with the short-term power usage threshold and compare the accumulated weighted sum with the long-term power usage threshold.

12. The memory controller of claim 10, further configured to: cause the respective processing core to: receive the accumulated weighted sum from the first master accumulation component using a first slave interface operably coupled with the first master accumulation component, receive from the first slave interface the accumulated weighted sum, monitor a plurality of distributed events in the processor engine, and calculate a second accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component using at least one second event count component, and determine an aggregated power consumption for the execution unit and the processor engine based on the second accumulated weighted sum, determine the excess power consumption by comparing the aggregated power consumption with the threshold power value, and adjust power consumption of the execution unit or the processor engine based on the determined excess power consumption using a second master accumulation component.

13. The memory controller of claim 12, further configured to: receive the aggregated power consumption from the second master accumulation component using a second slave interface operably coupled with the second master accumulation component, receive from the second slave interface the aggregated power consumption, monitor a plurality of distributed events in the cache memory and calculate a third accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component and the aggregated power consumption from the second master accumulation component using at least one third event count component; and determine a second aggregated power consumption for the cache memory based on the third accumulated weighted sum, determine the excess power consumption by comparing the second aggregated power consumption with the threshold power value, and adjust power consumption of the respective execution unit, the processor engine, or the cache memory based on the determined excess power consumption using a third master accumulation component.

14.
The memory controller of claim 13, the memory controller further configured to cause an arbiter coupled with the third master accumulation component to adjust the power consumption by sending a power throttling signal to one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the determined excess power consumption, wherein the power throttling signal either (a) causes a reduction in instructions per cycle (IPC) of the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein, or (b) is a pulse-width modulation (PWM) throttle signal sent to the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein.

15. The memory controller of claim 13, further coupled with a plurality of first event count components, second event count components, and/or third event count components, wherein each respective set of event count components is interconnected via at least one ring bus.

16. A method of controlling power consumption of a plurality of processing cores, the method comprising: monitoring, using at least one event count component, a plurality of distributed events that occur within a respective processing core; calculating an accumulated weighted sum of the distributed events from the event count component; determining an excess power consumption by the respective processing core by comparing the accumulated weighted sum with a threshold power value; and adjusting power consumption of the respective processing core based on the determined excess power consumption.

17. The method of claim 16, wherein the threshold power value is selectable from one of a short-term power usage threshold or a long-term power usage threshold, the distributed events are monitored on an execution unit of the respective processing core, and adjusting the power consumption of the respective processing core includes performing power throttling for the execution unit of the respective processing core.

18. The method of claim 16, wherein the distributed events are further monitored on a processor engine of the respective processing core using a plurality of event count components, and adjusting the power consumption of the respective processing core includes performing power throttling for the processor engine, the method further comprising: determining an aggregated power consumption for the execution unit and the processor engine by aggregating the accumulated weighted sum from the plurality of event count components; and determining the excess power consumption by comparing the aggregated weighted sum with the threshold power value, wherein the plurality of event count components in the execution unit and the processor engine are interconnectedly coupled via at least one ring bus.

19.
The method of claim 18, wherein the distributed events are further monitored on a cache memory shared by the processing cores using the event count components, and adjusting the power consumption of the respective processing core includes performing power throttling for the shared cache memory, the method further comprising: determining a second aggregated power consumption for the execution units, the processor engines, and the shared cache memory by aggregating the aggregated power consumptions from all of the processing cores; and determining the excess power consumption by comparing the second aggregated weighted sum with the threshold power value, wherein the plurality of event count components in the execution units, the processor engines, and the shared cache memory are interconnectedly coupled via a plurality of ring buses.

20. The method of claim 19, further comprising sending a power throttling signal to the respective processing core based on the determined excess power consumption, wherein (a) the power throttling signal causes a reduction in instructions per cycle (IPC) of the respective processing core based on the excess power consumption of the respective processing core, or (b) the power throttling signal is a pulse-width modulation (PWM) throttle signal sent to the respective processing core based on the excess power consumption of the respective processing core.
SYSTEM AND METHOD FOR CONTROLLING POWER CONSUMPTION IN PROCESSOR USING INTERCONNECTED EVENT COUNTERS AND WEIGHTED SUM ACCUMULATORS

BACKGROUND OF THE DISCLOSURE

[0001] In complementary metal oxide semiconductor (CMOS) integrated circuits, in order to adjust power consumption, modern microprocessors have adopted dynamic power management using “P-states.” A P-state is a voltage and frequency combination. An operating system (OS) determines the frequency needed to complete the current tasks and causes an on-chip power state controller to set the clock frequency and operating voltage accordingly. For example, if on average the microprocessor is heavily utilized, then the OS may determine that the frequency should be increased. On the other hand, if on average the microprocessor is lightly utilized, then the OS may determine that the frequency should be decreased. The available frequencies and corresponding voltages for proper operation at those frequencies are stored in a P-state table. As the operating frequency increases, the corresponding power supply voltage also increases, but it is important to keep the voltage low while still ensuring proper operation.

[0002] Processor cores may use performance counters to make processing power measurements of specific events related to instruction execution and data movement through the cores. Specifically, a digital power monitor (DPM) may use event counters to measure specific events in a core or group of cores over a corresponding time period and use them to calculate the power consumed by the core during that time period. That calculated power can then be compared with the thermal design current (TDC) limit of the core or group of cores. Additionally, an electrical design current (EDC) monitor may use event counters to calculate the current drawn by a core or group of cores and compare that to the EDC limit of that core or group of cores.

[0003] The integrated circuits need to complete separate runs or iterations of performance monitoring with respect to TDC and EDC, since they are two separate mechanisms that are associated with different time frames. TDC is the maximum electrical current sustainable over thermally significant time frames that are measured in milliseconds, for example, while EDC is the maximum electrical current sustainable over much shorter, non-thermally significant time frames that are measured in microseconds. Because completing separate runs for these constraints in view of their differing time-frame scales takes time and computing resources, there is a need to combine them such that a single run of performance monitoring would be sufficient.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The implementations will be more readily understood in view of the following description when accompanied by the below figures, wherein like reference numerals represent like elements, and wherein:

[0005] FIG. 1 is an example functional block diagram of a multi-processor core system according to embodiments disclosed herein;

[0006] FIG. 2 is an example functional block diagram of the subcomponents of an event count logic component implemented in the system of FIG. 1 according to embodiments disclosed herein;

[0007] FIG. 3 is an example functional block diagram of the subcomponents of a core and a shared (L3) cache implemented in the system of FIG. 1 according to embodiments disclosed herein;

[0008] FIG. 4 is an example functional block diagram of data flow from the core to the shared cache in the system of FIG.
1 according to embodiments disclosed herein;

[0009] FIG. 5 is an example functional block diagram of the subcomponents of the event count logic component from FIG. 2 according to embodiments disclosed herein;

[0010] FIG. 6 is an example flow diagram of a process implemented in the system according to embodiments disclosed herein;

[0011] FIG. 7 is an example flow diagram of a process involving weighted sum accumulation corresponding to the execution unit and the processor engine as implemented in the system according to embodiments disclosed herein; and

[0012] FIG. 8 is an example flow diagram of a process involving weighted sum accumulation corresponding to the execution unit, the processor engine, and the shared memory as implemented in the system according to embodiments disclosed herein.

DETAILED DESCRIPTION OF IMPLEMENTATIONS

[0013] Briefly, systems and methods facilitate improved power consumption control of a plurality of processing cores by adjusting the power consumption of a respective processing core with excessive power consumption. Specifically, the method of controlling power consumption of a plurality of processing cores includes the steps of: monitoring, using at least one event count component, a plurality of distributed events; calculating an accumulated weighted sum of the distributed events from the event count component; determining an excess power consumption by comparing the accumulated weighted sum with a threshold power value; and adjusting power consumption of the respective processing core based on the determined excess power consumption. The threshold power value is selectable from one of a short-term power usage threshold or a long-term power usage threshold.

[0014] In some examples, the method includes simultaneously comparing the accumulated weighted sum with the short-term power usage threshold and comparing the accumulated weighted sum with the long-term power usage threshold. In some examples, the power consumption is adjusted by generating a power throttle control signal for the respective processing core with excess power consumption. In some examples, the distributed events are monitored on an execution unit of the respective processing core, and the power consumption is adjusted by performing power throttling for the execution unit of the respective processing core. In some examples, the distributed events are further monitored on a processor engine of the respective processing core using a plurality of event count components, and the power consumption is adjusted by performing power throttling for the processor engine. In such examples, the method further includes the steps of: determining an aggregated power consumption for the execution unit and the processor engine by aggregating the accumulated weighted sum from the plurality of event count components, and determining the excess power consumption by comparing the aggregated weighted sum with the threshold power value.

[0015] In some examples, the plurality of event count components in the execution unit and the processor engine are interconnectedly coupled via at least one ring bus. In some examples further to the above, the distributed events are further monitored on a cache memory shared by the processing cores using the event count components, and the power consumption is adjusted by performing power throttling for the shared cache memory.
The method in such examples further includes the steps of: determining a second aggregated power consumption for the execution units, the processor engines, and the shared cache memory by aggregating the aggregated power consumptions from all of the processing cores, and determining the excess power consumption by comparing the second aggregated weighted sum with the threshold power value.

[0016] In some examples, the plurality of event count components in the execution units, the processor engines, and the shared cache memory are interconnectedly coupled via a plurality of ring buses. In some examples, the method further includes the step of sending a power throttling signal to the respective processing core based on the determined excess power consumption. In some examples, the power throttling signal causes a reduction in instructions per cycle (IPC) of the respective processing core based on the excess power consumption of the respective processing core. In some examples, the power throttling signal is a pulse-width modulation (PWM) throttle signal sent to the respective processing core based on the excess power consumption of the respective processing core. In some examples, the execution unit is a floating point unit.

[0017] According to certain implementations, a processing system for controlling power consumption of at least one processing core includes at least one execution unit. The execution unit includes at least one first event count component and a first master accumulation component coupled with the first event count component. The first event count component monitors a plurality of distributed events in the execution unit, and calculates an accumulated weighted sum of the distributed events. The first master accumulation component determines an excess power consumption by comparing the accumulated weighted sum with a threshold power value, and adjusts power consumption of the execution unit based on the determined excess power consumption. The threshold power value is selectable from one of a short-term power usage threshold or a long-term power usage threshold. The first master accumulation component includes a first logic component to compare the accumulated weighted sum with the short-term power usage threshold, and a second logic component to compare the accumulated weighted sum with the long-term power usage threshold.

[0018] In some examples, the first logic component compares the accumulated weighted sum with the short-term power usage threshold at the same time as the second logic component compares the accumulated weighted sum with the long-term power usage threshold. In some examples, the processing system further includes a plurality of first event count components in the execution unit, and at least one ring bus operably coupled to the plurality of first event count components such that the plurality of first event count components are interconnected.

[0019] In some embodiments, the processing system further includes at least one processor engine coupled with the at least one execution unit. The processor engine includes a first slave interface operably coupled with the first master accumulation component, at least one second event count component, and a second master accumulation component. The first slave interface receives the accumulated weighted sum from the first master accumulation component.
The second event count component receives from the first slave interface the accumulated weighted sum, monitors a plurality of distributed events in the processor engine, and calculates a second accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component. The second master accumulation component determines an aggregated power consumption for the execution unit and the processor engine based on the second accumulated weighted sum, determines the excess power consumption by comparing the aggregated power consumption with the threshold power value, and adjusts power consumption of the execution unit or the processor engine based on the determined excess power consumption. The second master accumulation component further includes a first logic component to compare the accumulated weighted sum with the short-term power usage threshold, and a second logic component to compare the accumulated weighted sum with the long-term power usage threshold.

[0020] In some examples, the processing system further includes a plurality of second event count components in the processor engine, and at least one ring bus operably coupled to the plurality of second event count components such that the plurality of second event count components are interconnected.

[0021] In some embodiments, the processing system further includes a plurality of processor engines and a cache memory coupled with the plurality of processor engines. The cache memory includes a second slave interface operably coupled with the second master accumulation component, at least one third event count component, and a third master accumulation component. The second slave interface receives the aggregated power consumption from the second master accumulation component. The third event count component receives from the second slave interface the aggregated power consumption, monitors a plurality of distributed events in the cache memory, and calculates a third accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component and the aggregated power consumption from the second master accumulation component. The third master accumulation component determines a second aggregated power consumption for the cache memory based on the third accumulated weighted sum, determines the excess power consumption by comparing the second aggregated power consumption with the threshold power value, and adjusts power consumption of the execution unit, the processor engine, or the cache memory based on the determined excess power consumption. The third master accumulation component further includes a first logic component to compare the accumulated weighted sum with the short-term power usage threshold, and a second logic component to compare the accumulated weighted sum with the long-term power usage threshold.

[0022] In some examples, the processing system further includes an arbiter coupled with the third master accumulation component. The arbiter is configured to send a power throttling signal to one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the determined excess power consumption.

[0023] In some examples, the power throttling signal causes a reduction in instructions per cycle (IPC) of the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein.
In some examples, the power throttling signal is a pulse-width modulation (PWM) throttle signal sent to the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein. In some examples, the execution unit is a floating point unit.

[0024] According to certain implementations, a memory controller operatively coupled with a cache memory configured to be shared by a plurality of processing cores may be configured to perform the process disclosed herein, where each processing core includes an execution unit and a processor engine. That is, the memory controller causes each of the processing cores to: monitor a plurality of distributed events in the execution unit and calculate an accumulated weighted sum of the distributed events using at least one first event count component, and determine an excess power consumption by comparing the accumulated weighted sum with a threshold power value and adjust power consumption of the execution unit based on the determined excess power consumption using a first master accumulation component. The threshold power value is selectable from one of a short-term power usage threshold or a long-term power usage threshold.

[0025] In some examples, the memory controller causes the respective processing core to simultaneously compare the accumulated weighted sum with the short-term power usage threshold and compare the accumulated weighted sum with the long-term power usage threshold. In some examples, the memory controller further causes each of the processing cores to: receive the accumulated weighted sum from the first master accumulation component using a first slave interface operably coupled with the first master accumulation component, receive from the first slave interface the accumulated weighted sum, monitor a plurality of distributed events in the processor engine, and calculate a second accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component using at least one second event count component, and determine an aggregated power consumption for the execution unit and the processor engine based on the second accumulated weighted sum, determine the excess power consumption by comparing the aggregated power consumption with the threshold power value, and adjust power consumption of the execution unit or the processor engine based on the determined excess power consumption using a second master accumulation component.

[0026] In some examples, the memory controller further receives the aggregated power consumption from the second master accumulation component using a second slave interface operably coupled with the second master accumulation component; receives from the second slave interface the aggregated power consumption, monitors a plurality of distributed events in the cache memory, and calculates a third accumulated weighted sum of the distributed events including the accumulated weighted sum from the first master accumulation component and the aggregated power consumption from the second master accumulation component using at least one third event count component; and determines a second aggregated power consumption for the cache memory based on the third accumulated weighted sum, determines the excess power consumption by comparing the second aggregated power consumption with the threshold power value, and adjusts power consumption of the execution unit, the processor engine, or the cache memory based on the
determined excess power consumption using a third master accumulation component.[0027] In some examples, the memory controller further causes an arbiter coupled with the third master accumulation component to receive the determined excess power consumption and send a power throttling signal to one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the determined excess power consumption. In some examples, the power throttling signal may be an instructions per cycle (IPC) reduction signal for the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein. In some examples, the power throttling signal may be a pulse-width modulation (PWM) throttle signal sent to the one or more of the at least one execution unit, the at least one processor engine, or the cache memory based on the excess power consumption therein.[0028] In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various embodiments may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.[0029] FIG. 1 illustrates a high-level view of an exemplary multi-processor core system 100 according to embodiments disclosed herein. The system 100 includes a plurality of N+1 cores (from core 0 to core N) that are each coupled with a shared cache memory 104, such as an L3 cache, via a data communication bus 103, such as a system bus, to transmit information to and from the cores 102 and the shared cache 104. The cores 102 and the shared cache 104 are disposed on a common die with a clock and an interface for connection to a northbridge (not shown). The northbridge may include a memory controller 128 that may be implemented in any suitable manner, such as one or more state machines, a programmable processor, and/or a combination of a processor executing software instructions and one or more state machines.[0030] Each core 102 includes an execution unit 106, which may be a floating point unit as shown or any other suitable type of execution unit. The core 102 also includes an engine 108, cache memory 110 such as an L2 cache, an event count logic component 112, and a master accumulation logic component 114. The shared cache 104 includes a slave interface 116 that is coupled with each of the master accumulation logic components 114 in the cores 102, as well as its own event count logic component 118 and master accumulation logic component 120. Furthermore, each execution unit 106 includes its own event count logic component 122 and master accumulation logic component 124.
The engine 108 also has its own slave interface 126 that is coupled to the master accumulation logic component 124 of the execution unit 106.[0031] Each of the aforementioned logic components (that is, the event count logic components and the master accumulation logic components) as well as the slave interfaces may be implemented using one or more digital circuit components including but not limited to logic units such as arithmetic logic units (ALU), multiplexers (MUX), registers, control units, etc., according to their functionalities. Such components may be collectively referred to as a digital power monitor (DPM). The engine 108 is a processing engine with components including but not limited to a decoder, a branch predictor, an instruction unit, a load-store unit, an integer execution unit, etc., excluding the execution unit 106, which in the example shown is a floating point unit.[0032] FIG. 2 illustrates a high-level view of the subcomponents of an exemplary event count logic component, for example the event count logic component 112 for the core 102, as well as how the event count logic component operates, according to embodiments disclosed herein. The event count logic component 112 includes a plurality of event detectors 200, where each event detector is capable of detecting the occurrence of a distributed event and maintains a count of its occurrences in a register, as shown. Each event detector 200 receives an event at its input (in the example shown, there are X+1 events that may be detected, that is, Event0 through EventX), and the counter within the event detector 200 maintains the count of the event until the event count is transferred to the next subcomponent of the event count logic component 112, which is a scaler 204. [0033] Some of the events that are monitored may include, for example: predicting the outcome of a branch instruction; accessing instruction data in an instruction cache; accessing instruction data in an "op cache"; dispatching an operation from an instruction decode unit to an integer execution unit; selecting a ready floating-point operation to be executed; selecting a ready integer ALU operation to be executed; selecting a ready load or store operation to be executed; training a cache prefetch mechanism; reading or writing from an L2 cache memory; reading or writing from an L3 cache memory; or reading or writing from external memory, among others.[0034] The scaler 204 receives the event count (EventCnt) from one of the event detectors 200 as selected by a MUX 202. The selected event count (EventCnt) is scaled, or multiplied, using an expanded event weight (EventWeight) as selected from an expanded event weight register 206 that corresponds to the selected type of event. The bits in the scaled event (EventWord) are sent to an adder 208 to be added to the corresponding bits in a temporary aggregated register (AggTmp) 210 and stored therein. The stored bits within the AggTmp register 210 are then sent to a local aggregated register (AggLocal) 214 as well as a shared aggregated register (AggShare) 218.
The bits from the AggTmp register 210 are added to the corresponding bits in the AggLocal register 214 via an adder 212, and also added to the corresponding bits in the AggShare register 218 via an adder 216.[0035] In some examples, the AggLocal register 214 stores a growing sum of local events such that it may be referred to when performing power consumption analysis, for example, in order to determine how each of the events affects the power consumption of the cores, among other types of analysis as suitable. The AggShare register 218 stores the aggregated value of all the events' weighted power consumption (referred to as "Cac" or dynamic capacitance). The aggregated value is also referred to as an accumulated weighted sum of all the events. The AggShare register 218 is capable of communicating with the master accumulation logic component 114 such that the accumulated weighted sum from the event count logic component 112 may be sent to be compared with the appropriate threshold values.[0036] Each core 102 has its own set of event count logic components 112 and master accumulation logic component 114, and the master accumulation logic component 114 is capable of receiving the accumulated weighted sum from each of the event count logic components 112 and aggregating the weighted sums into a single value to be compared with a threshold in order to determine whether the total value of the weighted sums is over the threshold, and if so, by how much. The threshold may be determined based on the thermal design current (TDC) and the electrical design current (EDC) of the cores. The total of the weighted sums indicates the total Cac estimate for the events. In some examples, the values of the AggShare register 218 are transferred serially to the master accumulation logic component 114, and as each bit is transferred, the register clears the transferred bit to reset itself such that the accumulated weighted sum of the next set of events may be stored therein.[0037] FIG. 3 illustrates a high-level view of the subcomponents of an exemplary core and shared cache memory, for example the core 102 and the shared cache memory 104 coupled therewith, according to some embodiments. The cache memory 104 includes the slave interface 116, which is coupled with the cores 102 via the master accumulation logic components 114, as well as its own master accumulation logic component 120. In each of the master accumulation logic components 114 and 120, there is an aggregator module 300 which aggregates all the weighted sums from each of the individual AggShare registers 218 in the event count logic components 112 (in the core 102) or 118 (in the shared cache 104), and a threshold comparator module 302 which compares the aggregated weighted sums with a threshold to determine whether the events exceed the Cac estimation threshold (that is, the state of OverThresh), and if so, by how much they exceed the budget (that is, the amount of OverBudget).[0038] In some examples, the master accumulation logic component 120 is coupled with a scheduler 308 having an arbiter module 306 such that the OverThresh and OverBudget signals are sent to the arbiter module 306 in the form of a throttle signal 304 that is generated by the master accumulation logic component 120 based on the results of the threshold comparator module 302.
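To make this datapath concrete, the following is a minimal Python sketch, offered only as an illustration and not as the patented design, that models the event-weighting and threshold-comparison behavior described above; the class and function names are hypothetical, and the hardware uses registers, adders, and comparators rather than software objects.

```python
# Illustrative model of an event count logic component and the aggregator/
# threshold comparator of a master accumulation logic component. All names
# are hypothetical stand-ins for the hardware described in FIGS. 2 and 3.

class EventCountLogic:
    def __init__(self, weights):
        self.weights = weights              # one EventWeight per event type
        self.counts = [0] * len(weights)    # one EventCnt per event detector
        self.agg_local = 0                  # AggLocal: growing sum for analysis
        self.agg_share = 0                  # AggShare: sum reported upstream

    def record(self, event_id):
        self.counts[event_id] += 1          # an event detector increments its counter

    def accumulate(self):
        # scaler: EventWord = EventCnt * EventWeight, summed into AggTmp,
        # then added into both AggLocal and AggShare
        agg_tmp = sum(c * w for c, w in zip(self.counts, self.weights))
        self.agg_local += agg_tmp
        self.agg_share += agg_tmp
        self.counts = [0] * len(self.weights)

    def drain_share(self):
        # AggShare is sent to the master accumulation logic and cleared
        value, self.agg_share = self.agg_share, 0
        return value

def master_compare(weighted_sums, threshold):
    # aggregator module 300 plus threshold comparator module 302
    total_cac = sum(weighted_sums)          # total Cac estimate
    over_thresh = total_cac > threshold     # OverThresh state
    over_budget = max(0, total_cac - threshold)  # amount of OverBudget
    return over_thresh, over_budget
```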
The throttle signal 304 is a signal sent to the arbiter module 306 to facilitate power throttling, such as PWM throttling or a reduction in IPC, in the component that is determined to be in the OverThresh state, that is, the component whose events exceed the Cac estimation threshold. In some examples, alternative methods of adjusting or modifying power consumption of the respective processor core or the shared memory may involve clock stretching or operating frequency reduction, as suitable. As such, the power throttling facilitates adjustment or modification of the Cac estimate in order to bring the power consumption below the threshold value through any suitable means, including but not limited to PWM throttling or IPC reduction.[0039] Furthermore, each of the master accumulation logic components 114 and 120 includes separate accumulators for short-term and long-term power usage decisions. Each of the separate accumulators may include any suitable components such as logic units including but not limited to ALUs, MUXes, registers, control units, counters, etc., for example, to facilitate comparing the collected power usage data to the appropriate power usage threshold. For example, the aggregator 300 and the threshold comparator 302 may be implemented as part of a first logic component, or short-term accumulator 310, which facilitates power throttling with a very short time interval (for example, once per accumulation packet cycle) associated with the EDC-based threshold, hereinafter referred to as the "EDC threshold". Additionally, a second logic component, or long-term accumulator 312, is included in the master accumulation logic component to facilitate power throttling with a longer time interval associated with the TDC-based threshold, hereinafter referred to as the "TDC threshold". [0040] That is, the long-term accumulator 312 includes an aggregator 301 and a threshold comparator 303 which operate similarly to the aggregator 300 and the threshold comparator 302, respectively, but instead of collecting short-term power usage data and comparing it to the EDC threshold, these components allow for periodically collecting the long-term power usage data (lasting a plurality of accumulation packet cycles), comparing the collected power usage data to the TDC threshold, and managing the voltage and frequency levels of the core(s) 102, and in some cases also the shared cache memory 104, that exceed the TDC threshold, such that the system 100 remains within the limit set forth by the TDC threshold. As such, each of the master accumulation logic components is capable of performing power throttling based on both the short-term EDC threshold and the long-term TDC threshold. [0041] In some examples, the separate accumulators 310 and 312 are capable of simultaneously comparing the collected short-term or long-term power usage data to the EDC threshold or the TDC threshold, respectively, such that the appropriate power throttling may be performed. In some examples, the accumulators 310 and 312 are operable independently of each other, such that either the short-term (or EDC) threshold comparison or the long-term (or TDC) threshold comparison may be selected to be performed by the appropriate accumulator.[0042] As explained above, one of the methods of performing power throttling includes reducing the IPC, for example. Reducing the IPC may be done by reducing the allowed bandwidth for branch prediction, instruction fetch, instruction decode, or instruction execution, for example.
Power throttling may also be performed by reducing processor frequency by various means, including reducing phase-locked loop (PLL) or delay-locked loop (DLL) frequency or using a digital frequency divider, for example. Power throttling may also be performed by reducing the power supply voltage either at a shared voltage regulator or at a voltage regulator local to a specific component, such as a processor core or cache memory, for example. [0043] Each of the master accumulation logic components 114, 120, and 124 is arranged hierarchically; that is, the result of the threshold comparator 302 therein facilitates PWM throttling of the components of equal or lower hierarchy. For example, the master accumulation logic component 120 associated with the shared cache memory 104 is of the highest hierarchy because the throttling signal 304 affects the shared cache memory 104 and the core 102, which is of lower hierarchy than the shared cache memory 104. Similarly, the result of the threshold comparator 302 in the master accumulation logic component 114 associated with the core 102 affects the operation of the engine 108 and the FPU 106, which is of lower hierarchy than the engine 108. Lastly, the result of the threshold comparator 302 in the master accumulation logic component 124 associated with the FPU 106 affects only the operation of the FPU 106, which may be of the lowest hierarchy of these components in some examples. In some examples, there may be any arbitrary number of additional levels of hierarchy that are supported by the system.[0044] FIG. 4 illustrates a high-level view of how the weighted sums of events are accumulated within the core 102 and how they are aggregated with those accumulated within the shared cache memory 104, according to embodiments disclosed herein. As illustrated, the core 102 has a plurality of event count logic components 112 and 400 such that each event count logic component may be capable of counting a different type of event within the core 102. The master accumulation logic component 114 of the core 102 begins the process by sending an initial aggregation packet 402 to the first event count logic component 112, which may be an empty packet to be filled by the corresponding event count logic components within the core 102. In some examples, the initial aggregation packet is issued at equal time intervals, such as approximately every 50 ns, and a new packet may not be issued until the previous aggregation packet is completely sent, but not necessarily received.[0045] The first event count logic component 112 accumulates the initial aggregation packet 402 with the value of the weighted sum from its AggShare register 218 to generate aggregation packet 404. The first event count logic component 112 sends the aggregation packet 404 to the second event count logic component 400 such that the second event count logic component 400 then accumulates the received aggregation packet 404 with the value of the weighted sum (that is, the accumulated Cac) from its AggShare register 218 to generate aggregation packet 406, which is sent back to the master accumulation logic component 114. Although only two event count logic components are shown, it is to be understood that any suitable number of event count logic components may be employed. [0046] The aggregation packet 406 represents the total accumulated Cac from all the event count logic components in the core 102.
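As a hedged illustration of the packet flow just described, the following sketch (reusing the hypothetical drain_share method from the earlier sketch) models how an initially empty aggregation packet is successively accumulated by each event count logic component before returning to the master accumulation logic component.

```python
def ring_aggregate(event_count_components, initial_packet=0):
    # The master accumulation logic issues an initial packet (e.g., packet 402),
    # which each event count logic component accumulates with its AggShare
    # value (producing packets 404, 406, ...) before passing it along the ring.
    packet = initial_packet
    for component in event_count_components:
        packet += component.drain_share()
    return packet  # total accumulated Cac for this level of the hierarchy
```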
After the master accumulation logic component 114 receives the aggregation packet 406, which includes the weighted sums from all the event count logic components associated with the core 102, the aggregation packet 406 may then be transferred to a slave interface, if present. In this instance, the slave interface 116 of the shared cache memory 104 receives the aggregation packet 406 from the core 102. In some examples, the master accumulation logic component 114 may compare the aggregation packet 406 to a threshold value to determine whether the events local to the core 102 cause an OverThresh state local to the core 102.[0047] In the shared cache memory 104, the master accumulation logic component 120 sends an initial aggregation packet 408, similar to the initial aggregation packet 402 in the core 102. The slave interface 116 aggregates the initial aggregation packet 408 with the aggregation packet 406 received from the master accumulation logic component 114 of the core 102 to generate an aggregation packet 410, which in effect is identical to the aggregation packet 406 if the initial aggregation packet 408 is empty. Subsequently, one or more event count logic components 118 associated with the shared cache memory 104 accumulate the aggregation packet 410 with the value of the weighted sum from their AggShare registers 218 to generate a total aggregation packet 412, which is the final aggregation packet that aggregates the weighted sums of events from not only the event count logic components associated with the cores 102 but also those associated with the shared cache memory 104. The master accumulation logic component 120 receives the total aggregation packet 412 and compares the result to the threshold value to determine whether the events cause the OverThresh state in the components, as described.[0048] As shown in FIG. 4, the event count logic components, the master accumulation logic component, and the slave interface, if applicable, may be interconnected using serial buses or ring buses such that the aggregation packets are transferred from one component to the next in the aggregation process. The transfer of data is performed in a hierarchical manner as defined by the slave interfaces; that is, the slave interfaces ensure unidirectional data flow from the master accumulation logic component to the slave interface during the aggregation process. The hierarchy of the master accumulation logic components may be ascertained in the following order: (1) Cac of the events in each core's execution unit, (2) Cac of the events in each core and in all the execution units therein, (3) Cac of the shared cache memory associated with the cores, and (4) Cac of all the cores and the shared cache memory associated with the cores. [0049] FIG. 5 shows an event count logic component 500 in one example implementation, according to embodiments disclosed herein. The event count logic component 500 may represent any one of the event count logic components in the execution unit, core, or shared cache memory, and uses portions of an associated register 504 to perform the calculations and store the results thereof as shown in FIG. 2.[0050] The event count logic component 500 includes a plurality of event detectors 200 to detect the same number of events, or more specifically eight (8) events, Event0 through Event7. The output of each event detector 200 is a 6-bit event counter string (one of Cnt0[5:0] through Cnt7[5:0]), which is sent to the MUX 202 to be selected.
Once selected, the 6-bit event counter is multiplied with a selected 8-bit weight string by the scaler 204, where the weight string (one of Weight0[7:0] through Weight7[7:0]) is selected by a MUX 502 according to the selected event counter.[0051] Once scaled, the scaler 204 outputs a 26-bit scaled data string to be accumulated and stored in the register 504. Specifically, each of the subsections EventWord0[15:0] through EventWord7[15:0] of the register 504 operates as a portion of the registers 210, 214, and 218 shown in FIG. 2. That is, in this example, EventWord0[15:0] through EventWord2[15:0] are collectively used as a 48-bit AggShare register 218, EventWord3[15:0] and EventWord4[15:0] are collectively used as a 32-bit AggTmp register 210, and EventWord5[15:0] through EventWord7[15:0] are collectively used as a 48-bit AggLocal register 214. An adder 506 collectively operates as the adders 208, 212, and 216 in order to calculate the weighted sums from the plurality of events and store the results in the corresponding subsections of the register 504.[0052] The system 100 may be any type of processor system, such as a central processing unit (CPU) or a graphics processing unit (GPU). For example, the system 100 may be implemented as an x86 processor with the x86 64-bit instruction set architecture that is used in desktops, laptops, servers, and superscalar computers; an Advanced Reduced Instruction Set Computer (RISC) Machine (ARM) processor that is used in mobile phones or digital media players; a digital signal processor (DSP) that is useful in the processing and implementation of algorithms related to digital signals, such as voice data and communication signals; or a microcontroller that is useful in consumer applications, such as printers and copy machines.[0053] The cores 102 form the computational centers of the system 100 and are responsible for performing a multitude of computational tasks. For example, the processor cores 102 may include, but are not limited to, execution units that perform additions, subtractions, shifting and rotating of binary digits, and address generation, and load and store units that perform address calculations for memory addresses and the loading and storing of data from memory. The operations performed by the processor cores 102 enable the running of computer applications.[0054] The processor cores 102 operate according to certain performance states (P-states), for example as controlled by the memory controller 128. P-states are described as follows. The Advanced Configuration and Power Interface (ACPI) standard is an operating system-based specification that regulates a computer system's power management. For example, the ACPI standard may control and direct the processor cores for better management of battery life. In doing so, ACPI assigns processor power states, referred to as C-states, and forces a processor to operate within the limits of these states.
There are varying levels of C-states (e.g., C0 for a fully working state, with full power consumption and full dissipation of energy; C1 for a sleeping state, where execution of instructions is stopped and the processor may return to executing instructions instantaneously; or C2 for another sleeping state where the processor may take longer to return to the C0 state) that a processor may be assigned, along with the corresponding implication for a processor's performance.[0055] While a processor is in the fully working C0 state, it will be associated with another state, referred to as the performance state or the P-state. There are varying levels of P-states that are each associated with an operating voltage and frequency. The highest performance state is P0, which may correspond to maximum operating power, voltage, and frequency. However, a processor may be placed in lower performance states, for example P1 or P2, which correspond to lower operating power, voltage, and/or frequency. Generally, when a processor moves to a lower P-state it will operate at a lower capacity than before.[0056] FIG. 6 is a flow diagram of an exemplary process 600 according to embodiments disclosed herein. This process, as well as any other process disclosed herein, may be performed by any suitable means, such as one or more state machines, a programmable processor, and/or a combination of a processor executing software instructions and one or more state machines. In step 602, a plurality of distributed events are monitored in the respective processing core using event counters. In step 604, an accumulated weighted sum of the distributed events from the event counters is calculated. In step 606, the excess power consumption is determined by comparing the accumulated weighted sum with a threshold power value. In step 608, the power consumption of the respective processing core is adjusted or modified (e.g., reduced) based on the determined excess power consumption. In some examples, the power consumption is adjusted via power throttling. [0057] In some examples, the distributed events may be monitored on an execution unit of the respective processing core, and adjusting the power consumption includes performing power throttling for the execution unit of the respective processing core. The method 600 is applicable to distributed events that are monitored in any of the execution unit, the processing engine of the core, or the shared cache memory. As such, the distributed events in some examples include those detected only in the execution unit, those detected in the execution unit and the processing engine of the core, or those detected in the execution unit, the processing engine, and the shared cache memory.[0058] FIG. 7 is a flow diagram of an exemplary process 700, according to embodiments disclosed herein. The process 700 facilitates aggregating the power consumption estimation values of a plurality of distributed events in the execution unit and the processor engine, both of which are implemented as part of the processor core. In step 702, a plurality of distributed events are monitored in each execution unit and processor engine using event counters. The monitoring of distributed events may be performed separately or simultaneously in each of the cores.
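Before continuing with process 700, a minimal sketch of one accumulation cycle of process 600 is given below; the core object and throttle callback are hypothetical placeholders, since the actual steps are performed in hardware as described above.

```python
def power_check_cycle(core, threshold, throttle):
    core.sample_events()                            # step 602: monitor distributed events
    weighted_sum = core.accumulated_weighted_sum()  # step 604: accumulate weighted sum
    excess = weighted_sum - threshold               # step 606: compare with threshold
    if excess > 0:
        throttle(core, excess)                      # step 608: adjust power consumption
```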
In step 704, the weighted sum of the distributed events from each of the event counters corresponding to the execution unit is accumulated to calculate an accumulated weighted sum for the events corresponding to the execution unit.[0059] In step 706, the accumulated weighted sum from step 704 corresponding to the execution unit is aggregated with the weighted sum of the distributed events from the event counters corresponding to the processor engine. The aggregation may be performed by receiving, via the slave interface of the processor engine, the accumulated weighted sum from the master accumulation logic component of the execution unit and further accumulating it with the weighted sum of the distributed events from the event counters corresponding to the processor engine.[0060] In step 708, the excess power consumption of the execution unit and the processor engine is determined by comparing the aggregated power consumption with a threshold power value. In step 710, the power consumption of the respective processing core is adjusted based on the determined excess power consumption. In some examples, the power consumption is adjusted via power throttling. Specifically, the amount of power throttling may be proportional to the amount of excess power consumption by the execution unit and processor engine. The plurality of event count components in the execution unit and the processor engine may be interconnected via at least one ring bus.[0061] FIG. 8 is a flow diagram of an exemplary process 800, according to embodiments disclosed herein. The process 800 facilitates aggregating the power consumption estimation values of a plurality of distributed events in the execution unit, the processor engine, and the shared cache memory that is operably coupled with the processor core implementing the execution unit and the processor engine.[0062] In step 802, the plurality of distributed events in each of the execution unit, processor engine, and shared memory are monitored using event counters corresponding to these components. In step 804, the weighted sum of the distributed events from each of the event counters corresponding to the execution unit is accumulated to calculate an accumulated weighted sum for the events corresponding to the execution unit.[0063] In step 806, the accumulated weighted sum from step 804 corresponding to the execution unit is aggregated with the weighted sum of the distributed events from the event counters corresponding to the processor engine. The aggregation may be performed by receiving, via the slave interface of the processor engine, the accumulated weighted sum from the master accumulation logic component of the execution unit and further accumulating it with the weighted sum of the distributed events from the event counters corresponding to the processor engine.[0064] In step 808, the accumulated weighted sum from step 806 corresponding to the execution unit and the processor engine is aggregated with the weighted sum of the distributed events from the event counters corresponding to the shared memory.
The aggregation may be performed by receiving, via the slave interface of the shared memory, the aggregated weighted sum from the master accumulation logic component of the processor engine and further accumulating it with the weighted sum of the distributed events from the event counters corresponding to the shared memory.[0065] In step 810, the excess power consumption of the execution unit, the processor engine, and the shared memory is determined by comparing the aggregated power consumption with a threshold power value. In step 812, the power consumption of the respective processing core or the shared memory is adjusted based on the determined excess power consumption. In some examples, the power consumption is adjusted via power throttling. A power throttling signal may be sent to the respective processing core or shared memory with excessive power consumption based on the determined excess power consumption. Specifically, the amount of power throttling may be proportional to the amount of excess power consumption by the execution unit, processor engine, and shared memory. The plurality of event count components in the execution unit, the processor engine, and the shared memory may be interconnected via at least one ring bus. [0066] Advantages of implementing the interconnected event counters and weighted sum accumulators as disclosed herein include a more efficient use of the events to track power usage with respect to the TDC and EDC of the cores, despite the former corresponding to thermally significant time frames and the latter to much shorter, non-thermally significant time frames. The methods and systems disclosed herein also facilitate a more flexible and efficient calibration of the processor by using only a single calibration to encompass the power usage tracking of both TDC and EDC, which may vary depending on the type and implementation of the processor.[0067] Furthermore, the methods and systems disclosed herein facilitate a more accurate measurement of the power usage and track the same with respect to the TDC and EDC limitations. As such, more of the EDC power budget may be used, leaving a smaller performance margin. When it is determined that the power usage exceeds the threshold set based on the TDC limit or the EDC limit, power throttling is performed to adjust such power usage. Meeting the TDC and EDC power usage limitations facilitates higher performance in the processor for heavier workloads. In some examples, the use of a combined ring bus configuration for the event counters for monitoring power usage with respect to the TDC and EDC of the cores, that is, instead of having two separate sets of event counters, may also reduce the size of the overall processor system, thereby also improving the manufacturing efficiency of the processor system.[0068] Although features and elements are described above in particular combinations, each feature or element can be used alone without the other features and elements or in various combinations with or without other features and elements. The methods provided may be implemented in a general purpose computer, a processor, or a processor core.
Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors may be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer-readable medium). The results of such processing may be mask works that are then used in a semiconductor manufacturing process to manufacture a processor which implements aspects of the embodiments. [0069] The methods or flow charts provided herein may be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general purpose computer or a processor. Examples of non-transitory computer-readable storage mediums include a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks and digital versatile disks (DVDs).[0070] In the preceding detailed description of the various embodiments, reference has been made to the accompanying drawings which form a part thereof, and in which is shown by way of illustration specific preferred embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized, and that logical, mechanical and electrical changes may be made without departing from the scope of the invention. To avoid detail not necessary to enable those skilled in the art to practice the invention, the description may omit certain information known to those skilled in the art. Furthermore, many other varied embodiments that incorporate the teachings of the disclosure may be easily constructed by those skilled in the art. Accordingly, the present invention is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents as can be reasonably included within the scope of the invention. The preceding detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims. The above detailed description of the embodiments and the examples described therein have been presented for the purposes of illustration and description only and not by limitation. For example, the operations described may be done in any suitable order or manner. It is therefore contemplated that the present invention covers any and all modifications, variations, or equivalents that fall within the scope of the basic underlying principles disclosed above and claimed herein.
The disclosure includes a method and system of configuring a translation lookaside buffer (TLB). In an embodiment, the TLB includes a first portion and a second portion. The first portion or the second portion may be selectively disabled in response to a value of a TLB configuration indicator.
1. A method comprising: receiving at least one translation lookaside buffer (TLB) configuration indicator; and modifying the number of searchable entries of the TLB in response to the value of the TLB configuration indicator.
2. The method of claim 1, wherein the at least one TLB configuration indicator is received from an operating system.
3. The method of claim 2, wherein the at least one TLB configuration indicator is determined in response to a TLB miss rate exceeding a threshold.
4. The method of claim 1, wherein the at least one TLB configuration indicator comprises a bit in a configuration register.
5. The method of claim 1, wherein modifying the number of searchable entries includes enabling a portion of the TLB to increase the number of searchable entries.
6. The method of claim 5, further comprising setting an invalidation indicator for each of the searchable entries in the enabled portion of the TLB.
7. The method of claim 1, wherein the TLB configuration indicator determines whether the TLB has a first number of available entries or a second number of available entries.
8. The method of claim 1, further comprising deactivating a portion of the TLB to reduce the number of searchable entries.
9. The method of claim 8, further comprising copying data from at least one entry of the deactivated portion of the TLB to at least another portion of the TLB.
10. The method of claim 8, further comprising powering down the deactivated portion of the TLB.
11. A method comprising: determining a miss rate of a translation lookaside buffer (TLB); detecting that the TLB miss rate exceeds a threshold; and sending an instruction to increase the size of the TLB after detecting that the TLB miss rate has exceeded the threshold.
12. The method of claim 11, wherein the TLB miss rate is based on the number of attempted TLB queries that caused an exception compared to the total number of TLB queries.
13. The method of claim 11, further comprising setting at least one configuration indicator at a configuration register to indicate a number of enabled portions of the TLB.
14. A computer-readable medium comprising: a configuration register containing a translation lookaside buffer (TLB) configuration field, the TLB configuration field containing a TLB configuration value; wherein the TLB configuration value identifies a first set value or a second set value, the TLB having a first number of searchable entries when the TLB configuration value identifies the first set value, and the TLB having a second number of searchable entries when the value identifies the second set value, the second number being different from the first number.
15. The medium of claim 14, wherein the TLB configuration field is programmable by a processor under software control.
16. The medium of claim 15, wherein the software control is performed by an operating system.
17. The medium of claim 14, wherein the TLB configuration field has at least two bits, and wherein the value is configured to further identify a third set value or a fourth set value, the third set value being related to a third portion of the TLB and the fourth set value being related to a fourth portion of the TLB.
18. A system comprising: translation lookaside buffer (TLB) configuration bits stored in memory; and a TLB including a first portion and a second portion, wherein the first portion is selectively disabled in response to a value of the TLB configuration bits.
19. The system of claim 18, further comprising a logic element responsive to the memory, the logic element having an output
coupled to the TLB, wherein the first portion is selectively deactivated in response to the output of the logic element.
20. The system of claim 19, wherein the logic element is further responsive to a memory management unit control signal.
21. The system of claim 18, wherein the first portion contains half of the entries in the TLB.
22. The system of claim 18, wherein the first portion contains a different number of entries than the second portion.
23. The system of claim 18, further comprising a multiplexer responsive to the output of the TLB, wherein the multiplexer selects those outputs that are enabled in response to a TLB configuration bit set value.
24. The system of claim 18, wherein a plurality of entries in the TLB are populated by a software program.
25. The system of claim 18, wherein the TLB is incorporated into a processor configured to execute a software application, and wherein the software application has a first mode of operation using a first number of TLB entries, the first number of TLB entries being less than the number of entries in the first portion of the TLB.
Configurable translation lookaside buffer

Technical field
The invention relates generally to translation lookaside buffers.

Background
Advances in technology have resulted in smaller and more powerful personal computing devices. For example, there currently exist many portable personal computing devices, including wireless computing devices such as portable wireless telephones, personal digital assistants (PDAs), and paging devices, which are smaller, lighter, and more easily carried by users. More specifically, portable wireless telephones, such as cellular telephones and IP telephones, can transmit voice and data packets over wireless networks. In addition, many of these wireless telephones include other types of devices incorporated therein. For example, a wireless telephone may also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Moreover, these wireless telephones can process executable instructions, including software applications, such as a web browser application that can be used to access the Internet. As such, these wireless telephones can include considerable computing power.

Processes performed at a portable computing device may use virtual addresses to reference data and instructions, and these virtual addresses must be translated into physical addresses for processing. A translation lookaside buffer (TLB) can store data for rapid translation of virtual addresses into physical addresses and can improve application performance by reducing the delays associated with translating virtual addresses. However, power consumption can also increase due to the operation of the TLB. The increased power consumption may cause a corresponding reduction in the operating time of the portable personal computing device before its battery needs to be replaced or recharged.

Summary of the invention
In a particular embodiment, a method is disclosed that includes receiving at least one translation lookaside buffer (TLB) configuration indicator. The method further includes modifying the number of searchable entries of the TLB in response to the value of the TLB configuration indicator.

In another particular embodiment, a method is disclosed that includes determining a translation lookaside buffer (TLB) miss rate. The method includes detecting that the TLB miss rate exceeds a threshold. The method further includes sending an instruction to increase the size of the TLB after detecting that the TLB miss rate has exceeded the threshold.

In another particular embodiment, a system is disclosed that includes a translation lookaside buffer (TLB) configuration bit that is stored in memory. The system also includes a TLB, which includes a first portion and a second portion. The first portion is selectively disabled in response to the value of the TLB configuration bit.

In another particular embodiment, a computer-readable medium is disclosed. The computer-readable medium includes a configuration register including a first field and a second field. The second field contains a translation lookaside buffer (TLB) configuration value. The TLB configuration value identifies a first set value or a second set value.
When the TLB configuration value identifies the first set value, the TLB has a first number of searchable entries, and when the value identifies the second set value, the TLB has a second number of searchable entries.

One particular advantage provided by the disclosed embodiments is the reduced power consumption achieved by selectively deactivating a number of searchable TLB entries.

Other aspects, advantages, and features of the invention will become apparent after reviewing the entire application, which contains the following sections: description of the drawings, detailed description, and claims.

Brief description of the drawings
FIG. 1 is a functional diagram of a specific illustrative embodiment of a system including a configurable translation lookaside buffer (TLB);
FIG. 2 is a functional diagram of a second illustrative embodiment of a system including a configurable TLB;
FIG. 3 is a flowchart of a specific illustrative embodiment of a method of configuring a TLB;
FIG. 4 is a flowchart of a second illustrative embodiment of a method of configuring a TLB; and
FIG. 5 is a block diagram of a portable communication device including a configurable TLB.

Detailed description
Referring to FIG. 1, a specific illustrative embodiment of a system including a configurable TLB is depicted and shown generally at 100. The system 100 includes an operating system 102, a processor 104, an interrupt controller 106, and a software application 108. The operating system 102 responds to inputs from the software application 108 and inputs from the interrupt controller 106. The processor 104 is responsive to the operating system 102 and provides an interrupt output, which is provided to the interrupt controller 106.

In a particular embodiment, the processor 104 includes a configuration register 110 containing a plurality of configuration fields, including a TLB size indicator 112. The processor 104 also includes a translation lookaside buffer (TLB) size selection logic circuit 114 that is responsive to the configuration register 110 and to the memory management unit (MMU) 116. The processor 104 further includes a TLB 118 that is responsive to the TLB size selection logic circuit 114 and to the MMU 116.

In a particular embodiment, the processor 104 is operable to control the number of searchable entries available at the TLB 118. The processor 104 may be configured to receive instructions from the operating system 102 via the signal 126 and update the value of the TLB size indicator 112 in response to the instructions. The TLB size selection logic 114 may be adapted to provide an output to the TLB 118 based on the value of the TLB size indicator 112 and based on the input received from the MMU 116.

The TLB 118 includes at least two sections, such as a first section 120, a second section 122, and a third section 124, as illustrated. In a particular embodiment, the TLB 118 includes a first part and a second part. In another embodiment, the TLB 118 includes a first part, a second part, a third part, and a fourth part. It should be understood that the TLB 118 may include multiple parts, and may include more than four parts, depending on the specific application and system design constraints.

Each TLB portion 120 to 124 contains one or more TLB entries to store data used to translate virtual addresses into physical addresses. In a particular embodiment, the TLB 118 is software-programmable so that each of the entries of the TLB 118 can be populated by a software program.
In addition, one or more of the TLB portions 120 to 124 are configured to be selectively disabled or enabled based on the output of the TLB size selection logic circuit 114. In a particular embodiment, the processor 104 is configured to send a TLB miss signal 150 to the interrupt controller 106 when the virtual address to be translated does not match any of the entries of the enabled TLB portions 120-124.

In a particular embodiment, the interrupt controller 106 is adapted to receive one or more TLB miss signals 150 and initiate an interrupt or exception handling in response to each of the TLB miss signals. The interrupt controller 106 may be configured to provide the control output 142 to the operating system 102 in response to the received TLB miss signal 150.

In a particular embodiment, the operating system 102 includes a TLB size module 130 that is executable to determine a selected size of the TLB 118 based on data received from one or more software applications 108, from the interrupt controller 106, or any combination thereof. The TLB size module 130 may include a TLB miss rate evaluation module 132 that is executable to evaluate a TLB miss rate based on a control output 142 from the interrupt controller 106 that provides TLB miss data. In a particular embodiment, the operating system 102 is configured to automatically monitor and update the TLB size of the TLB 118, the number of enabled TLB portions 120 to 124, or the number of TLB entries based on the determined TLB miss rate.

In the illustrative embodiment, the processor 104 is an interleaved multi-threaded pipelined processor. The configuration register 110 and the TLB 118 may be shared among different processing threads of the processor 104. The operating system 102 may be adapted to support multi-threaded processing at a wireless communication device.

During operation, the operating system 102 may receive one or more inputs 140 specifying one or more TLB configuration parameters from one or more software applications 108. As an illustrative, non-limiting example, an input 140 may indicate the number of TLB entries required or preferred for each software application 108. The operating system 102 may also receive TLB miss rate information from the interrupt controller 106 or another device, and may determine the TLB miss rate at the TLB miss rate evaluation module 132. Each TLB miss, which occurs when the TLB 118 receives a translation query for a virtual address that is not stored in a searchable entry of the TLB, results in a processing delay while the physical address corresponding to the virtual address is located by searching a page table (not shown) and then loaded into an entry of the TLB 118. The TLB miss rate may indicate the percentage of TLB queries that caused TLB misses, the ratio of TLB misses to non-TLB misses (i.e., TLB "hits"), the number of TLB misses per unit of time, or other factors that reflect TLB performance information.

The operating system 102 may determine the TLB size setting value at the TLB size module 130 based on the data received from the software application 108, the TLB miss rate data, or any combination thereof.
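As a hedged illustration of this decision, the following sketch shows one possible miss-rate policy for the TLB size module 130, using hypothetical upper and lower thresholds; it anticipates the threshold comparison described in the following paragraphs and is not the patented implementation.

```python
def select_enabled_sections(miss_rate, enabled, total_sections,
                            upper=0.10, lower=0.01):
    # Hypothetical thresholds: enable another section when misses are costly,
    # disable one when the miss rate is low enough that power savings dominate.
    if miss_rate > upper and enabled < total_sections:
        return enabled + 1   # reduce the miss rate, improve performance
    if miss_rate < lower and enabled > 1:
        return enabled - 1   # save power without significant performance loss
    return enabled           # keep the current configuration
```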
In a particular embodiment, the operating system 102 transmits the determined TLB size setting value to the processor 104 via a signal 126.

For example, in the illustrative embodiment, the TLB size module 130 receives an indication of the number of TLB entries from one or more software applications 108, and may determine the number of TLB sections to be enabled in order to provide a sufficient number of TLB entries for the software applications 108 to execute with an acceptably small processing delay due to TLB misses. The TLB size module 130 may also compare the TLB miss rate data received from the TLB miss rate evaluation module 132 with one or more thresholds. For example, if the TLB miss rate exceeds the upper threshold, the TLB size module 130 may determine that one or more additional TLB sections 120 to 124 should be enabled to reduce the TLB miss rate and improve processing performance. However, if the TLB miss rate is below the lower threshold, then the TLB size module 130 may determine that one or more TLB sections 120 to 124 should be disabled in order to reduce power consumption without significantly degrading performance due to increased TLB misses.

In a particular embodiment, the signal 126 generated by the operating system 102 includes instructions to set the value of the TLB size indicator 112. In an illustrative embodiment, the operating system 102 instructs the processor 104 to increment or decrement the number of enabled TLB portions 120 to 124. In another embodiment, the operating system 102 instructs the processor 104 to enable a specific number of TLB portions 120 to 124, or designates a specific number of TLB portions 120 to 124 to be enabled. In a particular embodiment, the operating system 102 instructs the processor 104 to write a specific value to the TLB size indicator 112.

As an illustrative example, in an embodiment where the TLB 118 includes only two parts, the first TLB part may always be enabled, and the TLB size indicator 112 may be a single bit value. The operating system 102 may instruct the processor 104 to write a logical "1" value to the TLB size indicator 112 to disable the second TLB portion, or to write a logical "0" value to the TLB size indicator 112 to enable the second TLB portion. As another example, in an embodiment where the TLB 118 includes more than two TLB sections, the operating system 102 may instruct the processor 104 to program a value to the TLB size indicator 112, where the value is a binary representation of the number of TLB sections to be enabled. To illustrate, the TLB size indicator 112 may include two bits indicating four set values, where each set value is related to a different number of enabled TLB portions (and therefore a different number of searchable TLB entries). As another example, the TLB size indicator 112 may include a dedicated bit for each TLB portion, allowing the operating system 102 to selectively enable or disable a particular TLB portion.

In response to the set value of the TLB size indicator 112, the TLB size selection logic circuit 114 provides a command signal to the TLB 118. Based on the command signal from the TLB size selection logic circuit 114, and also based on the input from the MMU 116, the TLB 118 is configured to use one or more of the TLB portions during operation, such as the indicated TLB portions 120 to 124.
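The indicator encodings described above might be decoded as in the following sketch; the scheme names and the helper itself are hypothetical, since the indicator format is implementation-specific.

```python
def decode_size_indicator(value, scheme, total_portions):
    if scheme == "single-bit":    # two-portion TLB: "1" disables the second portion
        return 1 if value else 2
    if scheme == "count":         # value is a binary count of portions to enable
        return max(1, min(value, total_portions))
    if scheme == "per-portion":   # one dedicated enable bit per portion
        return bin(value).count("1")
    raise ValueError(f"unknown scheme: {scheme}")
```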
When a command signal from the TLB size selection logic 114 indicates that fewer than all of the TLB portions 120 to 124 will be used, the TLB 118 can deactivate, and optionally power down, the disabled or unused TLB portions to save power and resources of the processor 104.

In an embodiment, the first TLB portion 120 may be selectively disabled in response to a value of a TLB configuration bit stored in memory (e.g., one or more bits within the TLB size indicator 112 stored in the configuration register 110). The processor 104 contains a logic element, such as the TLB size selection logic circuit 114, that is responsive to the memory. The logic element has an output coupled to the TLB 118, and any of the TLB sections 120, 122, and 124 can be selectively disabled in response to the output of the logic element. In a particular embodiment, the first TLB portion 120 may include half of the entries in the TLB 118. In another embodiment, the first TLB portion 120 may include one third, one quarter, or any other portion of the entries in the TLB 118. In addition, the first TLB portion 120 and the second TLB portion 122 may be the same size or different sizes.

The system 100 containing the software applications 108 may be used in various modes of operation. In the first mode of operation, the software application 108 may require only a single TLB entry for execution purposes. In this first mode of operation, the software application 108 may direct the operating system 102 to set the TLB size indicator 112 in the configuration register 110 so that only a single entry (or a single portion) in the TLB 118 is used. In this first mode of operation, the software application 108 can execute normally, and the TLB 118 can be used in a low-power and efficient manner because only a single entry in the TLB 118 is utilized. An example of a software application 108 that can be configured to use a single entry of the TLB 118 is a Moving Picture Experts Group (MPEG)-1 Audio Layer 3 (MP3) type application.

In the second mode of operation, the software application 108 may require multiple TLB entries, and may even require that all entries of the TLB 118 be utilized. In this second mode of operation, multiple TLB entries are enabled, and all TLB entries can be enabled, depending on the performance requirements of the software application 108. It should be understood that the software application 108 contains program code executable by the processor 104, and the software application 108 is described separately for exemplary and illustrative purposes only.

Referring to FIG. 2, a second specific embodiment of a system including a configurable TLB is depicted and represented generally at 200. In a particular embodiment, the system 200 illustrates a portion of the system 100 of FIG. 1. The system 200 includes a translation lookaside buffer (TLB) 202, a TLB configuration indicator 208, a memory management unit (MMU) 210, TLB configuration logic 212, an output logic circuit 214, and a power logic circuit 216. The TLB configuration logic 212 is coupled to receive inputs from the TLB configuration indicator 208 and the MMU 210. In a particular embodiment, the TLB configuration indicator 208 is the TLB size indicator 112 of FIG. 1. The TLB configuration logic 212 is coupled to provide output control signals to the power logic circuit 216 and to the TLB 202.
The TLB 202 is coupled to provide multiple outputs to an output logic circuit 214, which is in turn configured to generate an output 244.

The TLB 202 includes a first representative portion 204 and a second representative portion 206. The first representative portion 204 contains a first plurality of entries 220. Each of the first plurality of entries 220 includes a first valid field 222, an address space identifier (ASID) field 224, a virtual page number (VPN) field 226, and a physical page number (PPN) field 228. Similarly, the second representative portion 206 of the TLB 202 contains a second plurality of entries 234.

The first representative portion 204 also includes a first enable input 218 that is responsive to the TLB configuration logic 212 to selectively enable or disable the search for the first plurality of entries 220. The second representative portion 206 includes a second enable input 230 that is responsive to the TLB configuration logic 212 to selectively enable or disable the search for the second plurality of entries 234. In addition, the second representative portion 206 of the TLB 202 includes a power input 232 that is responsive to the power logic circuit 216 to selectively activate or deactivate power reaching the second representative portion 206. Although not shown, in a specific embodiment, the first representative portion 204 may further include an input that is responsive to the power logic circuit 216 to selectively activate or deactivate power reaching the first representative portion 204 of the TLB 202.

The output logic circuit 214 includes a selection circuit 240 and a multiplexer 242. The multiplexer 242 responds to each of a plurality of outputs from the TLB 202. The selection circuit 240 is responsive to the TLB configuration logic 212 and controls the multiplexer 242 to selectively provide selected entries of the TLB 202 as the resulting output 244.

During operation, the TLB configuration logic 212 receives inputs from the TLB configuration indicator 208 and from the MMU 210. The TLB configuration logic 212 generates an output signal based on the received input, and the output signal is provided to the first enable input 218, the second enable input 230, the power logic circuit 216, and the output logic circuit 214. One or more portions of the TLB 202 (such as the illustrated portions 204 and 206) may be dynamically enabled or disabled based on the output signals of the TLB configuration logic 212. When one or more of the TLB portions 204 and 206 are disabled, the power reaching the disabled portion may also be disconnected via the power logic circuit 216 to further save power resources. In addition, when one or more portions of the TLB 202 are disabled or powered down, the outputs of those portions are invalid. Therefore, the selection circuit 240 within the output logic circuit 214 is configured to control the multiplexer 242 to mask the invalid output signals of the deactivated portions of the TLB 202 by deselecting those portions via the multiplexer 242, such that the resulting output 244 propagates only valid selected entries of the TLB 202. Thus, in a particular embodiment, the multiplexer 242 responds to the outputs of the TLB 202, and also selects only the outputs of the TLB 202 portions that are enabled in response to a configuration bit set value in a configuration register, as indicated by the TLB configuration logic 212.
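A software model of the entry fields 222 to 228 and of the portion-masked search may clarify the structure; the C sketch below is illustrative only: the portion sizes are assumptions, and a hardware TLB would perform the search as a parallel content-addressable lookup rather than a loop.

    #include <stdbool.h>
    #include <stdint.h>

    #define PORTION_ENTRIES 32
    #define NUM_PORTIONS    2

    struct tlb_entry {            /* cf. fields 222 to 228 of FIG. 2 */
        bool     valid;
        uint8_t  asid;            /* address space identifier */
        uint32_t vpn;             /* virtual page number */
        uint32_t ppn;             /* physical page number */
    };

    struct tlb {
        struct tlb_entry portion[NUM_PORTIONS][PORTION_ENTRIES];
        bool             portion_enabled[NUM_PORTIONS];
    };

    /* Search only the enabled portions; disabled portions are masked out,
     * as the multiplexer 242 does in hardware. Returns true on a hit. */
    bool tlb_lookup(const struct tlb *t, uint8_t asid, uint32_t vpn,
                    uint32_t *ppn)
    {
        for (int p = 0; p < NUM_PORTIONS; p++) {
            if (!t->portion_enabled[p])
                continue;
            for (int e = 0; e < PORTION_ENTRIES; e++) {
                const struct tlb_entry *ent = &t->portion[p][e];
                if (ent->valid && ent->asid == asid && ent->vpn == vpn) {
                    *ppn = ent->ppn;
                    return true;
                }
            }
        }
        return false;             /* TLB miss */
    }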
In another embodiment where the output of the deactivated portion of the TLB 202 is limited to a specific value (such as a logic "0" value or a high impedance state), the output logic circuit 214 may not include the selection circuit 240 responsive to the TLB configuration logic 212, and may instead include other output selection logic, such as a node configured to dynamically select only the active outputs of the TLB 202.

In a particular illustrative embodiment, the first representative portion 204 includes 32 entries 220 and the second representative portion 206 includes 32 entries 234. The TLB configuration indicator 208 may be configured such that a default logical "0" value indicates that all 64 TLB entries 220 and 234 will be enabled for searching, and a logical "1" value indicates that only the first 32 entries 220 (the entries in portion 204) are used for searching. The MMU 210 may be configured to provide a logical "1" at the output 250 when a TLB search is to be performed and a logical "0" otherwise. The TLB configuration logic 212 may, for example, generate an output 252 that is a logic "1" when the MMU output 250 is "1" and the TLB configuration indicator 208 is "0" via an "AND" element that couples the TLB configuration indicator 208 to an inverting input and couples the MMU output 250 to a second input.

In a particular illustrative embodiment, multiple steps may be performed when the TLB configuration indicator 208 is reset from a default "0" value (e.g., 64 searchable entries) to "1" (e.g., 32 searchable entries) so that any valid entry in the second representative portion 206 is transferred to the first representative portion 204 before the second representative portion 206 is deactivated. For example, all valid entries in the second representative portion 206 (such as those having a "1" in the corresponding valid field 222) may be copied to unused entries in the first representative portion 204. Similarly, when the TLB configuration indicator 208 is reset from "1" to "0", the valid field 222 of each of the entries 234 of the second representative portion 206 may be set to "0" to indicate that the newly enabled entries are invalid. These operations can be controlled by hardware, software, or any combination thereof.

Referring to FIG. 3, a specific illustrative embodiment of a method of configuring a TLB is depicted and is shown generally at 300. In the method, at 302, at least one translation lookaside buffer (TLB) configuration indicator is received. In a particular embodiment, the TLB configuration indicator is received at a processor (e.g., processor 104 of FIG. 1) containing a configurable TLB. A TLB configuration indicator may be received at the processor from an operating system (e.g., operating system 102 of FIG. 1). The TLB configuration indicator may be determined in response to a TLB miss rate exceeding a threshold, in response to a software application, in response to one or more other events related to address translation, or any combination thereof. In a particular embodiment, the TLB configuration indicator determines whether the TLB has a first number of available entries or a second number of available entries.

At 304, a determination may be made, based on the value of the TLB configuration indicator, to increase or decrease the number of searchable TLB entries.
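Before the decision paths at 305 are detailed, a C sketch of the two transitions just described, reusing the struct tlb model from the previous sketch (an assumption): valid entries are copied out of a portion before it is disabled, and a portion's entries are invalidated when it is re-enabled. Entries that do not fit in the remaining portion are simply dropped and will be refilled on a later miss.

    /* Shrink: copy valid entries from the portion being disabled into
     * unused slots of portion 0 (cf. the 64 -> 32 entry example). */
    void tlb_shrink(struct tlb *t, int victim)
    {
        for (int e = 0; e < PORTION_ENTRIES; e++) {
            struct tlb_entry *src = &t->portion[victim][e];
            if (!src->valid)
                continue;
            for (int f = 0; f < PORTION_ENTRIES; f++) {
                struct tlb_entry *dst = &t->portion[0][f];
                if (!dst->valid) {        /* first unused slot */
                    *dst = *src;
                    src->valid = false;
                    break;
                }
            }
        }
        t->portion_enabled[victim] = false;  /* power-down may follow */
    }

    /* Grow: stale data may remain from before power-down, so mark every
     * newly enabled entry invalid before searches can hit it. */
    void tlb_grow(struct tlb *t, int portion)
    {
        for (int e = 0; e < PORTION_ENTRIES; e++)
            t->portion[portion][e].valid = false;
        t->portion_enabled[portion] = true;
    }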
In a particular embodiment, the determination may be made by comparing one or more bit values of the TLB size field of the processor configuration register with the current TLB configuration. In an illustrative embodiment, the determination may be made by the TLB size selection logic circuit 114 of FIG. 1.

In a particular embodiment, when the number of searchable TLB entries is to be increased at decision step 305, at 306 a portion of the TLB may be enabled to increase the number of searchable entries. In a particular embodiment, the newly enabled portion of the TLB may store data from a previous period of operation before the TLB portion was deactivated, and thus the data of the newly enabled entries may be unreliable. Therefore, at 308, an invalidation indicator may be set for each of the entries in the enabled portion of the TLB. In an illustrative embodiment, the TLB may be the TLB 202 of FIG. 2, and the invalidation indicator may be a "0" bit value stored in the valid field 222 of each TLB entry.

Alternatively, where the number of searchable TLB entries is to be reduced at decision step 305, at 310, a portion of the TLB may be deactivated to reduce the number of searchable entries. As an illustrative example, the deactivation operation may include making selected portions of the TLB unavailable for search without powering down the TLB. In another embodiment, selected portions of the TLB may be powered down after being disabled. For example, when the size of the TLB is decreased in response to a low TLB miss rate, selected portions of the TLB may remain disabled for a period of time to ensure that the new TLB miss rate is acceptable before powering off the disabled portions of the TLB. In another embodiment, deactivating the portion of the TLB may include powering down the portion of the TLB.

In certain embodiments, the portion of the TLB to be deactivated may contain address translation data that should be retained at the TLB in association with one or more ongoing processes. Data may be copied from at least one entry in the deactivated portion of the TLB to at least one other portion of the TLB (as shown at 312). In this way, data from the disabled portion of the TLB can be saved for future use.

Referring to FIG. 4, an illustrative embodiment of a method using a configurable TLB is depicted and is shown generally at 400. In a particular embodiment, the method 400 may be performed by the operating system 102 of FIG. 1. The method includes determining a TLB miss rate at 402. In a particular embodiment, the TLB miss rate is based on the number of attempted TLB queries that caused an exception compared to the total number of TLB queries. In an illustrative embodiment, data from an interrupt controller that receives an interrupt generated in response to a TLB miss may be used to determine the TLB miss rate.

Proceeding to 404, it is detected that the TLB miss rate exceeds the threshold. Proceeding to 406, after detecting that the TLB miss rate has exceeded the threshold, an instruction to increase the TLB size is sent. In an illustrative embodiment, instructions may be sent to a processor containing a configurable TLB, such as the processor 104 of FIG. 1.

Proceeding to 408, in a particular embodiment, at least one configuration indicator is set at the configuration register to indicate the number of enabled portions of the TLB.
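One way the miss rate at 402 might be computed is from counters maintained by the handler of the TLB-miss interrupt mentioned above; the counter names below are assumptions, and such a function could back the read_tlb_miss_rate() hook assumed in the earlier policy sketch.

    #include <stdint.h>

    /* Counters assumed to be maintained elsewhere: the TLB-miss interrupt
     * handler increments g_tlb_misses, and g_tlb_lookups counts the total
     * number of attempted TLB queries. */
    extern volatile uint64_t g_tlb_misses;
    extern volatile uint64_t g_tlb_lookups;

    /* Miss rate = queries that caused an exception / total queries. */
    double tlb_miss_rate(void)
    {
        uint64_t lookups = g_tlb_lookups;
        return lookups ? (double)g_tlb_misses / (double)lookups : 0.0;
    }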
In an illustrative embodiment, the at least one configuration indicator includes one or more bits of a TLB size indicator field (such as the TLB size indicator 112 of FIG. 1). The at least one configuration indicator may be set in response to an instruction, such as the signal 126 sent from the operating system 102 of FIG. 1.

FIG. 5 illustrates an exemplary, non-limiting embodiment of a portable communication device including a configurable TLB, shown generally at 520. As illustrated in FIG. 5, the portable communication device 520 includes a system on chip 522 including a digital signal processor 524 and a configuration register 580. In a particular illustrative embodiment, the configuration register 580 includes one or more indicators to determine the number of searchable entries at the translation lookaside buffer (TLB) 590. The TLB 590 contains multiple TLB sections that can be selectively enabled or disabled based on system requirements and power usage considerations. In a particular embodiment, the configuration register 580 and the TLB 590 may be components of the digital signal processor 524. In the illustrative embodiment, the TLB 590 and the configuration register 580 may operate substantially as disclosed with respect to the TLB 118 and the configuration register 110 of FIG. 1.

FIG. 5 also shows that the display controller 526 is coupled to the digital signal processor 524 and to the display 528. The memory 532 is also coupled to the digital signal processor 524. In addition, an encoder/decoder (CODEC) 534 may be coupled to the digital signal processor 524. A speaker 536 and a microphone 538 may be coupled to the CODEC 534.

FIG. 5 also indicates that the wireless controller 540 may be coupled to the digital signal processor 524 and to the wireless antenna 542. In a particular embodiment, the input device 530 and the power source 544 are coupled to the system on chip 522. Moreover, in a particular embodiment, as illustrated in FIG. 5, the display 528, the input device 530, the speaker 536, the microphone 538, the wireless antenna 542, and the power source 544 are external to the system on chip 522. However, the display 528, the input device 530, the speaker 536, the microphone 538, the wireless antenna 542, and the power source 544 are each coupled to a component of the system on chip 522.

In a particular embodiment, the digital signal processor 524 utilizes interleaved multithreading to process instructions associated with program threads to perform the functionality and operations required by the various components of the portable communication device 520. For example, when a wireless communication session is established via the wireless antenna 542, the user may speak into the microphone 538. An electronic signal representing the user's voice can be sent to the CODEC 534 to be encoded. The digital signal processor 524 may perform data processing for the CODEC 534 to encode the electronic signal from the microphone. In addition, an incoming signal received via the wireless antenna 542 may be sent by the wireless controller 540 to the CODEC 534 to be decoded and sent to the speaker 536. The digital signal processor 524 may also perform data processing for the CODEC 534 when decoding a signal received via the wireless antenna 542.

Further, the digital signal processor 524 may process the input received from the input device 530 before, during, or after the wireless communication session.
For example, during a wireless communication session, a user may be surfing the Internet using the input device 530 and the display 528 via a web browser embedded in the memory 532 of the portable communication device 520. The digital signal processor 524 can interleave the various program threads used by the input device 530, the display controller 526, the display 528, the CODEC 534, and the wireless controller 540 to efficiently control the operation of the portable communication device 520 and the various components therein. Many of the instructions associated with the various program threads are executed simultaneously during one or more clock cycles. As a result, power and energy consumption due to wasted clock cycles are substantially reduced.

Those skilled in the art will further understand that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, PROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Accordingly, the invention is not intended to be limited to the embodiments shown herein, but is to be accorded the broadest possible scope consistent with the principles and novel features defined by the appended claims.
An apparatus, a method, and a computer program product for gesture recognition. The apparatus classifies a gesture (208) based on a movement of a body part as detected by a primary sensor (202). The apparatus determines a reliability level (214) of a secondary sensor (204) and obtains corroborating information associated with the movement of the body part using the secondary sensor (204) when the reliability level satisfies a criterion. The apparatus then confirms or negates the classification of the gesture (216) based on the corroborating information. The secondary sensor may be a sensor already known to the apparatus, i.e., the sensor is currently being worn by the user, or it may be a sensor that is worn by a user at a later time. In the latter case, the apparatus detects for the presence of a new sensor, determines the gesture recognition capabilities of the new sensor, and integrates the new sensor into the gesture recognition process.
WHAT IS CLAIMED IS:

1. A method of gesture recognition, comprising:
classifying a gesture based on a movement of a body part, the movement detected by a primary sensor;
determining a reliability level of a secondary sensor;
when the reliability level satisfies a criterion, obtaining corroborating information associated with the movement of the body part using the secondary sensor; and
confirming or negating the classification of the gesture based on the corroborating information.

2. The method of claim 1, wherein determining a reliability level of a secondary sensor comprises measuring an environmental condition associated with the secondary sensor.

3. The method of claim 2, wherein the environmental condition comprises one or more of a sound level or a light level.

4. The method of claim 1, wherein the corroborating information comprises one or more of an image of the body part or a sound emanating from the body part.

5. The method of claim 1, wherein the primary sensor and the secondary sensor are different types of sensors.

6. The method of claim 1, wherein confirming or negating the classification of the gesture comprises:
determining a corroborating gesture based on the corroborating information; and
confirming the classification of the gesture when the corroborating gesture matches the classification of the gesture.

7. The method of claim 6, wherein determining a corroborating gesture comprises comparing the corroborating information to corresponding information mapped to a library of gestures.

8. The method of claim 1, wherein classifying a gesture based on movement of a body part as detected by a primary sensor comprises:
sensing a motion activity resulting from the movement of the body part;
comparing the sensed motion activity to one or more corresponding stored motion activities mapped to a library of gestures; and
concluding the body part made the gesture when the sensed motion activity matches the stored motion activity mapped to the gesture.

9. The method of claim 8, further comprising updating the library of gestures when the classification of the gesture is negated based on the corroborating information.

10. The method of claim 9, wherein updating the library of gestures comprises remapping the stored motion activity corresponding to the sensed motion activity to a different gesture, wherein the different gesture corresponds to a corroborating gesture determined based on the corroborating information.

11. The method of claim 1, wherein the gesture has an associated confidence level, the method further comprising:
comparing the confidence level to a threshold; and
performing the determining, the obtaining and the confirming or negating only when the confidence level does not satisfy the threshold.

12. The method of claim 1, further comprising detecting for the presence of the secondary sensor prior to performing the determining, the obtaining and the confirming or negating.

13. The method of claim 1, further comprising:
monitoring data from the secondary sensor provided during the movement of the body part; and
assessing reliability of data provided by the primary sensor based on data provided by the secondary sensor.
14. An apparatus for gesture recognition, said apparatus comprising:
means for classifying a gesture based on a movement of a body part, the movement detected by a primary sensor;
means for determining a reliability level of a secondary sensor;
means for obtaining corroborating information associated with the movement of the body part using the secondary sensor when the reliability level satisfies a criterion; and
means for confirming or negating the classification of the gesture based on the corroborating information.

15. The apparatus of claim 14, wherein the means for determining a reliability level of a secondary sensor is configured to measure an environmental condition associated with the secondary sensor.

16. The apparatus of claim 15, wherein the environmental condition comprises one or more of a sound level or a light level.

17. The apparatus of claim 14, wherein the corroborating information comprises one or more of an image of the body part or a sound emanating from the body part.

18. The apparatus of claim 14, wherein the primary sensor and the secondary sensor are different types of sensors.

19. The apparatus of claim 14, wherein the means for confirming or negating the classification of the gesture is configured to:
determine a corroborating gesture based on the corroborating information; and
confirm the classification of the gesture when the corroborating gesture matches the classification of the gesture.

20. The apparatus of claim 19, wherein the means for confirming or negating the classification of the gesture determines a corroborating gesture by being further configured to compare the corroborating information to corresponding information mapped to a library of gestures in order to determine a corroborating gesture.

21. The apparatus of claim 14, wherein the means for classifying a gesture based on movement of a body part as detected by a primary sensor is configured to:
sense a motion activity resulting from the movement of the body part;
compare the sensed motion activity to one or more corresponding stored motion activities mapped to a library of gestures; and
conclude the body part made the gesture when the sensed motion activity matches the stored motion activity mapped to the gesture.

22. The apparatus of claim 21, wherein the means for classifying a gesture is configured to update the library of gestures when the classification of the gesture is negated based on the corroborating information.

23. The apparatus of claim 22, wherein the means for classifying a gesture updates the library of gestures by being further configured to remap the stored motion activity corresponding to the sensed motion activity to a different gesture, wherein the different gesture corresponds to a corroborating gesture determined based on the corroborating information.

24. The apparatus of claim 14, wherein the gesture has an associated confidence level, the apparatus further comprising:
means for comparing the confidence level to a threshold; and
means for performing the determining, the obtaining and the confirming or negating only when the confidence level does not satisfy the threshold.

25. The apparatus of claim 14, further comprising means for detecting for the presence of the secondary sensor prior to performing the determining, the obtaining and the confirming or negating.

26. The apparatus of claim 14, further comprising:
means for monitoring data from the secondary sensor provided during the movement of the body part; and
means for assessing reliability of data provided by the primary sensor based on data provided by the secondary sensor.
27. An apparatus for gesture recognition, said apparatus comprising:
a memory; and
a processing system coupled to the memory and configured to:
classify a gesture based on a movement of a body part, the movement detected by a primary sensor;
determine a reliability level of a secondary sensor;
obtain corroborating information associated with the movement of the body part using the secondary sensor when the reliability level satisfies a criterion; and
confirm or negate the classification of the gesture based on the corroborating information.

28. The apparatus of claim 27, wherein the processing system determines a reliability level of a secondary sensor by being configured to measure an environmental condition associated with the secondary sensor.

29. The apparatus of claim 28, wherein the environmental condition comprises one or more of a sound level or a light level.

30. The apparatus of claim 27, wherein the corroborating information comprises one or more of an image of the body part or a sound emanating from the body part.

31. The apparatus of claim 27, wherein the primary sensor and the secondary sensor are different types of sensors.

32. The apparatus of claim 27, wherein the processing system confirms or negates the classification of the gesture by being further configured to:
determine a corroborating gesture based on the corroborating information; and
confirm the classification of the gesture when the corroborating gesture matches the classification of the gesture.

33. The apparatus of claim 32, wherein the processing system determines a corroborating gesture by being configured to compare the corroborating information to corresponding information mapped to a library of gestures.

34. The apparatus of claim 27, wherein the processing system classifies a gesture based on movement of a body part as detected by a primary sensor by being configured to:
sense a motion activity resulting from the movement of the body part;
compare the sensed motion activity to one or more corresponding stored motion activities mapped to a library of gestures; and
conclude the body part made the gesture when the sensed motion activity matches the stored motion activity mapped to the gesture.

35. The apparatus of claim 34, wherein the processing system is further configured to update the library of gestures when the classification of the gesture is negated based on the corroborating information.

36. The apparatus of claim 35, wherein the processing system updates the library of gestures by being further configured to remap the stored motion activity corresponding to the sensed motion activity to a different gesture, wherein the different gesture corresponds to a corroborating gesture determined based on the corroborating information.

37. The apparatus of claim 27, wherein the gesture has an associated confidence level, the processing system being further configured to:
compare the confidence level to a threshold; and
perform the determining, the obtaining and the confirming or negating only when the confidence level does not satisfy the threshold.

38. The apparatus of claim 27, wherein the processing system is further configured to detect for the presence of the secondary sensor prior to performing the determining, the obtaining and the confirming or negating.

39. The apparatus of claim 27, wherein the processing system is further configured to:
monitor data from the secondary sensor provided during the movement of the body part; and
assess reliability of data provided by the primary sensor based on data provided by the secondary sensor.
40. A computer program product for gesture recognition, said product comprising:
a computer-readable medium comprising code for:
classifying a gesture based on a movement of a body part, the movement detected by a primary sensor;
determining a reliability level of a secondary sensor;
obtaining corroborating information associated with the movement of the body part using the secondary sensor when the reliability level satisfies a criterion; and
confirming or negating the classification of the gesture based on the corroborating information.
CLASSIFICATION OF GESTURE DETECTION SYSTEMS THROUGH USE OF KNOWN AND YET TO BE WORN SENSORS

CROSS-REFERENCE TO RELATED APPLICATION(S)

This application claims the priority of U.S. Non-Provisional Application Serial No. 14/042,660 entitled "CLASSIFICATION OF GESTURE DETECTION SYSTEMS THROUGH USE OF KNOWN AND YET TO BE WORN SENSORS" and filed on September 30, 2013, which is expressly incorporated by reference herein in its entirety.

BACKGROUND

Field

The present disclosure relates generally to gesture recognition, and more particularly to the classification of gesture detection systems through use of known and yet to be worn sensors.

Background

Systems and applications for implementing augmented reality (AR) have become very popular and widespread. AR systems typically include a head mounted display (HMD) that allows users to simultaneously see and interact with their surroundings while interacting with applications, such as e-mail and media players. Although many AR applications may be run on smartphones and tablets, the most natural form factor for implementing AR systems is an optical device, such as glasses.

Some AR systems provide for gesture activation of applications and selection of files and documents, wherein activation or selection occurs in response to different motions of hands or fingers present within the field of view of the AR glasses. Such methods, however, suffer from significant drawbacks with respect to gesture detection accuracy. For example, conventional systems that rely on a camera may track hand gestures with varying levels of accuracy due to poor lighting or slow frame rate. Accordingly, it is desirable to improve the accuracy of gesture detection and classification.

SUMMARY

An apparatus, a method, and a computer program product for gesture recognition are provided. An apparatus classifies a gesture based on a movement of a body part as detected by a primary sensor. The apparatus determines a reliability level of a secondary sensor and obtains corroborating information associated with the movement of the body part using the secondary sensor when the reliability level satisfies a criterion. The apparatus then confirms or negates the classification of the gesture based on the corroborating information. The secondary sensor may be a sensor already known to the apparatus, i.e., the sensor is currently being worn by the user, or it may be a sensor that is worn by a user at a later time. In the latter case, the apparatus detects for the presence of a new sensor, determines the gesture recognition capabilities of the new sensor and integrates the new sensor into the gesture recognition process.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram illustrating a gesture recognition apparatus and different types of sensors that may be used by the apparatus to classify gestures.

FIG. 2 is a flow diagram illustrating the operation of different modules/means/components in a gesture recognition apparatus.

FIG. 3 is an illustration of an AR system including a pair of AR glasses and a gesture recognition wristband.

FIG. 4 is a flow chart of a method of gesture recognition.

FIG. 5 is a diagram illustrating an example of a hardware implementation for a gesture recognition apparatus employing a processing system.

DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced.
The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

Several aspects of a gesture recognition apparatus will now be presented with reference to various apparatus and methods. These apparatus and methods will be described in the following detailed description and illustrated in the accompanying drawings by various blocks, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as "elements"). These elements may be implemented using electronic hardware, computer software, or any combination thereof. Whether such elements are implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.

By way of example, an element, or any portion of an element, or any combination of elements may be implemented with a "processing system" that includes one or more processors. Examples of processors include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. One or more processors in the processing system may execute software. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.

Accordingly, in one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

FIG. 1 is a diagram illustrating a gesture recognition apparatus 100 along with various different sensors that may be used by the apparatus to classify gestures. The sensors may include, for example: a visual sensor 102, such as a camera; a light sensor 104; a sound sensor 106, such as a microphone; a motion sensor 108; a temperature sensor 110; and an electromyography (EMG) sensor 112.
The foregoing sensors are a representative sample of sensor types that may be employed by the gesture classification apparatus. Other types of sensors may be employed. The sensors may provide sensor signals and otherwise communicate with the gesture recognition apparatus through wired 114 or wireless 116 connections, such as Bluetooth.

The gesture recognition apparatus 100 is dynamic in that it may employ different sensors at different times, depending on sensor availability. To this end, the gesture recognition apparatus 100 is configured to detect for the presence of new and existing sensors. For example, a user of the gesture recognition apparatus 100 may not be wearing a motion sensor at the time of initial use of the apparatus, but may at a later time begin to wear one. Accordingly, the apparatus 100 periodically scans for sensors. Such scanning may occur through Bluetooth, WiFi, WiFi Direct, and cellular connections. If a prior existing sensor is no longer available, or a new sensor becomes available, the gesture recognition apparatus 100 adjusts its operation accordingly. Such adjustments are described further below.

FIG. 2 is a diagram illustrating a gesture recognition apparatus 200 receiving signals from three different sensors, including a primary sensor 202, a secondary sensor 204 and a reliability sensor 206. The gesture recognition apparatus 200 is configured to output gesture classifications corresponding to movements of a body part. The gesture recognition apparatus 200 processes gesture events, including for example, body motion, body images, and body sounds, sensed by one or more sensors, in a manner that improves overall accuracy of the gesture classifications outputted by the apparatus 200.

Each of the sensors 202, 204, 206 may be co-located with the gesture recognition apparatus 200 or may be located separate from the apparatus. For example, the gesture recognition apparatus 200 may be included in a pair of AR glasses worn by a user. The AR glasses may include a camera that may function as the secondary sensor 204, and a light sensor that may function as the reliability sensor 206. The user of the AR glasses may wear an additional apparatus that may include the primary sensor 202. For example, the additional apparatus may be a wristband worn by the user.

The gesture recognition apparatus 200 includes a primary gesture classifier 208, a secondary gesture classifier 210, a confidence module 212, a reliability module 214 and a gesture confirmation module 216. In one configuration, the primary sensor 202 is configured to sense body movement and output a corresponding movement signal to the primary gesture classifier 208. The primary sensor 202 may be an EMG sensor or a pressure sensor that detects body movement.

The primary gesture classifier 208 processes the movement signal and classifies the body movement as one of a number of gestures supported by the gesture recognition apparatus 200. For example, the primary gesture classifier 208 may include a look up table of gestures, whereby particular gestures are mapped to particular characteristics of movement signals. In this case, the primary gesture classifier 208 processes the received movement signals, extracts signal characteristics, e.g., frequency, amplitude, shape of curve, slope, minimum, maximum, hysteresis, mean, median, standard deviation, variance, or acceleration, and looks for matching signal characteristics in the look up table.
The gesture mapped to the matching signal characteristics is determined by the primary gesture classifier 208 to be the gesture detected by the primary sensor. Accordingly, the primary gesture classifier 208 outputs an indication of the detected gesture, referred to in FIG. 2 as a "classified gesture."

The confidence module 212 receives the indication of the classified gesture from the primary gesture classifier 208 and determines the confidence level of this gesture based on a look up table. The confidence look up table includes a listing of gestures that may be classified based on movement signals obtained from the primary sensor 202 and corresponding measures of confidence that the classified gesture provided by the primary gesture classifier 208 in response to the movement signal is accurate. For example, a particular sensor, such as an EMG sensor, may be more accurate at detecting a finger snap than at detecting a hand wave. Accordingly, the look up table for that sensor would have a higher level of confidence when the classified gesture is a finger snap than when the classified gesture is a hand wave. The confidence look up table may be based on available information for the sensor type corresponding to the primary sensor 202 or it may be based on past errors in classified gesture determinations made by the primary gesture classifier 208 in response to movement signals from the primary sensor 202.

Upon determination of the confidence level corresponding to the classified gesture, the confidence module 212 compares the confidence level to a threshold. If the threshold is satisfied, the gesture recognition apparatus 200 outputs the classified gesture as a confirmed gesture. If the threshold is not satisfied, a gesture confirmation process is initiated, in which case the classified gesture output by the primary gesture classifier 208 is provided to the gesture confirmation module 216 for further processing, as described below.

Regarding confidence levels and thresholds, in one implementation, these measures are represented by percentages. For example, the threshold may be programmed to 98%. In this case, the confidence module 212 compares the confidence level percentage of the classified gesture to the threshold percentage. If the confidence level satisfies the threshold, e.g., exceeds, equals or exceeds, etc., then the classified gesture is output by the gesture recognition apparatus 200 as the confirmed gesture. If the confidence level is below the threshold, then the gesture confirmation process is initiated. Through programming it is possible to bypass the gesture confirmation process. For example, in one configuration, the gesture confidence level of all gestures may be set to 100%. Alternatively, the threshold may be set to zero so that the threshold is always satisfied.

In cases where the gesture confirmation process is initiated, the gesture recognition apparatus 200 may activate the secondary sensor 204. The secondary sensor 204 may be any sensor that captures a corroborating event related to the body movement sensed by the primary sensor 202. For example, the secondary sensor 204 may be a camera configured to capture an image of the body movement, or a microphone configured to capture sound associated with the body movement.

In an alternative configuration, the gesture recognition apparatus 200 may first determine that information provided by the secondary sensor 204 is reliable.
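A minimal C sketch of the confidence gate just described; the gesture set, the per-gesture confidence values, and the 98% threshold are illustrative stand-ins, not values taken from any particular embodiment.

    #include <stdbool.h>

    enum gesture { GESTURE_SNAP, GESTURE_WAVE, GESTURE_PINCH, GESTURE_COUNT };

    /* Per-sensor-type confidence that a classified gesture is accurate,
     * e.g., an EMG sensor detects snaps better than hand waves. The
     * values are illustrative. */
    static const double emg_confidence[GESTURE_COUNT] = {
        [GESTURE_SNAP]  = 0.95,
        [GESTURE_WAVE]  = 0.40,
        [GESTURE_PINCH] = 0.85,
    };

    #define CONFIDENCE_THRESHOLD 0.98  /* e.g., programmed to 98% */

    /* Returns true when the classified gesture may be output directly as
     * the confirmed gesture; false triggers the confirmation process. */
    bool confidence_satisfied(enum gesture g)
    {
        return emg_confidence[g] >= CONFIDENCE_THRESHOLD;
    }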
In that alternative configuration, the gesture recognition apparatus 200 activates the reliability sensor 206 and delays activation of the secondary sensor 204 until reliability of the secondary sensor is confirmed.

The secondary sensor reliability module 214 evaluates whether the data provided by the secondary sensor 204 is reliable based on input received from the reliability sensor 206. For example, if the secondary sensor 204 is a camera, the reliability sensor 206 may be a light detector. In this case, the reliability module 214 evaluates how much light is in the room. If the light level satisfies a threshold, such as a specific number of lumens, then the reliability module 214 concludes that the data from the secondary sensor is reliable. As another example, if the secondary sensor 204 is a sound detector, the reliability sensor 206 may be a microphone. In this case, the reliability module 214 evaluates the sound level in the vicinity of the secondary sensor 204. If the sound level satisfies a threshold indicative of an environment that is not too loud or noisy, such as below a specific number of decibels, then the reliability module 214 concludes the secondary sensor 204 is an effective sensor and the data from the secondary sensor is reliable.

Upon determining that the secondary sensor 204 is reliable, the gesture recognition apparatus 200 turns on the secondary sensor 204. Corroborating information is captured by the secondary sensor 204 and provided to the secondary gesture classifier 210. The secondary gesture classifier 210 processes the corroborating information and classifies the body movement associated with the corroborating information as one of a number of gestures supported by the secondary gesture classifier 210. For example, the secondary gesture classifier 210 may include a look up table of gestures, whereby particular gestures are mapped to particular characteristics of images captured by a camera or sounds captured by a sound detector. In this case, the secondary gesture classifier 210 processes the received corroborating information, extracts appropriate characteristics, e.g., presence of edges, intensity histograms, color gradients, etc., in the case of an image, and fundamental frequency, crossing rate, rolloff, spectrum smoothness and spread, etc., in the case of sound, and looks for matching characteristics in the gesture look up table. The gesture mapped to the matching characteristics is determined by the secondary gesture classifier 210 to be the gesture detected by the secondary sensor. Accordingly, the secondary gesture classifier 210 outputs an indication of the detected gesture, referred to in FIG. 2 as a "corroborating gesture."

The secondary gesture classifier 210 provides the corroborating gesture to the gesture confirmation module 216. The classified gesture provided by the primary gesture classifier 208, through the confidence module 212, and the corroborating gesture provided by the secondary gesture classifier 210 are processed to determine a confirmed gesture. In one configuration, the confirmation module 216 compares the classified gesture and the corroborating gesture to determine if they are the same gesture. If the two gestures match, then the matching gesture is output as the confirmed gesture.
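A C sketch of the reliability checks described above; the 500 lumen figure follows the office example given later in this description, while the decibel ceiling and the function names are assumptions.

    #include <stdbool.h>

    #define MIN_LIGHT_LUMENS 500.0  /* camera usable, per the office example */
    #define MAX_SOUND_DB      70.0  /* illustrative "not too loud" ceiling */

    enum secondary_kind { SECONDARY_CAMERA, SECONDARY_MICROPHONE };

    /* Assumed reads from the reliability sensor 206. */
    extern double read_light_level_lumens(void);
    extern double read_ambient_sound_db(void);

    /* Module 214: decide whether the secondary sensor's data would be
     * reliable before spending power on capturing and classifying it. */
    bool secondary_sensor_reliable(enum secondary_kind kind)
    {
        switch (kind) {
        case SECONDARY_CAMERA:
            return read_light_level_lumens() >= MIN_LIGHT_LUMENS;
        case SECONDARY_MICROPHONE:
            return read_ambient_sound_db() <= MAX_SOUND_DB;
        }
        return false;
    }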
If the two gestures do not match, then a gesture determination error is output by the gesture detection apparatus 200.

The confirmation module 216 may also output a confirmed gesture based on the respective confidence levels of the primary gesture classifier 208, the secondary gesture classifier 210, and any additional sensors and corresponding gesture classifiers that may be added to the system. For example, in the case of a primary gesture classifier paired with a primary sensor in the form of an EMG sensor wristband, a secondary gesture classifier paired with a second sensor in the form of a microphone, and a third gesture classifier paired with a Fitbit that senses motion of the hips, the system considers all gesture classifications it receives and may make a confirmed gesture decision based on which gesture classification has the highest confidence level. For example, if the primary gesture classifier is 90% confident the gesture is a snap, the secondary gesture classifier is 60% confident the gesture is a snap, and the third gesture classifier is 10% confident the gesture is a hand wave hello, the system outputs a snap as the confirmed gesture. In another configuration, perhaps when all confidence levels are substantially the same, the system may determine a confirmed gesture based on majority rule.

In one example implementation, the subject gesture is a pinch formed by the thumb and index finger of a user and the primary sensor 202 is an EMG sensor. Based on a movement signal provided by the EMG sensor, the primary gesture classifier 208 determines the gesture is a pinch. The confidence level of this gesture does not satisfy a confidence threshold. Accordingly, the gesture recognition apparatus 200 activates the secondary sensor 204, which is a camera, to confirm the gesture detection. Based on information received from a reliability sensor 206, which may be a light detector included in the camera, the reliability module 214 determines if the room is well lit based on, for example, a threshold number of lumens, such as 500 lumens for a typical office. If the room is well lit, the camera 204 takes a picture and the secondary gesture classifier 210 processes the picture to determine if the picture evidences an index finger and thumb brought together to create a circle indicative of a pinch. Processing the picture may involve skin tone detection to look for skin, which serves as an indication that the user brought his hand in front of the camera to make the gesture, or a Hough line transform, which looks for lines, i.e., the edges of fingers. If the secondary gesture classifier 210 determines a pinch, then the gesture confirmation module 216 outputs a pinch as a confirmed gesture based on a match between the classified gesture and the corroborating gesture.

In another example implementation, the gesture is a pair of sequential finger snaps and the primary sensor 202 is a pressure-sensitive band on the back of the palm. Based on a movement signal provided by the pressure-sensitive band, the primary gesture classifier 208 determines the gesture is a snap, e.g., the first snap in the pair of snaps. The confidence level of this gesture does not satisfy a confidence threshold. Accordingly, the gesture recognition apparatus 200 initiates a gesture confirmation process. In this process, the apparatus 200 activates the reliability sensor 206, which is a microphone, prior to activating the secondary sensor.
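Pausing the snap example for a moment, the two confirmation policies described above (exact match between the classified and corroborating gestures, and highest confidence across several classifiers) might look as follows in C; the sketch reuses the enum gesture type from the earlier confidence sketch, and all names are illustrative.

    #include <stdbool.h>

    struct classification {
        enum gesture g;
        double       confidence;  /* 0.0 .. 1.0 */
    };

    /* Strict configuration: confirm only when the classified and the
     * corroborating gestures match; a mismatch is a determination error. */
    bool confirm_by_match(struct classification primary,
                          struct classification corroborating,
                          enum gesture *confirmed)
    {
        if (primary.g != corroborating.g)
            return false;         /* gesture determination error */
        *confirmed = primary.g;
        return true;
    }

    /* Alternative configuration: with several classifiers, output the
     * classification with the highest confidence; 90% snap beats 60%
     * snap and 10% hand wave in the example above. Assumes n >= 1. */
    enum gesture confirm_by_confidence(const struct classification *c, int n)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (c[i].confidence > c[best].confidence)
                best = i;
        return c[best].g;
    }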
Continuing the snap example, a signal corresponding to sound captured by the microphone in the vicinity of where the first snap occurred is provided to the reliability module 214. The reliability module 214 processes the sound signal to determine if the vicinity wherein the first snap occurred is too loud, e.g., the sound level is above a threshold decibel level. If the room is determined not to be too loud, the gesture recognition apparatus 200 turns on the secondary sensor 204, which is a microphone. The secondary sensor microphone 204 captures the sound in the vicinity of the gesture and the secondary gesture classifier 210 processes the sound (e.g., DSP processing) to determine if the sound evidences a finger snap, e.g., the second snap. If the secondary gesture classifier 210 determines a snap, then the gesture confirmation module 216 outputs a snap as a confirmed gesture based on a match between the classified gesture and the corroborating gesture.

In the foregoing example, the secondary sensor 204 and the reliability sensor 206 are both microphones and may be the same microphone. The distinction between the sensors 204, 206 in this case is the level of sophistication in the processing of the sounds performed by the reliability module 214 and the secondary gesture classifier 210, respectively. In the case of the reliability module 214, the sound processing relates to loudness, which involves determining a decibel level. In the case of the secondary gesture classifier 210, the sound processing relates to signal characteristics related to gestures, such as obtaining the frequency domain of the sound via FFT and comparing the primary frequency to the primary frequency of snaps of the user recorded during training, which involves more complex digital signal processing. Accordingly, a delay in activation of complex digital signal processing until a sound level criterion is met may result in system efficiencies in terms of power and processing consumption.

The gesture recognition apparatus 200 described with reference to FIG. 2 includes a primary sensor, a secondary sensor and a reliability sensor. These sensors may be considered in terms of functionality, in combination with other components of the apparatus 200, and may not necessarily correspond to different physical sensors. For example, in the case of sound sensing, the same microphone may be used to capture the sound processed by the reliability module and the sound processed by the secondary gesture classifier. In this case, the reliability sensor and the secondary sensor are the same component. In other cases, each of the sensors may be based on a different underlying technology, e.g., EMG, image, sound, pressure, motion, etc.

A gesture recognition apparatus 200 may have more than three sensors, each of which may be based on a different underlying technology. The apparatus 200 may have flexibility in designating different sensors as the primary sensor, secondary sensor and reliability sensor. In one configuration, sensors may be designated to achieve an order of operation whereby sensors with lower power draw are used first. For example, an EMG sensor draws far less current than a camera. Accordingly, the EMG sensor may be designated the primary sensor that is used to determine a classified gesture, while the camera is designated the secondary sensor and is used only when a corroborating gesture is needed.

The data and results from the secondary gesture classifier 210 may help improve the performance of the primary gesture classifier 208 through re-training.
For example, if a primary sensor, e.g., an EMG sensor, provides a movement signal upon which the primary gesture classifier 208 determines the gesture is a pinch or a snap, but the secondary sensor 204, e.g., a camera, provides corroborating information upon which the secondary gesture classifier 210 determines the gesture is a pinch, then it may be beneficial to update or retrain the primary gesture classifier. To this end, the EMG data captured at that time, i.e., the movement signal, is fed back into the primary gesture classifier 208 and the gesture look up table of the primary gesture classifier mapping movement signal characteristics to gestures is updated so that EMG signals that used to result in detections of pinches or snaps by the primary gesture classifier are now detected more accurately as pinches only.

The gesture recognition apparatus 200 may be configured to pair with sensors that become available at a later time. For example, if a user of the gesture recognition apparatus 200 begins to wear a pedometer, the system may benefit from information available from the pedometer. The apparatus 200 may detect the presence of the new sensor and determine its capabilities. Based on these capabilities the system may implement additional features of the gesture determination process. In the case of the pedometer, upon recognition of the pedometer, the gesture recognition apparatus 200 may implement a feature whereby the pedometer functions as a reliability sensor 206 with respect to an EMG primary sensor 202. When the pedometer indicates to the reliability module 214 that the user is walking, the reliability module causes the apparatus to ignore the output of any gesture classifiers that are derived from signals provided by the EMG sensor. This is based on the premise that walking results in noisy EMG signals due to unstable electrical ground, which in turn results in inaccurate EMG based gesture determinations.

In an example scenario of adding new sensor capability, the gesture recognition apparatus 200 periodically looks for new devices using well-known technology, such as AllJoyn. When a new device is turned on, the apparatus 200 recognizes the device and connects to the device, for example, through some open standard. Once connected, the apparatus 200 queries the device for available data from sensors included in the device. As the user of the new device does various gestures, the apparatus 200 looks for patterns in the data stream from these new sensors.

If the classifier components 208, 210 of the apparatus 200 find relationships between data provided by the new sensor and corresponding gestures, such correspondences will be added to the gesture look up table of the classifier so the new sensor may be used as a sensor in the gesture classification apparatus 200. For example, in the case of a device that is strapped to the user's arm and provides data corresponding to motion, the classifier may find that every time the user does a snap, the device provides a movement signal that is repeatable and consistent. Likewise, if the classifier components 208, 210 of the apparatus 200 do not find relationships between data provided by the new sensor and corresponding gestures, the system will ignore data from the new sensor. For example, if the new sensor is a temperature sensor that reports temperature, but the apparatus 200 determines that no temperature change is associated with a snap gesture, then temperature data will be ignored by the apparatus.
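A toy C sketch of this retraining step: a nearest-neighbour match over two illustrative signal features stands in for the classifier's look up table search, and the matched entry is remapped to the corroborated gesture. The feature set and matching rule are assumptions; a real classifier would use the richer characteristics listed earlier.

    #include <math.h>

    struct lut_entry {
        double       freq_hz;     /* illustrative signal features */
        double       amplitude;
        enum gesture g;           /* reuses the earlier enum gesture */
    };

    /* Find the stored signature closest to the sensed one. Assumes n >= 1. */
    static struct lut_entry *match_signature(struct lut_entry *lut, int n,
                                             double freq_hz, double amplitude)
    {
        struct lut_entry *best = &lut[0];
        double best_d = INFINITY;
        for (int i = 0; i < n; i++) {
            double d = fabs(lut[i].freq_hz - freq_hz) +
                       fabs(lut[i].amplitude - amplitude);
            if (d < best_d) { best_d = d; best = &lut[i]; }
        }
        return best;
    }

    /* Retraining: when the corroborating classifier negates the primary
     * result, remap the matched signature to the corroborated gesture. */
    void retrain(struct lut_entry *lut, int n, double freq_hz,
                 double amplitude, enum gesture corroborated)
    {
        match_signature(lut, n, freq_hz, amplitude)->g = corroborated;
    }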
FIG. 3 is an illustration of an AR system 300 that includes an AR device 302 in the form of a pair of AR glasses, and a gesture sensor device 304 in the form of a wristband. The AR glasses 302 may be configured to project content through its lenses using methods known in the art. For example, the AR glasses 302 may be configured to project application content through its lenses, such as e-mails, documents, web pages, or media content such as video games, movies or electronic books. Other types of AR devices 302 may include smartphones, tablets, laptops, etc.

The AR glasses 302 include a communications device 306 for communicating with the gesture sensor device 304. The communications device 306 may be, for example, a Bluetooth device. The AR glasses 302 further include a processor 318 for processing signals received from the gesture sensor device 304. The processor 318 may include one or more of the components of the gesture recognition apparatus 200 shown in FIG. 2.

The gesture sensor device 304 is configured to be associated with a body part and may take any form conducive to providing such association. For example, if the body part is a hand or finger, the gesture sensor device may be configured as a wristband 304. The gesture sensor device 304 may include one or more sensors of the gesture recognition apparatus shown in FIG. 2. For example, in one configuration, the gesture sensor device 304 includes the primary sensor 202 in the form of a pair of electrodes 308, 310 that provide EMG sensing capability. The electrodes 308, 310 are preferably positioned on the wristband such that when the user is wearing the wristband 304 the electrodes are located so as to sense electrical activity resulting from muscular movement of the wrist. The electrodes 308, 310, in combination with an EMG sensing element (not shown), function as an EMG sensor that provides signals indicative of movement of a body part. EMG sensing capability is based on well-known technology.

The gesture sensor device 304 may include other types of sensors, such as a motion sensor 312 or a pressure sensor 314, which may function as the primary sensor. The motion sensor 312 may be positioned anywhere on the wristband and provides signals indicative of movement of the body part. The indications provided may be one of general overall movement of the body part or finer movement of the body part corresponding to a gesture. The motion sensor 312 may be, for example, an accelerometer, gyroscope, or magnetometer. The gesture sensor device 304 also includes a communication device 316 for communicating with the AR glasses 302.

FIG. 4 is a flow chart of a method of gesture determination. The process is directed toward determining gestures with improved accuracy through use of primary and secondary sensors. The process may be performed by the gesture recognition apparatus 200 of FIG. 2.

At step 402, the apparatus classifies a gesture based on a movement of a body part. The movement of the body part is detected by a primary sensor. More specifically, classifying a gesture may include sensing a motion activity resulting from the movement of the body part, comparing the sensed motion activity to one or more corresponding stored motion activities mapped to a library of gestures, and concluding the body part made the gesture when the sensed motion activity matches the stored motion activity mapped to the gesture.

At step 404, the apparatus may determine the level of confidence that the classified gesture is accurate.
In this regard, the classified gesture has an associated confidence level. The confidence level is based on the primary sensor type. In other words, because different sensors may detect certain gestures better than other sensors, each type of sensor, e.g., motion, EMG, pressure, etc., has an associated look-up table that maps gestures to confidence levels. Once the confidence level corresponding to the sensor type and gesture is determined, at step 406 the apparatus compares the confidence level to a threshold. If the threshold is satisfied, then at step 408 the apparatus outputs the classified gesture as a confirmed gesture.

If the threshold is not satisfied, then at step 410 the apparatus initiates a gesture recognition confirmation process, wherein the apparatus determines a reliability level of an available secondary sensor. Determining a reliability level of a secondary sensor may include measuring an environmental condition associated with the secondary sensor. For example, the environmental condition may be one or more of a sound level or a light level. In one implementation, prior to initiating the gesture recognition confirmation process, the apparatus detects for the presence of a secondary sensor. Once the reliability level is determined, at step 412 the apparatus compares the reliability level to a criterion. For example, in the case of a secondary sensor that is a camera, the reliability level may be a measure of light as measured by a reliability sensor, and the criterion may be a threshold light level, such as 500 lumens, that the measured light level must satisfy, e.g., match or exceed. If the criterion is not satisfied, then at step 414 the apparatus outputs a gesture detection error.

If the criterion is satisfied, then at step 416 the apparatus determines a corroborating gesture based on the secondary sensor. More specifically, the apparatus obtains corroborating information associated with the movement of the body part using the secondary sensor and determines a corroborating gesture based on the corroborating information. The corroborating information may be one or more of an image of the body part or a sound emanating from the body part. Determining a corroborating gesture includes comparing the corroborating information to corresponding information mapped to a library of gestures.

At step 418, the apparatus confirms or negates the classification of the gesture based on the corroborating information. More specifically, at step 420, the apparatus compares the classified gesture and the corroborating gesture. At step 422, if the gestures match, then at step 424 the apparatus outputs a confirmed gesture corresponding to the classified gesture and the corroborating gesture. If the gestures do not match, then at step 426 the apparatus outputs a gesture detection error. Alternatively, the apparatus may process attributes of the classified gesture and one or more corroborating gestures and output a confirmed gesture accordingly. For example, each of the classified gesture and the one or more corroborating gestures may have an associated confidence level. In this case, the apparatus compares the respective confidence levels and outputs the gesture with the highest confidence as the confirmed gesture.
As another example, if the respective confidence levels are substantially the same, e.g., within a certain percentage of each other, such as 10%, the apparatus may output a confirmed gesture based on a majority rule, with the gesture indicated most often being output as the confirmed gesture. A code sketch of this confirmation flow is provided below.

FIG. 5 is a diagram illustrating an example of a hardware implementation for a gesture recognition apparatus 100' employing a processing system 520. The apparatus includes a gesture classification module 504 that classifies a gesture based on a movement of a body part as detected by a primary sensor, a reliability module 506 that determines a reliability level of a secondary sensor, a corroboration module 508 that obtains corroborating information associated with the movement of the body part using the secondary sensor when the reliability level satisfies a criterion, and a confirmation/negation module 510 that confirms or negates the classification of the gesture based on the corroborating information.

The apparatus 100' may include additional modules that perform each of the steps of the algorithm in the aforementioned flow chart of FIG. 4. As such, each step in the aforementioned flow chart of FIG. 4 may be performed by a module, and the apparatus may include one or more of those modules. The modules may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by a processor configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by a processor, or some combination thereof.

The processing system 520 may be implemented with a bus architecture, represented generally by the bus 524. The bus 524 may include any number of interconnecting buses and bridges depending on the specific application of the processing system 520 and the overall design constraints. The bus 524 links together various circuits including one or more processors and/or hardware modules, represented by the processor 522, the modules 504, 506, 508, 510, and the computer-readable medium / memory 526. The bus 524 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art and, therefore, will not be described any further.

The processing system 520 includes a processor 522 coupled to a computer-readable medium / memory 526. The processor 522 is responsible for general processing, including the execution of software stored on the computer-readable medium / memory 526. The software, when executed by the processor 522, causes the processing system 520 to perform the various functions described supra for any particular apparatus. The computer-readable medium / memory 526 may also be used for storing data that is manipulated by the processor 522 when executing software. The processing system further includes at least one of the modules 504, 506, 508 and 510. The modules may be software modules running in the processor 522, resident/stored in the computer-readable medium / memory 526, one or more hardware modules coupled to the processor 522, or some combination thereof.
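Referring again to the flow of FIG. 4, the following C fragment is a minimal sketch of the confirmation logic: classify, compare the confidence level to a threshold, check secondary-sensor reliability against a criterion, then confirm or negate by comparing the classified and corroborating gestures. The function names and threshold values are hypothetical illustrations; the sketch assumes a single camera secondary sensor and omits the majority-rule variant.

```c
#include <stdbool.h>

typedef enum { GESTURE_NONE, GESTURE_PINCH, GESTURE_SNAP } gesture_t;
typedef enum { RESULT_CONFIRMED, RESULT_ERROR } result_t;

#define CONFIDENCE_THRESHOLD 0.80   /* hypothetical per-sensor threshold */
#define MIN_LIGHT_LUMENS     500.0  /* example criterion for a camera    */

/* Hypothetical classifier/sensor hooks, assumed supplied elsewhere. */
extern gesture_t classify_primary(double *confidence);   /* steps 402/404 */
extern double    measure_light_level(void);              /* step 410      */
extern gesture_t classify_corroborating(void);           /* step 416      */

static result_t determine_gesture(gesture_t *out)
{
    double confidence;
    gesture_t classified = classify_primary(&confidence);

    /* Steps 406/408: high confidence confirms the gesture directly. */
    if (confidence >= CONFIDENCE_THRESHOLD) {
        *out = classified;
        return RESULT_CONFIRMED;
    }

    /* Steps 410/412/414: check the secondary sensor's reliability. */
    if (measure_light_level() < MIN_LIGHT_LUMENS)
        return RESULT_ERROR;                 /* gesture detection error */

    /* Steps 416-426: corroborate, then confirm or negate. */
    gesture_t corroborating = classify_corroborating();
    if (classified == corroborating) {
        *out = classified;
        return RESULT_CONFIRMED;
    }
    return RESULT_ERROR;
}
```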
In one configuration, the apparatus 100/100' for gesture recognition includes means for classifying a gesture based on a movement of a body part detected by a primary sensor, means for determining a reliability level of a secondary sensor, means for obtaining corroborating information associated with the movement of the body part using the secondary sensor when the reliability level satisfies a criterion, and means for confirming or negating the classification of the gesture based on the corroborating information.

In some configurations, the gesture has an associated confidence level, in which case the apparatus 100/100' for gesture recognition may further include means for comparing the confidence level to a threshold, and means for performing the determining, the obtaining, and the confirming or negating only when the confidence level does not satisfy the threshold. The apparatus 100/100' for gesture recognition may also include means for detecting for the presence of the secondary sensor prior to performing the determining, the obtaining, and the confirming or negating, means for monitoring data from the secondary sensor provided during the movement of the body part, and means for assessing reliability of data provided by the primary sensor based on data provided by the secondary sensor. The aforementioned means may be one or more of the aforementioned modules of the apparatus 100 and/or the processing system 520 of the apparatus 100' configured to perform the functions recited by the aforementioned means.

It is understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Further, some steps may be combined or omitted. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term "some" refers to one or more. Combinations such as "at least one of A, B, or C," "at least one of A, B, and C," and "A, B, C, or any combination thereof" include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as "at least one of A, B, or C," "at least one of A, B, and C," and "A, B, C, or any combination thereof" may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed as a means plus function unless the element is expressly recited using the phrase "means for."
A memory controller circuit (100) is disclosed which is coupleable to a first memory circuit (125), such as DRAM, and includes: a first memory control circuit (155) to read from or write to the first memory circuit; a second memory circuit (175), such as SRAM; a second memory control circuit (160) adapted to read from the second memory circuit in response to a read request when the requested data is stored in the second memory circuit, and otherwise to transfer the read request to the first memory control circuit; predetermined atomic operations circuitry (185); and programmable atomic operations circuitry (135) adapted to perform at least one programmable atomic operation. The second memory control circuit also transfers a received programmable atomic operation request to the programmable atomic operations circuitry and sets a hazard bit for a cache line of the second memory circuit.
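As a minimal sketch of the request routing described above, the following C fragment serves a read from the second (SRAM) memory circuit on a hit, forwards it to the first (DRAM) memory control circuit on a miss, and sets a per-cache-line hazard bit before handing a programmable atomic request to the programmable atomic unit. All type and function names are hypothetical illustrations of the described behavior, not an actual implementation.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_CACHE_LINES 1024u

/* Hypothetical model of the controller's hazard tracking. */
typedef struct {
    bool hazard[NUM_CACHE_LINES];   /* one hazard bit per SRAM cache line */
} memory_controller_t;

extern bool     sram_lookup(uint64_t addr, uint64_t *data); /* hit test + read  */
extern uint64_t dram_read(uint64_t addr);                   /* first memory path */
extern void     run_programmable_atomic(uint64_t addr, uint32_t op_id);

static inline unsigned cache_line_of(uint64_t addr) {
    return (unsigned)((addr >> 6) % NUM_CACHE_LINES);  /* 64-byte lines assumed */
}

/* Read request: serve from SRAM when present, otherwise forward to DRAM. */
uint64_t handle_read(memory_controller_t *mc, uint64_t addr)
{
    (void)mc;
    uint64_t data;
    if (sram_lookup(addr, &data))
        return data;             /* second memory circuit hit    */
    return dram_read(addr);      /* transfer to first controller */
}

/* Programmable atomic request: set the hazard bit for the target cache
 * line so no other request touches it, run the operation, then clear. */
void handle_programmable_atomic(memory_controller_t *mc,
                                uint64_t addr, uint32_t op_id)
{
    unsigned line = cache_line_of(addr);
    mc->hazard[line] = true;             /* set hazard bit         */
    run_programmable_atomic(addr, op_id);
    mc->hazard[line] = false;            /* reset after completion */
}
```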
1. A memory controller circuit coupleable to a first memory circuit, the memory controller circuit comprising:
a first memory control circuit coupleable to the first memory circuit, the first memory control circuit adapted to read or load requested data from the first memory circuit directly in response to a read request and directly in response to an atomic operation request, and to write or store data to the first memory circuit in response to a write request;
a second memory circuit;
predetermined atomic operation circuitry adapted to perform at least one predetermined atomic operation of a plurality of predetermined atomic operations in response to an atomic operation request specifying the at least one predetermined atomic operation;
programmable atomic operation circuitry adapted to perform at least one programmable atomic operation of a plurality of programmable atomic operations in response to an atomic operation request specifying the at least one programmable atomic operation; and
a second memory control circuit coupled to the second memory circuit, the second memory control circuit adapted, when the requested data is stored in the second memory circuit, to read or load the requested data from the second memory circuit directly in response to a read request and directly in response to an atomic operation request and, when the requested data is not stored in the second memory circuit, to transfer the read request or the atomic operation request to the first memory control circuit, the second memory control circuit further adapted, in response to an atomic operation request specifying the at least one predetermined atomic operation and a memory address, to read data and transmit the atomic operation request specifying the at least one predetermined atomic operation to the predetermined atomic operation circuitry, to set a hazard bit stored in a memory hazard register corresponding to the memory address, to write resulting data from the predetermined atomic operation to the second memory circuit directly in response to the atomic operation request specifying the at least one predetermined atomic operation and, after writing the resulting data, to reset or clear the set hazard bit.
2. The memory controller circuit of claim 1, wherein the plurality of predetermined atomic operations include at least two predetermined atomic operations selected from the group consisting of: fetch-and-AND, fetch-and-OR, fetch-and-XOR, fetch-and-add, fetch-and-subtract, fetch-and-increment, fetch-and-decrement, fetch-and-minimum, fetch-and-maximum, fetch-and-swap, compare-and-swap, and combinations thereof.
3. The memory controller circuit of claim 1, wherein the programmable atomic operation circuitry includes:
an instruction cache storing a plurality of processor instructions corresponding to the at least one programmable atomic operation;
an execution queue storing a thread identifier corresponding to the programmable atomic operation;
core control circuitry coupled to the instruction cache and to the execution queue, the core control circuitry adapted to select, in response to the thread identifier corresponding to the programmable atomic operation, a starting or next instruction or instruction address in the instruction cache for performing the programmable atomic operation; and
a processor core adapted to execute at least one instruction for the programmable atomic operation and produce resulting data.
4.
The memory controller circuit of claim 3, wherein the programmable atomic operation circuitry further comprises:
a memory controller interface circuit coupled to the processor core, the memory controller interface circuit receiving the resulting data and communicating the resulting data to the second memory control circuit for writing the resulting data to the second memory circuit.
5. The memory controller circuit of claim 4, further comprising:
a network communication interface coupleable to a communication network and coupled to the memory controller interface circuit, the network communication interface adapted to prepare and transmit a response data packet having the resulting data over the communication network.
6. The memory controller circuit of claim 3, wherein the programmable atomic operation circuitry further comprises:
at least one data buffer storing operand data and intermediate results arising from execution of the at least one instruction for the programmable atomic operation.
7. The memory controller circuit of claim 3, wherein the programmable atomic operation circuitry further comprises:
a network command queue coupled to the processor core, the network command queue storing the resulting data; and
a network communication interface coupled to the network command queue and coupleable to a communication network, the network communication interface adapted to prepare and transmit a response data packet having the resulting data over the communication network.
8. The memory controller circuit of claim 3, wherein the processor core is coupled to a data buffer, and wherein the processor core is further adapted to execute a load non-buffered instruction to determine whether operand data is stored in the data buffer, to generate a read request to the second memory control circuit when the operand data is not stored in the data buffer and, when the operand data is obtained, to not store the operand data in the data buffer.
9. The memory controller circuit of claim 3, wherein the processor core is further adapted to execute a store and clear lock instruction to generate an atomic write request to the second memory control circuit, the atomic write request having the resulting data and a designation to reset or clear a memory hazard bit after writing the resulting data to the second memory circuit.
10. The memory controller circuit of claim 3, wherein the processor core is further adapted to execute an atomic return instruction to reset or clear a memory hazard bit after writing the resulting data to the second memory circuit.
11. The memory controller circuit of claim 3, wherein the processor core is further adapted to execute an atomic return instruction to generate a response data packet with the resulting data.
12. The memory controller circuit of claim 3, wherein the processor core is further adapted to execute an atomic return instruction to complete an atomic operation.
13. The memory controller circuit of claim 1, wherein the atomic operation request specifying the at least one programmable atomic operation includes a physical memory address, a programmable atomic operation identifier, and at least one thread status register value.
14. The memory controller circuit of claim 13, wherein the programmable atomic operation circuitry further comprises:
at least one register that stores thread status information.
15.
The memory controller circuit of claim 14, wherein the programmable atomic operation circuitry is further adapted, in response to receiving the atomic operation request specifying the at least one programmable atomic operation, to initialize the at least one register with the physical memory address, any data corresponding to the memory address, and the at least one thread status register value.
16. The memory controller circuit of claim 1, further comprising:
a network communication interface coupleable to a communication network and coupled to the first memory control circuit and the second memory control circuit, the network communication interface adapted to decode a plurality of request packets received from the communication network, and to prepare and transmit a plurality of response data packets over the communication network.
17. The memory controller circuit of claim 1, wherein the programmable atomic operation circuitry is adapted to perform user-defined atomic operations, multi-cycle operations, floating point operations, and multi-instruction operations.
18. The memory controller circuit of claim 1, further comprising:
a write merge circuit adapted to write or store data read from the first memory circuit to the second memory circuit.
19. The memory controller circuit of claim 1, wherein the second memory control circuit is further adapted to read data and to communicate the atomic operation request specifying the at least one programmable atomic operation to the programmable atomic operation circuitry.
20. The memory controller circuit of claim 1, wherein the second memory control circuit is further adapted to write or store data to the second memory circuit directly in response to a write request and directly in response to an atomic operation request specifying the at least one programmable atomic operation.
21. The memory controller circuit of claim 1, wherein the second memory control circuit is further adapted, in response to a write request specifying a memory address in the second memory circuit, to set a hazard bit stored in a memory hazard register corresponding to the memory address and, after writing or storing data at the memory address in the second memory circuit, to reset or clear the set hazard bit.
22. The memory controller circuit of claim 1, wherein the second memory control circuit is further adapted, in response to a write request having write data and specifying a memory address in the second memory circuit, to transfer current data stored at the memory address to the first memory control circuit for writing the current data to the first memory circuit, and to overwrite the current data in the second memory circuit with the write data.
23. The memory controller circuit of claim 1, wherein the second memory control circuit is further adapted, in response to a write request having write data and specifying a memory address in the second memory circuit, to set a hazard bit stored in a memory hazard register corresponding to the memory address, to transfer current data stored at the memory address to the first memory control circuit for writing the current data to the first memory circuit, to overwrite the current data in the second memory circuit with the write data and, after writing or storing the write data at the memory address in the second memory circuit, to reset or clear the set hazard bit.
24.
The memory controller circuit of claim 1, wherein the second memory control circuit is further adapted, in response to an atomic operation request specifying the at least one programmable atomic operation and a memory address, to transmit the atomic operation request to the programmable atomic operation circuitry and to set a hazard bit stored in a memory hazard register corresponding to the memory address.
25. The memory controller circuit of claim 1, wherein the first memory control circuit includes:
a plurality of memory bank request queues storing a plurality of read or write requests issued to the first memory circuit;
a scheduler circuit coupled to the plurality of memory bank request queues, the scheduler circuit adapted to select a read or write request from the plurality of read or write requests in the plurality of memory bank request queues and to schedule the read or write request for access to the first memory circuit; and
a first memory access control circuit coupled to the scheduler circuit, the first memory access control circuit adapted to read or load data from the first memory circuit and to write or store data to the first memory circuit.
26. The memory controller circuit of claim 25, wherein the first memory control circuit further comprises:
a plurality of memory request queues storing a plurality of memory requests;
a request selection multiplexer that selects a memory request from the plurality of memory request queues;
a plurality of memory data queues storing data corresponding to the plurality of memory requests; and
a data selection multiplexer that selects data from the plurality of memory data queues, the selected data corresponding to the selected memory request.
27. The memory controller circuit of claim 1, wherein the second memory control circuit includes:
a network request queue storing read requests or write requests;
an atomic operation request queue storing atomic operation requests;
an inflow request multiplexer coupled to the network request queue and the atomic operation request queue to select requests from the network request queue or the atomic operation request queue;
a memory hazard control circuit having one or more memory hazard registers; and
a second memory access control circuit coupled to the memory hazard control circuit and the inflow request multiplexer, the second memory access control circuit adapted, in response to the selected request, to read or load data from the second memory circuit and to write or store data to the second memory circuit, and to signal the memory hazard control circuit to set or clear a hazard bit stored in the one or more memory hazard registers.
28. The memory controller circuit of claim 27, wherein the second memory control circuit further comprises:
a delay circuit coupled to the second memory access control circuit; and
an inflow control multiplexer that selects an incoming network request requiring access to the first memory circuit, or that selects a cache eviction request from the second memory circuit when a cache line of the second memory circuit contains data which is to be written to the first memory circuit before being overwritten by data from a read request or a write request.
29.
The memory controller circuit of claim 1, wherein the memory controller circuit is coupleable to a communication network for routing a plurality of write data request packets, a plurality of read data request packets, a plurality of predetermined atomic operation request packets, and a plurality of programmable atomic operation request packets to the memory controller circuit, and for routing a plurality of response data packets from the memory controller circuit to a request source address.
30. The memory controller circuit of claim 1, wherein the programmable atomic operation circuitry includes:
a processor circuit coupled to the first memory control circuit via a switchless direct communication bus.
31. The memory controller circuit of claim 1, wherein the first memory control circuit, the second memory circuit, the second memory control circuit, the predetermined atomic operation circuitry, and the programmable atomic operation circuitry are implemented as a single integrated circuit or a single system-on-chip (SOC).
32. The memory controller circuit of claim 1, wherein the first memory control circuit, the second memory circuit, the second memory control circuit, and the predetermined atomic operation circuitry are implemented as a first integrated circuit, and the programmable atomic operation circuitry is implemented as a second integrated circuit coupled to the first integrated circuit via a switchless direct communication bus.
33. The memory controller circuit of claim 1, wherein the programmable atomic operation circuitry is adapted to generate read requests and to generate write requests to the second memory circuit.
34. The memory controller circuit of claim 1, wherein the programmable atomic operation circuitry is adapted to perform arithmetic operations, logical operations, and control flow decisions.
35. The memory controller circuit of claim 1, wherein the first memory circuit includes dynamic random access memory (DRAM) circuitry and the second memory circuit includes static random access memory (SRAM) circuitry.
36. A method of performing programmable atomic operations using a memory controller circuit coupleable to a first memory circuit, the method comprising:
using a first memory control circuit coupleable to the first memory circuit, reading or loading requested data from the first memory circuit directly in response to a read request and directly in response to an atomic operation request, and writing or storing data to the first memory circuit in response to a write request;
using a second memory control circuit coupled to a second memory circuit, when the requested data is stored in the second memory circuit, reading or loading the requested data from the
second memory circuit directly in response to a read request and directly in response to the atomic operation request and, when the requested data is not stored in the second memory circuit, transmitting the read request and the atomic operation request to the first memory control circuit;
using predetermined atomic operation circuitry, performing at least one predetermined atomic operation of a plurality of predetermined atomic operations in response to an atomic operation request specifying the at least one predetermined atomic operation;
using programmable atomic operation circuitry, performing at least one programmable atomic operation of a plurality of programmable atomic operations in response to an atomic operation request specifying the at least one programmable atomic operation; and
using the second memory control circuit, in response to an atomic operation request specifying the at least one predetermined atomic operation and a memory address, transmitting the read data and the atomic operation request to the predetermined atomic operation circuitry, setting a hazard bit stored in a memory hazard register corresponding to the memory address, writing resulting data from the predetermined atomic operation into the second memory circuit directly in response to the atomic operation request specifying the at least one predetermined atomic operation and, after writing the resulting data, resetting or clearing the set hazard bit.
37. The method of claim 36, wherein the programmable atomic operation circuitry includes a processor core coupled to a data buffer, and wherein the method additionally includes:
using the processor core, executing a load non-buffered instruction to determine whether operand data is stored in the data buffer and, when the operand data is not stored in the data buffer, generating a read request to the second memory control circuit.
38. The method of claim 36, wherein the programmable atomic operation circuitry includes a processor core, and wherein the method additionally includes:
using the processor core, executing a store and clear lock instruction to generate an atomic write request to the second memory control circuit, the atomic write request having the resulting data and a designation to reset or clear a memory hazard bit after writing the resulting data to the second memory circuit.
39. The method of claim 36, wherein the programmable atomic operation circuitry includes a processor core, and wherein the method additionally includes:
using the processor core, executing an atomic return instruction to reset or clear a memory hazard bit after writing the resulting data to the second memory circuit.
40. The method of claim 36, wherein the programmable atomic operation circuitry includes a processor core, and wherein the method additionally includes:
using the processor core, executing an atomic return instruction to generate a response data packet with the resulting data.
41. The method of claim 36, wherein the programmable atomic operation circuitry includes a processor core, and wherein the method additionally includes:
using the processor core, executing an atomic return instruction to complete an atomic operation.
42. The method of claim 36, wherein the atomic operation request specifying the at least one programmable atomic operation includes a physical memory address, a programmable atomic operation identifier, and at least one thread status register value.
43.
The method of claim 42, wherein the programmable atomic operation circuitry additionally includes at least one register storing thread status information, and wherein the method additionally includes:
using the programmable atomic operation circuitry, in response to receiving the atomic operation request specifying the at least one programmable atomic operation, initializing the at least one register with the physical memory address, any data corresponding to the memory address, and the at least one thread status register value.
44. The method of claim 36, further comprising:
using the second memory control circuit, reading data and communicating the atomic operation request specifying the at least one programmable atomic operation to the programmable atomic operation circuitry.
45. The method of claim 36, further comprising:
using the second memory control circuit, in response to a write request specifying a memory address in the second memory circuit, setting a hazard bit stored in a memory hazard register corresponding to the memory address and, after data is written or stored at the memory address in the second memory circuit, resetting or clearing the set hazard bit.
46. The method of claim 36, further comprising:
using the second memory control circuit, in response to a write request having write data and specifying a memory address in the second memory circuit, transferring current data stored at the memory address to the first memory control circuit for writing the current data to the first memory circuit, and overwriting the current data in the second memory circuit with the write data.
47. The method of claim 36, further comprising:
using the second memory control circuit, in response to a write request having write data and specifying a memory address in the second memory circuit, setting a hazard bit stored in a memory hazard register corresponding to the memory address, transferring current data stored at the memory address to the first memory control circuit for writing the current data to the first memory circuit, overwriting the current data in the second memory circuit with the write data and, after writing or storing the write data at the memory address in the second memory circuit, resetting or clearing the set hazard bit.
48. The method of claim 36, further comprising:
using the second memory control circuit, in response to an atomic operation request specifying the at least one programmable atomic operation and a memory address, transmitting the atomic operation request to the programmable atomic operation circuitry and setting a hazard bit stored in a memory hazard register corresponding to the memory address.
49.
A memory controller coupleable to a first memory circuit, the memory controller comprising:
a first memory control circuit coupleable to the first memory circuit, the first memory control circuit comprising:
a plurality of memory bank request queues storing a plurality of read or write requests issued to the first memory circuit;
a scheduler circuit coupled to the plurality of memory bank request queues, the scheduler circuit adapted to select a read or write request from the plurality of read or write requests in the plurality of memory bank request queues and to schedule the read or write request for access to the first memory circuit; and
a first memory access control circuit coupled to the scheduler circuit, the first memory access control circuit adapted to read or load data from the first memory circuit and to write or store data to the first memory circuit;
a second memory circuit;
predetermined atomic operation circuitry adapted to perform at least one predetermined atomic operation of a plurality of predetermined atomic operations;
programmable atomic operation circuitry adapted to perform at least one programmable atomic operation of a plurality of programmable atomic operations and, in response to receiving an atomic operation request specifying the at least one programmable atomic operation, to initialize at least one register with the physical memory address, any data corresponding to the memory address, and at least one thread status register value; and
a second memory control circuit coupled to the second memory circuit, the second memory control circuit comprising:
at least one input request queue storing read or write requests;
a memory hazard control circuit having a memory hazard register; and
a second memory access control circuit adapted to read or load data from the second memory circuit and to write or store data to the second memory circuit, the second memory access control circuit further adapted, in response to an atomic operation request specifying the at least one predetermined atomic operation and a memory address, to transmit the atomic operation request to the predetermined atomic operation circuitry and to set a hazard bit stored in the memory hazard register corresponding to the memory address.
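A minimal C sketch of the claimed predetermined-atomic sequence appears below: on a request such as fetch-and-add, the hazard bit for the addressed line is set, the operation reads the current value from the second memory circuit and writes the resulting data back, the prior value is returned for the response packet, and the hazard bit is then cleared. The names (set_hazard_bit, fetch_and_add, and so on) are illustrative assumptions, not an implementation recited in the claims.

```c
#include <stdint.h>

/* Hypothetical per-address hazard tracking and SRAM access hooks. */
extern void     set_hazard_bit(uint64_t addr);    /* block other requests  */
extern void     clear_hazard_bit(uint64_t addr);  /* re-enable requests    */
extern uint64_t sram_read(uint64_t addr);         /* second memory circuit */
extern void     sram_write(uint64_t addr, uint64_t value);

/* Predetermined atomic "fetch-and-add": returns the prior value, as a
 * sketch of the sequence recited in claims 1 and 36. */
uint64_t fetch_and_add(uint64_t addr, uint64_t operand)
{
    set_hazard_bit(addr);                 /* set hazard bit for the line   */
    uint64_t old = sram_read(addr);       /* read data for the operation   */
    sram_write(addr, old + operand);      /* write resulting data          */
    clear_hazard_bit(addr);               /* reset after writing result    */
    return old;                           /* carried in a response packet  */
}
```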
MEMORY CONTROLLER

Cross-Reference to Related Application

This application is a non-provisional of, and claims the benefit of and priority to, U.S. Provisional Patent Application No. 62/623,331, titled "Memory Controller with Integrated Custom Atomic Unit," filed January 29, 2018 by inventor Tony M. Brewer, which is commonly assigned herewith, the entire contents of which are incorporated herein by reference in their entirety with the same full force and effect as if set forth in their entirety herein.

Technical Field

The present invention relates generally to memory controllers and, more particularly, to a memory controller that provides predetermined and programmable atomic operations and reduces the latency of repeatedly accessed memory locations.

Statement Regarding Federally Sponsored Research or Development

This invention was made with government support under Contract No. HR0011-16-3-0002 awarded by the Department of Defense (DOD-DARPA). The government has certain rights in this invention.

Background

Memory controllers are commonly used in computing technology and the like to control access to read data from a memory circuit, to write data to the memory circuit, and to refresh data held in the memory circuit. A wide variety of memory controllers are commercially available and are generally designed for use in a wide range of applications, but are not optimized for specific applications, including machine learning and artificial intelligence ("AI") applications.

Therefore, there is a continuing need for memory controllers that are both high-performance and energy-efficient. Such memory controllers should provide support for computationally intensive kernels or operations that require large and highly frequent memory accesses, for which performance may otherwise be limited by the speed at which the application can access data stored in memory, such as for applications performing fast Fourier transform ("FFT") operations, finite impulse response ("FIR") filtering, and other computationally intensive operations typically used in larger applications such as synthetic aperture radar, and such as, but not limited to, 5G network connections and 5G base station operations, machine learning, AI, stencil code operations, and graph analysis operations such as graph clustering using spectral techniques. Such memory controllers should also be optimized for high throughput and low latency, including for atomic operations. Such memory controllers should also provide a wide range of atomic operations, including predetermined atomic operations as well as programmable or user-defined atomic operations.

Summary

As discussed in greater detail below, representative apparatuses, systems, and methods provide memory controllers that are high-performance and energy-efficient. Representative embodiments of the memory controller provide support for computationally intensive kernels or operations that require large and highly frequent memory accesses, such as kernels or operations for performing fast Fourier transform ("FFT") operations, kernels or operations for finite impulse response ("FIR") filtering, and other computationally intensive operations typically used in larger applications such as synthetic aperture radar, and such as, but not limited to, 5G network connectivity and 5G base station operations, machine learning, AI, stencil code operations, and graph analysis operations such as graph clustering using spectral techniques. Representative embodiments of the memory controller are optimized for high throughput and low latency, including for atomic operations. Representative embodiments of the memory controller also provide a wide range of atomic operations, including predetermined atomic operations as well as programmable or user-defined atomic operations.

Representative embodiments of the memory controller produced significant results when evaluated using an architectural simulator. For example, representative embodiments of the memory controller provide more than three times (3.48x) better atomic update performance using standard GDDR6 DRAM memory compared to state-of-the-art x86 server platforms. Also, for example, a representative embodiment of the memory controller provides seventeen times (17.6x) better atomic update performance using modified GDDR6 DRAM memory (with more memory banks) compared to state-of-the-art x86 server platforms.

Representative embodiments of the memory controller also provide extremely low latency and high throughput for memory read and write operations, typically limited only by memory bank availability, error correction overhead, the available bandwidth (GB/s) of the communication network, and the limitations of the memory and cache devices themselves, resulting in flat latency until maximum bandwidth is reached.

Representative embodiments of the memory controller also provide extremely high performance (high throughput and low latency) for programmable or user-defined atomic operations, comparable to the performance of predetermined atomic operations. Rather than performing multiple memory accesses, in response to an atomic operation request specifying a programmable atomic operation and a memory address, circuitry in the memory controller transmits the atomic operation request to the programmable atomic operation circuitry and sets a hazard bit stored in a memory hazard register corresponding to the memory address of the memory line used in the atomic operation, to ensure that no other operation (read, write, or atomic) is performed on that memory line, and then clears the hazard bit immediately upon completion of the atomic operation. Additional, direct data paths provided for the programmable atomic operation circuitry 135 executing the programmable or user-defined atomic operations allow for additional write operations without any limitations imposed by the bandwidth of the communication network and without increasing any congestion of the communication network.

In a representative embodiment, a memory controller circuit is coupleable to a first memory circuit, wherein the memory controller includes: a first memory control circuit coupleable to the first memory circuit, the first memory control circuit adapted to read or load requested data from the first memory circuit in response to a read request and to write or store requested data to the first memory circuit in response to a write request
; a second memory circuit; a second memory control circuit coupled to the second memory circuit, the second memory control circuit adapted, when the requested data is stored in the second memory circuit, to read or load the requested data from the second memory circuit in response to a read request and, when the requested data is not stored in the second memory circuit, to transmit the read request to the first memory control circuit; predetermined atomic operation circuitry adapted to perform at least one predetermined atomic operation of a plurality of predetermined atomic operations in response to an atomic operation request specifying the at least one predetermined atomic operation; and programmable atomic operation circuitry adapted to perform at least one programmable atomic operation of a plurality of programmable atomic operations in response to an atomic operation request specifying the at least one programmable atomic operation.

In another representative embodiment, a memory controller circuit is coupleable to a first memory circuit, wherein the memory controller includes: a first memory control circuit coupleable to the first memory circuit, the first memory control circuit adapted to read or load requested data from the first memory circuit in response to a read request and to write or store requested data to the first memory circuit in response to a write request; programmable atomic operation circuitry coupled to the first memory control circuit, the programmable atomic operation circuitry adapted to perform at least one programmable atomic operation of a plurality of programmable atomic operations in response to an atomic operation request specifying the at least one programmable atomic operation; a second memory circuit; and a second memory control circuit coupled to the second memory circuit and the first memory control circuit, the second memory control circuit adapted, in response to an atomic operation request specifying the at least one programmable atomic operation and a memory address, to transmit the atomic operation request to the programmable atomic operation circuitry and to set a hazard bit stored in a memory hazard register corresponding to the memory address.

In a representative embodiment, the plurality of predetermined atomic operations may include at least two predetermined atomic operations selected from the group consisting of: fetch-and-AND, fetch-and-OR, fetch-and-XOR, fetch-and-add, fetch-and-subtract, fetch-and-increment, fetch-and-decrement, fetch-and-minimum, fetch-and-maximum, fetch-and-swap, compare-and-swap, and combinations thereof.

In a representative embodiment, the programmable atomic operation circuitry may include: an instruction cache that stores a plurality of processor instructions corresponding to the at least one programmable atomic operation; an execution queue that stores
a thread identifier corresponding to the programmable atomic operation; a core control circuit coupled to the instruction cache and to the execution queue, the core control circuit adapted, in response to the thread identifier corresponding to the programmable atomic operation, to select a starting or next instruction or instruction address in the instruction cache for performing the programmable atomic operation; and a processor core adapted to execute at least one instruction for the programmable atomic operation and produce resulting data.

Also in representative embodiments, the programmable atomic operation circuitry may additionally include a memory controller interface circuit coupled to the processor core, the memory controller interface circuit receiving the resulting data and communicating the resulting data to the second memory control circuit to write the resulting data to the second memory circuit. In a representative embodiment, the memory controller circuit may additionally include a network communication interface coupleable to a communication network and coupled to the memory controller interface circuit, the network communication interface adapted to prepare and transmit a response data packet having the resulting data over the communication network.

In representative embodiments, the programmable atomic operation circuitry may additionally include at least one data buffer that stores operand data and intermediate results arising from execution of the at least one instruction for the programmable atomic operation. Also in representative embodiments, the programmable atomic operation circuitry may additionally include: a network command queue coupled to the processor core, the network command queue storing the resulting data; and a network communication interface coupled to the network command queue and coupleable to a communication network, the network communication interface adapted to prepare and transmit a response data packet having the resulting data over the communication network.

In a representative embodiment, the processor core may be coupled to a data buffer, and the processor core may be further adapted to execute a load non-buffered instruction to determine whether operand data is stored in the data buffer and, when the operand data is not stored in the data buffer, to generate a read request to the second memory control circuit. In a representative embodiment, the processor core may be further adapted to execute a store and clear lock instruction to generate an atomic write request to the second memory control circuit, the atomic write request having the resulting data and a designation to reset or clear a memory hazard bit after writing the resulting data to the second memory circuit. In representative embodiments, the processor core may be further adapted to execute an atomic return instruction to reset or clear a memory hazard bit after writing the resulting data to the second memory circuit. In representative embodiments, the processor core may be further adapted to execute an atomic return instruction to generate a response data packet with the resulting data. In representative embodiments, the processor core may be further adapted to execute an atomic return instruction to complete an atomic operation.

In a representative embodiment, the atomic operation request specifying the at least one programmable atomic operation includes a physical memory address, a programmable atomic operation identifier, and at least one thread status register value.
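As a minimal illustration of the request just described, the following C fragment models an atomic operation request carrying a physical memory address, a programmable atomic operation identifier, and thread status register values, and shows the initialization of the unit's registers upon receipt. The structure layout, field names, and register count are hypothetical, not a wire format defined by this disclosure.

```c
#include <stdint.h>

#define NUM_THREAD_STATUS_REGS 4  /* illustrative count */

/* Hypothetical programmable atomic operation request packet. */
typedef struct {
    uint64_t physical_addr;                          /* target memory address   */
    uint32_t atomic_op_id;                           /* selects the instruction */
                                                     /* sequence to execute     */
    uint64_t thread_status[NUM_THREAD_STATUS_REGS];  /* caller-supplied state   */
} atomic_op_request_t;

/* Hypothetical thread context inside the programmable atomic unit. */
typedef struct {
    uint64_t regs[NUM_THREAD_STATUS_REGS + 2];  /* +2: address and memory data */
} thread_context_t;

extern uint64_t sram_read(uint64_t addr);  /* second memory circuit read hook */

/* On receipt of the request, initialize the unit's registers with the
 * physical address, any data at that address, and the thread status values. */
void init_thread_context(thread_context_t *ctx, const atomic_op_request_t *req)
{
    ctx->regs[0] = req->physical_addr;
    ctx->regs[1] = sram_read(req->physical_addr);
    for (int i = 0; i < NUM_THREAD_STATUS_REGS; ++i)
        ctx->regs[2 + i] = req->thread_status[i];
}
```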
In representative embodiments, the programmable atomic operation circuitry may additionally include at least one register that stores thread status information. In such representative embodiments, the programmable atomic operation circuitry may be further adapted, in response to receiving the atomic operation request specifying the at least one programmable atomic operation, to initialize the at least one register with the physical memory address, any data corresponding to the memory address, and the at least one thread status register value.

In representative embodiments, the memory controller circuit may additionally include a network communication interface coupleable to a communication network and coupled to the first memory control circuit and the second memory control circuit, the network communication interface adapted to decode a plurality of request packets received from the communication network and to prepare and transmit a plurality of response data packets over the communication network.

In representative embodiments, the programmable atomic operation circuitry is adapted to perform user-defined atomic operations, multi-cycle operations, floating point operations, and multi-instruction operations.

In a representative embodiment, the memory controller circuit may additionally include a write merge circuit adapted to write or store data read from the first memory circuit to the second memory circuit.

In a representative embodiment, the second memory control circuit is further adapted, when the requested data is stored in the second memory circuit, to read or load the requested data from the second memory circuit in response to an atomic operation request and, when the requested data is not stored in the second memory circuit, to transmit the atomic operation request to the first memory control circuit. In a representative embodiment, the second memory control circuit is further adapted to write or store data to the second memory circuit in response to a write request or in response to an atomic operation request. In a representative embodiment, the second memory control circuit is further adapted, in response to a write request specifying a memory address in the second memory circuit, to set a hazard bit stored in a memory hazard register corresponding to the memory address and, after writing or storing data at the memory address in the second memory circuit, to reset or clear the set hazard bit. In a representative embodiment, the second memory control circuit is further adapted, in response to a write request having write data and specifying a memory address in the second memory circuit, to transfer current data stored at the memory address to the first memory control circuit for writing the current data to the first memory circuit, and to overwrite the current data in the second memory circuit with the write data.

In a representative embodiment, the second memory control circuit is further adapted, in response to a write request having write data and specifying a memory address in the second memory circuit, to set a hazard bit stored in a memory hazard register corresponding
to the memory address, to transfer current data stored at the memory address to the first memory control circuit for writing the current data to the first memory circuit, to overwrite the current data in the second memory circuit with the write data and, after writing or storing the write data at the memory address in the second memory circuit, to reset or clear the set hazard bit.

In a representative embodiment, the second memory control circuit is further adapted, in response to an atomic operation request specifying the at least one programmable atomic operation and a memory address, to transmit the atomic operation request to the programmable atomic operation circuitry and to set a hazard bit stored in a memory hazard register corresponding to the memory address. In a representative embodiment, the second memory control circuit is further adapted, in response to an atomic operation request specifying the at least one predetermined atomic operation and a memory address, to transmit the atomic operation request to the predetermined atomic operation circuitry, to set a hazard bit stored in a memory hazard register corresponding to the memory address, to write resulting data from the predetermined atomic operation into the second memory circuit and, after writing the resulting data, to reset or clear the set hazard bit.

In a representative embodiment, the first memory control circuit may include: a plurality of memory bank request queues storing a plurality of read or write requests to the first memory circuit; a scheduler circuit coupled to the plurality of memory bank request queues, the scheduler circuit adapted to select one of the plurality of read or write requests from the plurality of memory bank request queues and to schedule the read or write request for accessing the first memory circuit; and a first memory access control circuit coupled to the scheduler circuit, the first memory access control circuit adapted to read or load data from the first memory circuit and to write or store data to the first memory circuit.

In a representative embodiment, the first memory control circuit may additionally include: a plurality of memory request queues that store a plurality of memory requests; a request selection multiplexer that selects a memory request from the plurality of memory request queues; a plurality of memory data queues that store data corresponding to the plurality of memory requests; and a data selection multiplexer that selects data from the plurality of memory data queues, the selected data corresponding to the selected memory request.

In a representative embodiment, the second memory control circuit may include: a network request queue that stores read requests or write requests; an atomic operation request queue that stores atomic operation requests; an inflow request multiplexer coupled to the network request queue and the atomic operation request queue to select requests from the network request queue or the atomic operation request queue; a memory hazard control circuit having one or more memory hazard registers; and a second memory access control circuit coupled to the memory hazard control circuit and the inflow request multiplexer, the second memory access control circuit adapted, in response to the selected request, to read or load data from
the second memory circuit and to write or store data to the second memory circuit, and to signal the memory hazard control circuit to set or clear a hazard bit stored in the one or more memory hazard registers. In a representative embodiment, the second memory control circuit may additionally include: a delay circuit coupled to the second memory access control circuit; and an inflow control multiplexer that selects an incoming network request requiring access to the first memory circuit, or that selects a cache eviction request from the second memory circuit when a cache line of the second memory circuit contains data which is to be written to the first memory circuit before being overwritten by data from a read request or a write request.

In representative embodiments, the memory controller circuit may be coupled to a communication network for routing a plurality of write data request packets, a plurality of read data request packets, a plurality of predetermined atomic operation request packets, and a plurality of programmable atomic operation request packets to the memory controller circuit, and for routing a plurality of response data packets from the memory controller circuit to a request source address.

In a representative embodiment, the programmable atomic operation circuitry may include processor circuitry coupled to the first memory control circuit via a switchless direct communication bus.

In representative embodiments, the first memory control circuit, the second memory circuit, the second memory control circuit, the predetermined atomic operation circuitry, and the programmable atomic operation circuitry may be implemented as a single integrated circuit or a single system-on-chip (SOC).

In a representative embodiment, the first memory control circuit, the second memory circuit, the second memory control circuit, and the predetermined atomic operation circuitry may be implemented as a first integrated circuit, and the programmable atomic operation circuitry may be implemented as a second integrated circuit coupled to the first integrated circuit via a switchless direct communication bus.

In a representative embodiment, the programmable atomic operation circuitry is adapted to generate read requests and to generate write requests to the second memory circuit. In representative embodiments, the programmable atomic operation circuitry is adapted to perform arithmetic operations, logic operations, and control flow decisions.

In a representative embodiment, the first memory circuit includes dynamic random access memory (DRAM) circuitry and the second memory circuit includes static random access memory (SRAM) circuitry.

Representative methods of performing programmable atomic operations using a memory controller circuit, wherein the memory controller circuit is coupleable to a first memory circuit, are also disclosed, wherein the method includes: using a first memory control circuit coupleable to the first memory circuit, reading or loading requested data from the first memory circuit in response to a read request, and writing or storing requested data to the first memory circuit in response to a write request; using a second memory control circuit coupled to a second memory circuit, when the requested data is stored in the second memory circuit, reading or loading
the read request to the first memory control circuit; using predetermined atomic operation circuitry, performing at least one predetermined atomic operation in response to an atomic operation request specifying at least one of a plurality of predetermined atomic operations; and using programmable atomic operation circuitry, performing the at least one programmable atomic operation in response to an atomic operation request specifying at least one of a plurality of programmable atomic operations.

Another representative method of performing programmable atomic operations using a memory controller circuit is also disclosed, the memory controller circuit being couplable to a first memory circuit. The method includes: using a first memory control circuit coupled to the first memory circuit, reading or loading requested data from the first memory circuit in response to a read request, and writing or storing the requested data to the first memory circuit in response to a write request; using a second memory control circuit coupled to a second memory circuit, reading or loading the requested data from the second memory circuit in response to a read request when the requested data is stored in the second memory circuit, transmitting the read request to the first memory control circuit when the requested data is not stored in the second memory circuit, and, in response to an atomic operation request specifying the at least one programmable atomic operation and a memory address, transmitting the atomic operation request to the programmable atomic operation circuitry and setting a hazard bit stored in a memory hazard register corresponding to the memory address; using predetermined atomic operation circuitry, performing at least one predetermined atomic operation in response to an atomic operation request specifying the at least one predetermined atomic operation; and using the programmable atomic operation circuitry, performing the at least one programmable atomic operation in response to an atomic operation request specifying at least one of a plurality of programmable atomic operations.

In a representative embodiment, the programmable atomic operation circuitry includes a processor core coupled to a data buffer, and the method further includes, using the processor core, executing a load unbuffered instruction to determine whether an operand is stored in the data buffer and, when the operand is not stored in the data buffer, generating a read request to the second memory control circuit.

In a representative embodiment, the programmable atomic operation circuitry includes a processor core, and the method may further include, using the processor core, executing a store and clear lock instruction to generate an atomic write request issued to the second memory control circuit, the request having the resulting data and a specification to reset or clear a memory hazard bit after the resulting data is written to the second memory circuit. In a representative embodiment, the programmable atomic operation circuitry includes a processor core, and the method may further include, using the processor core, executing an atomic return instruction to reset or clear the memory hazard bit after the resulting data is written to the second memory circuit.
In a representative embodiment, the programmable atomic operation circuitry includes a processor core, and the method may further include, using the processor core, executing an atomic return instruction to generate a response data packet having the resulting data. Also in representative embodiments, the programmable atomic operation circuitry includes a processor core, and the method may further include, using the processor core, executing an atomic return instruction to complete an atomic operation.

In a representative embodiment, the atomic operation request specifying the at least one programmable atomic operation includes a physical memory address, a programmable atomic operation identifier, and at least one thread status register value. In such representative embodiments, the programmable atomic operation circuitry further includes at least one register storing thread state information, and the method further includes, using the programmable atomic operation circuitry, in response to receiving the atomic operation request specifying the at least one programmable atomic operation, initializing the at least one register with the physical memory address, any data corresponding to the memory address, and the at least one thread status register value.

In a representative embodiment, the method may additionally include, using the second memory control circuit, reading or loading the requested data from the second memory circuit in response to an atomic operation request when the requested data is stored in the second memory circuit, and communicating the atomic operation request to the first memory control circuit when the requested data is not stored in the second memory circuit. In a representative embodiment, the method may additionally include, using the second memory control circuit, in response to a write request specifying a memory address in the second memory circuit, setting a hazard bit stored in a memory hazard register corresponding to the memory address and, after writing or storing the data to the second memory circuit at the memory address, resetting or clearing the set hazard bit.

In a representative embodiment, the method may additionally include, using the second memory control circuit, in response to a write request having write data and specifying a memory address in the second memory circuit, transferring the current data stored at the memory address to the first memory control circuit to write the current data to the first memory circuit, and overwriting the current data in the second memory circuit with the write data. In a representative embodiment, the method may additionally include, using the second memory control circuit, in response to a write request having write data and specifying a memory address in the second memory circuit, setting
a hazard bit stored in a memory hazard register corresponding to the memory address, transferring the current data stored at the memory address to the first memory control circuit to write the current data to the first memory circuit, overwriting the current data in the second memory circuit with the write data and, after writing or storing the write data to the second memory circuit at the memory address, resetting or clearing the set hazard bit.

In a representative embodiment, the method may additionally include, using the second memory control circuit, in response to an atomic operation request specifying the at least one programmable atomic operation and a memory address, transmitting the atomic operation request to the programmable atomic operation circuitry and setting a hazard bit stored in a memory hazard register corresponding to the memory address.

Another memory controller is disclosed, the memory controller being couplable to a first memory circuit. The memory controller includes a first memory control circuit couplable to the first memory circuit, the first memory control circuit including: a plurality of memory bank request queues storing a plurality of read or write requests to the first memory circuit; a scheduler circuit coupled to the plurality of memory bank request queues, the scheduler circuit adapted to select one of the plurality of read or write requests from the plurality of memory bank request queues and schedule the selected read or write request for accessing the first memory circuit; and a first memory access control circuit coupled to the scheduler circuit, the first memory access control circuit adapted to read or load data from the first memory circuit and write or store data to the first memory circuit. The memory controller also includes: a second memory circuit; predetermined atomic operation circuitry adapted to perform at least one predetermined atomic operation of a plurality of predetermined atomic operations; programmable atomic operation circuitry adapted to perform at least one programmable atomic operation of a plurality of programmable atomic operations; and a second memory control circuit coupled to the second memory circuit, the second memory control circuit including: at least one input request queue that stores read or write requests; a memory hazard control circuit having a memory hazard register; and a second memory access control circuit adapted to read or load data from the second memory circuit and write or store data to the second memory circuit, the second memory access control circuit further adapted to, in response to an atomic operation request specifying the at least one predetermined atomic operation and a memory address, communicate the atomic operation request to the predetermined atomic operation circuitry and set a hazard bit stored in a memory hazard register corresponding to the memory address.

Another memory controller is disclosed, the memory controller being couplable to a first memory circuit. The memory controller includes a first memory control circuit couplable to the first memory circuit, the first memory control circuit including: a plurality of memory bank request queues storing a plurality of read or write requests to the first memory circuit; a scheduler circuit coupled to the plurality of memory bank request queues, the scheduler circuit adapted to select one of the plurality of read or write requests from the plurality of memory bank request queues and schedule the selected read or write request for
accessing the first memory circuit; and a first memory access control circuit coupled to the scheduler circuit, the first memory access control circuit adapted to read or load data from the first memory circuit and write or store data to the first memory circuit. The memory controller also includes: a second memory circuit; predetermined atomic operation circuitry adapted to perform at least one predetermined atomic operation of a plurality of predetermined atomic operations; programmable atomic operation circuitry adapted to perform at least one programmable atomic operation of a plurality of programmable atomic operations; and a second memory control circuit coupled to the second memory circuit, the second memory control circuit including: at least one input request queue that stores read or write requests; a memory hazard control circuit having a memory hazard register; and a second memory access control circuit adapted to read or load data from the second memory circuit and write or store data to the second memory circuit, the second memory access control circuit further adapted to, in response to an atomic operation request specifying the at least one predetermined atomic operation and a memory address, communicate the atomic operation request to the predetermined atomic operation circuitry, set a hazard bit stored in a memory hazard register corresponding to the memory address, write resulting data from the predetermined atomic operation into the second memory circuit and, after writing the resulting data, reset or clear the set hazard bit.

Numerous other advantages and features of the invention will become readily apparent from the following detailed description of the invention and its embodiments, from the claims, and from the accompanying drawings.

Description of the Drawings

The objects, features and advantages of the present invention will be more readily appreciated upon reference to the following disclosure when considered in conjunction with the accompanying drawings, in which like reference numerals are used to identify identical components in the various figures, and in which reference numerals with alphabetic characters are used to identify additional types, instantiations, or variations of a selected component embodiment in the various figures. In the accompanying drawings:

Figure 1 is a block diagram of a representative first computing system embodiment.

Figure 2 is a block diagram of a representative second computing system embodiment.

Figure 3 is a high-level block diagram of representative first and second memory controller circuits.

Figure 4 is a block diagram of a representative first memory controller circuit embodiment.

Figure 5 is a block diagram of a representative second memory controller circuit embodiment.

Figures 6A, 6B, and 6C (collectively, Figure 6) are block diagrams of a representative second memory control circuit embodiment, a representative first memory control circuit embodiment, and a representative atomic and merge operation circuit, respectively.

Figures 7A, 7B, and 7C (collectively, Figure 7) are flowcharts of representative methods of receiving and decoding a request and of executing a read or load request, in which Figures 7A and 7B illustrate a representative method of receiving and decoding a request and executing a read or load request from a first memory circuit, and Figure 7C illustrates a representative method of executing a read or load request from the second memory circuit.

Figures 8A, 8B, 8C, and 8D (collectively, Figure
8) are flowcharts illustrating representative methods of performing an atomic operation as part of an atomic operation request.

Figure 9 is a flowchart illustrating a representative method of performing a data eviction from a second memory circuit as part of a read (or load) request or as part of a write (or store) request.

Figure 10 is a flowchart of a representative method of performing a write or store request.

Figure 11 is a block diagram of a representative programmable atomic operation circuitry embodiment.

Detailed Description

While the invention is susceptible of embodiments in many different forms, specific exemplary embodiments of the invention are shown in the drawings and will be described in detail herein, with the understanding that the present disclosure is to be considered an exemplification of the principles of the invention and is not intended to limit the invention to the specific embodiments illustrated. In this regard, before at least one embodiment consistent with the invention is explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangements of components set forth above and below, illustrated in the drawings, or described in the examples. Methods and apparatuses consistent with the invention are capable of other embodiments and of being practiced and carried out in various ways. Furthermore, it is to be understood that the phraseology and terminology employed herein, as well as the abstract included herein, are for the purposes of description and should not be regarded as limiting.

Figure 1 is a block diagram of a representative first computing system 50 embodiment. Figure 2 is a block diagram of a representative second computing system 50A embodiment. Figure 3 is a high-level block diagram of representative first and second memory controller circuits. Figure 4 is a block diagram of a representative first memory controller circuit 100 embodiment. Figure 5 is a block diagram of a representative second memory controller circuit 100A embodiment. Figure 6 comprises Figures 6A, 6B, and 6C, which are block diagrams of a representative second memory control circuit embodiment, a representative first memory control circuit embodiment, and a representative atomic and merge operation circuit, respectively.

Figures 1 and 2 illustrate different first computing system 50 and second computing system 50A embodiments that include additional components forming comparatively larger and smaller systems 50, 50A, any and all of which are within the scope of the present disclosure. As shown in Figures 1 and 2, each computing system 50, 50A may be an arrangement suitable for implementation as a system-on-a-chip ("SOC"), in various combinations as illustrated, and may include, without limitation: one or more processors 110; a communication network 150; optionally one or more hybrid threading processors ("HTP") 115; optionally one or more configurable processing circuits 105; various one or more optional communication interfaces 130; the first memory controller circuit 100 (in the first computing system 50) or the second memory controller circuit 100A (in the second computing system 50A); and, in both the first computing system 50 and the second computing system 50A, a first memory circuit 125 coupled to the first memory controller circuit 100 or the second memory controller circuit 100A, respectively. Referring to Figure
3, the first memory controller circuit 100 differs from the second memory controller circuit 100A in that the first memory controller circuit 100 additionally includes programmable atomic operation circuitry 135 as an integrated device; that is, the first memory controller circuit 100 includes all of the functionality and circuitry of the second memory controller circuit 100A and additionally includes the programmable atomic operation circuitry 135. The processor 110A includes programmable atomic operation circuitry 135 and other additional circuitry, such as, but not limited to, network communication interface circuitry 170 or other or additional communication and processing circuitry. The programmable atomic operation circuitry 135 is used for the execution of programmable atomic operations. For the first memory controller circuit 100, those programmable atomic operations are performed within the programmable atomic operation circuitry 135 of the first memory controller circuit 100; for the second memory controller circuit 100A, those programmable atomic operations are performed using the programmable atomic operation circuitry 135 of the separate processor 110A.

In the second computing system 50A, the second memory controller circuit 100A is directly coupled to the processor 110A, such as, but not limited to, as a separate integrated circuit or as a separate chiplet, e.g., through a separate bus structure 60. As discussed in greater detail below, such a processor 110A may be implemented the same as processor 110, or may be implemented as a different or simpler processor designed to perform primarily or only programmable atomic operations. The processor 110A is illustrated separately only to show that the second memory controller circuit 100A has a direct, rather than switched or routed, communication path to and from the processor 110A. For example, the processor 110 may be configured to implement the processor 110A, with the processor 110A additionally provided with a direct communication path (e.g., bus 60) to the second memory controller circuit 100A. As previously indicated, the first memory controller circuit 100 differs from the second memory controller circuit 100A only in that the first memory controller circuit 100 may include, as an integrated device within a single integrated circuit or as part of an SOC, for example, the additional circuitry and functionality of the programmable atomic operation circuitry 135, whereas the second memory controller circuit 100A communicates directly with programmable atomic operation circuitry 135 that is part of a separate processor 110A, as illustrated in Figure 3. In other words, in such an integrated device, the first memory controller circuit 100 includes all of the same circuitry and functionality of the second memory controller circuit 100A and additionally includes the programmable atomic operation circuitry 135. Accordingly, unless the description or context dictates otherwise, the first memory controller circuit 100 and the second memory controller circuit 100A are described collectively herein, with any and all descriptions and particulars equally applicable to each.

The processors 110, 110A are typically multi-core processors, which may be embedded within the first computing system 50 or the second computing system 50A, or may be external processors coupled to the first computing system 50 or the second computing system 50A via a communication interface 130, such as a PCIe-based interface.
As described in more detail below, such processors may be implemented as is known or becomes known in the electronic arts. The communication interface 130, such as a PCIe-based interface, likewise may be implemented as is known or becomes known in the electronic arts, and provides communication between the systems 50, 50A and other, external devices.

The programmable atomic operation circuitry 135 of the first memory controller circuit 100 or of the processors 110, 110A may be a multi-threaded processor having, for example, one or more processor cores 605 and additionally having an extended instruction set for executing programmable atomic operations, based on the RISC-V ISA, as discussed in more detail below with reference to Figure 11. When provided with the extended instruction set for executing programmable atomic operations, the representative programmable atomic operation circuitry 135 and/or processors 110, 110A may be embodied, for example and without limitation, as one or more of the hybrid threading processors 115 described in U.S. Patent Application No. 16/176,434 (the entire contents of which are incorporated by reference herein in their entirety, with the same full force and effect as if set forth in their entirety herein). In general, the programmable atomic operation circuitry 135 of the first memory controller circuit 100 or of the processors 110, 110A provides barrel-style, round-robin instantaneous thread switching to maintain a high instruction-per-clock rate.

The communication network 150 may also be implemented as is known or becomes known in the electronic arts. For example, in a representative embodiment, the communication network 150 is a packet-based communication network providing data packet routing between and among the processors 110, 110A, the first memory controller circuit 100 or second memory controller circuit 100A, the optional one or more hybrid threading processors 115, the optional one or more configurable processing circuits 105, and the various one or more optional communication interfaces 130. In such a packet-based communication system, each packet typically includes destination and source addressing, along with any data payload and/or instructions. For example, for purposes of this disclosure, the first memory controller circuit 100 or the second memory controller circuit 100A may receive a request packet having a source address, a read (or load) request, and a physical address in the first memory circuit 125. In response, and as described in greater detail below, the first memory controller circuit 100 or the second memory controller circuit 100A will read the data from the specified address (which may be in the first memory circuit 125 or the second memory circuit 175, as discussed below) and assemble a response packet to the source address containing the requested data. Similarly, the first memory controller circuit 100 or the second memory controller circuit 100A may receive a request packet having a source address, a write (or store) request, and a physical address in the first memory circuit 125.
In response, and as described in greater detail below, the first memory controller circuit 100 or the second memory controller circuit 100A writes the data to the specified address (which may be in the first memory circuit 125 or the second memory circuit 175, as discussed below) and assembles a response packet to the source address containing an acknowledgment that the data was stored in memory (whether in the first memory circuit 125 or the second memory circuit 175, as discussed below).

By way of example and without limitation, the communication network 150 may be implemented as a plurality of crossbar switches having a folded Clos configuration, and/or as a mesh network providing additional connections, depending upon the system 50, 50A embodiment. Also by way of example and without limitation, the communication network 150 may be part of an asynchronous switching fabric, meaning that a data packet may be routed along any of various paths, such that the arrival of any selected data packet at an addressed destination may occur at any of a plurality of different times, depending upon the routing. Further by way of example and without limitation, the communication network 150 may be implemented as a synchronous communication network, such as a synchronous mesh communication network. Any and all such communication networks 150 are considered equivalent and within the scope of the present disclosure. Representative embodiments of the communication network 150 are also described in U.S. Patent Application No. 16/176,434.

The optional one or more hybrid threading processors 115 and the one or more configurable processing circuits 105 are discussed in greater detail in various related applications, such as U.S. Patent Application No. 16/176,434, and are illustrated as examples of the various components that may be included within a computing system 50, 50A.

Referring to Figure 4, the first memory controller circuit 100 is coupled to the first memory circuit 125, for example, for write (store) operations and read (load) operations to and from the first memory circuit 125. The first memory controller circuit 100 includes a first memory control circuit 155, a second memory control circuit 160, an atomic and merge operation circuit 165, a second memory circuit 175, and a network communication interface 170. The network communication interface 170 is coupled to the communication network 150, for example, via a bus or other communication structure 163, which typically includes address (routing) lines and data payload lines (not separately illustrated). The first memory control circuit 155 is coupled directly to the first memory circuit 125, such as via a bus or other communication structure 157, to provide write (store) operations and read (load) operations to and from the first memory circuit 125. The first memory control circuit 155 is also coupled, for output, to the atomic and merge operation circuit 165 and, for input, to the second memory control circuit 160. The second memory control circuit 160 is coupled directly to the second memory circuit 175, such as via a bus or other communication structure 159, is coupled to the network communication interface 170 via a bus or other communication structure 161 for input (e.g., incoming read or write requests), and is coupled, for output, to the first memory control circuit 155. It should be noted that the second memory circuit 175 is typically part of the same integrated circuit as the first memory controller circuit 100 or the second memory controller circuit 100A.
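To make the request and response packet flow described above concrete, the following is a minimal sketch of one possible packet layout. It is illustrative only: the actual field names, widths, and encodings of the packets carried by the communication network 150 are not specified herein, and every identifier below is an assumption introduced solely for illustration.

```c
#include <stdint.h>

/* Illustrative request types decoded by the data packet decoder
 * circuit 215; the real encodings are not specified here. */
typedef enum {
    REQ_READ,   /* read (load) request                          */
    REQ_WRITE,  /* write (store) request                        */
    REQ_ATOMIC  /* predetermined or programmable atomic request */
} request_type_t;

/* Hypothetical request packet: a source address for routing the
 * response, a request type, a physical address in the first memory
 * circuit 125, and an optional write payload of up to 64 bytes. */
typedef struct {
    uint32_t       source_address;   /* requester (source) address       */
    request_type_t type;             /* read, write, or atomic           */
    uint64_t       physical_address; /* address in first memory circuit  */
    uint8_t        size;             /* access size, 1 to 64 bytes       */
    uint8_t        payload[64];      /* write data, when present         */
} request_packet_t;

/* Hypothetical response packet: routed back to the source address,
 * carrying either the requested read data or an acknowledgment that
 * the write data was stored. */
typedef struct {
    uint32_t destination_address;    /* copied from source_address       */
    uint8_t  acknowledged;           /* nonzero for a write completion   */
    uint8_t  data[64];               /* read data, when present          */
} response_packet_t;
```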
The atomic and merge operation circuit 165 is coupled to receive (as input) the output of the first memory control circuit 155 and to provide output to the second memory circuit 175, to the network communication interface 170, and/or directly to the communication network 150.

Referring to Figure 5, the second memory controller circuit 100A is coupled to the first memory circuit 125, such as for write (store) operations and read (load) operations to and from the first memory circuit 125, and is coupled to the processor 110A. The second memory controller circuit 100A includes a first memory control circuit 155, a second memory control circuit 160, an atomic and merge operation circuit 165A, a second memory circuit 175, and a network communication interface 170. The network communication interface 170 is coupled to the communication network 150, for example, via a bus or other communication structure 163, which typically includes address (routing) lines and data payload lines (not separately illustrated). The first memory control circuit 155 is coupled directly to the first memory circuit 125, such as via a bus or other communication structure 157, to provide write (store) operations and read (load) operations to and from the first memory circuit 125. The first memory control circuit 155 is also coupled, for output, to the atomic and merge operation circuit 165A and, for input, to the second memory control circuit 160. The second memory control circuit 160 is coupled directly to the second memory circuit 175, such as via a bus or other communication structure 159, is coupled to the network communication interface 170 via a bus or other communication structure 161 for input (e.g., incoming read or write requests), and is coupled, for output, to the first memory control circuit 155. The atomic and merge operation circuit 165A is coupled to receive (as input) the output of the first memory control circuit 155 and to provide output to the second memory circuit 175, to the network communication interface 170, and/or directly to the communication network 150.

As mentioned above, and referring to Figures 4 and 5, the first memory controller circuit 100 differs from the second memory controller circuit 100A in that the first memory controller circuit 100 includes programmable atomic operation circuitry 135 (within the atomic and merge operation circuit 165) coupled to the first memory control circuit 155 via a bus or communication line 60A, whereas in the second memory controller circuit 100A, the programmable atomic operation circuitry 135 in the separate processor 110A is coupled to the first memory control circuit 155 via the bus or communication line 60. Accordingly, in the first memory controller circuit 100, the atomic and merge operation circuit 165 includes the memory hazard clear (reset) circuit 190, the write merge circuit 180, the predetermined atomic operation circuitry 185, and the programmable atomic operation circuitry 135, while in the second memory controller circuit 100A, the atomic and merge operation circuit 165A includes the memory hazard clear (reset) circuit 190, the write merge circuit 180, and the predetermined atomic operation circuitry 185. The memory hazard clear (reset) circuit 190, the write merge circuit 180, and the predetermined atomic operation circuitry 185 may each be implemented with combinational logic circuitry (e.g., adders and subtractors, shifters, comparators, AND gates, OR gates, XOR gates, etc.) or other logic circuitry, and may also include one or more registers or buffers for storing, for example, operands or other data.
As mentioned above and as discussed in greater detail below, the programmable atomic operation circuitry 135 may be implemented as one or more processor cores and control circuitry, together with other combinational logic circuitry (e.g., adders, shifters, etc.) or other logic circuitry, and may also include one or more registers, buffers, and/or memories for storing, for example, addresses, executable instructions, operands, and other data, or may be implemented as a processor 110 or, more generally, as a processor (as described below). It should be noted that the memory hazard clear (reset) circuit 190 need not be a separate circuit within the atomic and merge operation circuits 165, 165A, but instead may be part of the memory hazard control circuit 230.

The network communication interface 170 includes: a network input queue 205 to receive data packets (including read and write request packets) from the communication network 150; a network output queue 210 to transmit data packets (including read and write response packets) to the communication network 150; a data packet decoder circuit 215 to decode incoming data packets from the communication network 150, obtain the data provided in specified fields (such as the request type, source address, and payload data), and transmit the data provided in the packet to the second memory control circuit 160; and a data packet encoder circuit 220 to encode outgoing data packets (e.g., responses to requests concerning the first memory circuit 125) for transmission over the communication network 150. The data packet decoder circuit 215 and the data packet encoder circuit 220 may each be implemented as a state machine or other logic circuitry.

The first memory circuit 125 and the second memory circuit 175 may be any type or kind of memory circuit, as discussed in greater detail below, such as, but not limited to, RAM, FLASH, DRAM, SDRAM, SRAM, MRAM, FeRAM, ROM, EPROM, or E2PROM, or any other form of memory device. In a representative embodiment, the first memory circuit 125 is a DRAM, typically an external DRAM memory device, and the second memory circuit 175 is an SRAM data cache. For example, the first memory circuit 125 may be a separate integrated circuit in its own package, or may be a separate integrated circuit included in a package with the first memory controller circuit 100 or the second memory controller circuit 100A, such as by sharing a common interposer. In addition, a plurality of first memory circuits 125 may optionally be included. For example and without limitation, the first memory circuit 125 may be a GDDR6 memory IC or a Micron NGM memory IC (Micron's next-generation DRAM device), each currently available from Micron Technology, Inc. of 8000 S. Federal Way, Boise, Idaho 83716, US. These GDDR6 devices follow the JEDEC standard, with 16 Gb density and a 64 GB/s peak bandwidth per device.

The second memory circuit 175 (e.g., an SRAM cache) is a memory-side cache and is accessed by physical address. In a representative embodiment, the second memory circuit 175 may be 1 MB in size, with a 256 B line size. The 256 B line size is chosen to minimize the reduction in achievable bandwidth due to ECC support; larger line sizes are possible, based on application simulations. Having a 256 B memory line size also has the benefit of reduced energy compared with smaller line sizes, assuming that most of an accessed line of the second memory circuit 175 is ultimately used. In a representative embodiment, a request from the communication network 150 accesses the second memory circuit 175 with accesses sized from a single byte up to 64 bytes.
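The representative geometry above (a 1 MB cache with 256 B lines) yields 4096 cache lines. The following is a minimal sketch of the resulting physical-address decomposition, assuming, purely for illustration, a direct-mapped organization; the actual associativity and field layout of the second memory circuit 175 are not stated herein.

```c
#include <stdint.h>

#define LINE_SIZE   256u                      /* bytes per cache line        */
#define CACHE_SIZE  (1u << 20)                /* 1 MB second memory circuit  */
#define NUM_LINES   (CACHE_SIZE / LINE_SIZE)  /* 1 MB / 256 B = 4096 lines   */

/* Byte offset within a 256 B line (the low 8 bits of the address). */
static inline uint32_t line_offset(uint64_t pa)
{
    return (uint32_t)(pa & (LINE_SIZE - 1u));
}

/* Line index, assuming a direct-mapped cache (12 bits for 4096 lines). */
static inline uint32_t line_index(uint64_t pa)
{
    return (uint32_t)((pa / LINE_SIZE) % NUM_LINES);
}

/* The remaining high-order bits form the tag compared on each access. */
static inline uint64_t line_tag(uint64_t pa)
{
    return pa / ((uint64_t)LINE_SIZE * NUM_LINES);
}
```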
The tags of the second memory circuit 175 (e.g., an SRAM cache) should be able to handle partial line reads and writes.

The second memory circuit 175 (acting as a cache) facilitates repeated atomic operations on the same memory line. An application will use barrier synchronization operations to determine when all threads of a process have finished processing a section of the application, and an in-memory atomic count operation is used to determine when all threads have entered the barrier. There are as many atomic count operations as there are threads in that section of the application. Performing these atomic operations on the data within the cache allows the barrier counting operations to be completed in only a few clocks per operation, as sketched below.

The second most cost-effective use of the second memory circuit 175 is to cache accesses from the configurable processing circuits 105. In a representative embodiment, a configurable processing circuit 105 does not have a cache, but instead streams data to and from internal memory. The second memory circuit 175 allows accesses to the same cache line to be handled efficiently.

At a high level, and as discussed in more detail below (with respect to the representative embodiment illustrated in Figure 6) with reference to Figures 1-5, the first memory controller circuit 100 and the second memory controller circuit 100A may receive a data read (data load) request from within the computing system 50, 50A, the request having a physical memory address, which is decoded in the data packet decoder circuit 215 of the network communication interface 170 and passed to the second memory control circuit 160. The second memory control circuit 160 determines whether the requested data corresponding to the physical memory address is within the second memory circuit 175 and, if so, the requested data (along with the corresponding request having the requester (source) address) is provided to the first memory control circuit 155 and ultimately to the data packet encoder circuit 220, to encode an outgoing data packet for transmission over the communication network 150. When the requested data corresponding to the physical memory address is not within the second memory circuit 175, the second memory control circuit 160 provides the request (and/or the physical memory address) to the first memory control circuit 155, which accesses and obtains the requested data from the first memory circuit 125. In addition to providing the requested data to the data packet encoder circuit 220 to encode an outgoing data packet for transmission over the communication network 150, the first memory control circuit 155 also provides the data to the write merge circuit 180, which in turn writes the data to the second memory circuit 175.

This additional writing of the requested data to the shared cache (i.e., to the second memory circuit 175) provides a significant latency reduction and is an important novel feature of the representative embodiments. For example, the requested data may be needed more frequently than other stored data, so storing the requested data locally may avoid the latency (i.e., the time interval) otherwise incurred in retrieving the data from the first memory circuit 125.

Essentially, using the second memory circuit 175 as a local cache provides reduced latency for repeatedly accessed memory locations (in the first memory circuit 125).
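To illustrate the barrier counting operation described above, the following is a minimal software sketch using C11 atomics as a stand-in for the in-memory fetch-and-increment operator; in the systems 50, 50A, the counter would reside in a cache line of the second memory circuit 175, so each increment completes in only a few clocks. The barrier shown is a simplified, single-phase sketch, not a reusable implementation.

```c
#include <stdatomic.h>

/* Barrier state: an arrival counter that would live in a line of the
 * second memory circuit 175, plus the expected thread count. */
typedef struct {
    atomic_uint arrived;     /* incremented once per arriving thread */
    unsigned    num_threads; /* threads in this section of the app   */
} barrier_t;

/* Each thread calls barrier_wait() when it finishes its section. */
static void barrier_wait(barrier_t *b)
{
    /* In-memory atomic fetch-and-increment: one per thread. */
    unsigned before = atomic_fetch_add(&b->arrived, 1u);

    if (before + 1u == b->num_threads) {
        /* The last thread to arrive opens the barrier. */
        atomic_store(&b->arrived, 0u);
    } else {
        /* Earlier arrivals spin until the barrier opens. */
        while (atomic_load(&b->arrived) != 0u)
            ;
    }
}
```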
Additionally, the second memory circuit 175 provides a read buffer for sub-memory-line accesses, i.e., accesses to the first memory circuit 125 that do not require an entire memory line of the first memory circuit 125. This use of the second memory circuit 175 is also particularly beneficial for computing elements in the systems 50, 50A that have small data caches or no data caches.

The first memory controller circuit 100 and/or the second memory controller circuit 100A are responsible for optimally controlling the first memory circuit 125 (e.g., GDDR6 RAM), both to load the requested data into the second memory circuit 175 (acting as a cache) immediately following a cache miss, and to store into the first memory circuit 125 the data transferred out of the second memory circuit 175 when a cache line is evicted to make room for other incoming data. A representative embodiment of a GDDR6 device, such as but not limited to the first memory circuit 125, has two independent channels, each running 16 bits wide at 16 GT/s; a single such GDDR6 device can therefore support a peak bandwidth of 64 GB/s (2 channels x 16 bits x 16 GT/s = 512 Gb/s, or 64 GB/s). GDDR6 devices have a channel burst length of 16, resulting in a 32 B data burst, and four bursts from each open row (i.e., 128 bytes) are required to achieve full memory bandwidth. The achievable bandwidth is reduced when some of the bits are used for error correction coding ("ECC").

As part of this operation, the second memory control circuit 160 will reserve a cache line in the second memory circuit 175 by setting a hazard bit (in hardware), so that another process cannot read, overwrite, or modify the cache line. As discussed in greater detail below, this process may also remove, or "evict," the data currently occupying the reserved cache line, which is then provided to the first memory control circuit 155 for writing (storing) to the first memory circuit 125. After the additional writing of the requested data to the second memory circuit 175, the memory hazard clear (reset) circuit 190 clears (resets) any corresponding hazard bits that were set.

Similarly, the first memory controller circuit 100 and the second memory controller circuit 100A may receive a data write (data store) request from within the computing system 50, 50A, the request having a physical memory address, which is decoded in the data packet decoder circuit 215 of the network communication interface 170 and transmitted to the second memory control circuit 160. The second memory control circuit 160 will write (store) the data locally in the second memory circuit 175. As part of this operation, the second memory control circuit 160 may reserve a cache line in the second memory circuit 175 by setting a hazard bit (in hardware), so that the cache line cannot be read by another process while a transition is in progress. As discussed in more detail below, this process may also remove, or "evict," the data currently occupying the reserved cache line, which is likewise written (stored) to the first memory circuit 125.
Following the writing (storing) of the requested data to the second memory circuit 175, the memory hazard clear (reset) circuit 190 will clear (reset) any corresponding hazard bits that were set.

Predetermined types of atomic operations may also be performed, by the predetermined atomic operation circuitry 185 of the atomic and merge operation circuit 165, in response to requests for predetermined or "standard" atomic operations on the requested data, such as comparatively simple, single-cycle integer atomic operations, e.g., fetch-and-increment or compare-and-swap. Such an atomic operation occurs with the same amount of processing as a regular memory read or write operation not involving an atomic operation. For these operations, as discussed in more detail below, the second memory control circuit 160 will reserve a cache line in the second memory circuit 175 by setting a hazard bit (in hardware), so that another process cannot read the cache line while a transition is in progress. The data is obtained from the first memory circuit 125 or the second memory circuit 175 and is provided to the predetermined atomic operation circuitry 185 to perform the requested atomic operation. Following the atomic operation, in addition to providing the resulting data to the data packet encoder circuit 220 to encode an outgoing data packet for transmission over the communication network 150, the predetermined atomic operation circuitry 185 provides the resulting data to the write merge circuit 180, which in turn writes the resulting data to the second memory circuit 175. Following the writing (storing) of the resulting data to the second memory circuit 175, the memory hazard clear (reset) circuit 190 will clear (reset) any corresponding hazard bits that were set.

Custom or programmable atomic operations may be performed by the programmable atomic operation circuitry 135 (which may be part of the first memory controller circuit 100 or of the processor 110A) in response to requests for programmable atomic operations on the requested data. A user may prepare programming code to provide any such custom or programmable atomic operations, subject to the various constraints described below. For example, the programmable atomic operations may be comparatively simple, multi-cycle operations, such as floating-point addition, or comparatively complex, multi-instruction operations, such as a Bloom filter insert. The programmable atomic operations can be the same as or different from the predetermined atomic operations, insofar as they are defined by the user rather than by a system vendor. For these operations, as also discussed in more detail below, the second memory control circuit 160 will reserve a cache line in the second memory circuit 175 by setting a hazard bit (in hardware), so that another process cannot read the cache line while a transition is in progress. The data is obtained from the first memory circuit 125 or the second memory circuit 175 and is provided to the programmable atomic operation circuitry 135 (whether within the first memory controller circuit 100 or in the processor 110A over the dedicated communication link 60) to perform the requested programmable atomic operation.
Following the atomic operation, the programmable atomic operation circuitry 135 provides the resulting data to the network communication interface 170 (whether within the first memory controller circuit 100 or within the processor 110A) to directly encode an outgoing data packet having the resulting data for transmission over the communication network 150. In addition, the programmable atomic operation circuitry 135 provides the resulting data to the second memory control circuit 160, which in turn writes the resulting data to the second memory circuit 175. Following the writing (storing) of the resulting data to the second memory circuit 175, the second memory control circuit 160 will clear (reset) any corresponding hazard bits that were set.

The approach taken for programmable (i.e., "custom") atomic operations is to provide a plurality of generic, custom atomic request types that can be sent through the communication network 150 to the first memory controller circuit 100 and/or the second memory controller circuit 100A from an originating source, such as a processor 110 or another system 50, 50A component. As discussed in greater detail below, the first memory controller circuit 100 and the second memory controller circuit 100A identify the request as a custom atomic request and forward the request to the programmable atomic operation circuitry 135, whether within the first memory controller circuit 100 or within the processor 110A. In a representative embodiment, the programmable atomic operation circuitry 135: (1) is a programmable processing element capable of efficiently performing user-defined atomic operations; (2) can perform loads and stores to memory, arithmetic and logical operations, and control flow decisions; and (3) utilizes the RISC-V ISA with a new set of specialized instructions to facilitate interaction with the first memory controller circuit 100 and/or the second memory controller circuit 100A, or components thereof, to perform the user-defined operations atomically. It should be noted that the RISC-V ISA contains a complete set of instructions supporting high-level language operators and data types. The programmable atomic operation circuitry 135 may utilize the RISC-V ISA, but typically supports a more limited set of instructions and a limited register file size, to reduce the die size of the unit when included within the first memory controller circuit 100.
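As a sketch of how a user-defined operation might look on the programmable atomic operation circuitry 135, the following hypothetical handler implements a saturating fetch-and-add, an operation outside the predetermined set. The intrinsic names below (load_unbuffered, store_and_clear_lock, atomic_return) merely stand in for the specialized instructions described herein and are assumptions for illustration only; the hazard bit for the target cache line is assumed to have already been set by the second memory control circuit 160.

```c
#include <stdint.h>

/* Hypothetical intrinsics standing in for the specialized RISC-V
 * instructions; names and signatures are illustrative only. */
extern uint64_t load_unbuffered(uint64_t pa);                  /* load, bypassing the data buffer  */
extern void store_and_clear_lock(uint64_t pa, uint64_t value); /* store result, clear hazard bit   */
extern void atomic_return(uint64_t result);                    /* emit the response data packet    */

/* User-defined atomic: saturating fetch-and-add. The read-modify-write
 * is atomic because the cache line's hazard bit remains set for its
 * duration. */
void custom_saturating_add(uint64_t pa, uint64_t addend, uint64_t limit)
{
    uint64_t old = load_unbuffered(pa);   /* fetch the current value        */
    uint64_t sum = old + addend;

    if (sum < old || sum > limit)         /* clamp on overflow or at limit  */
        sum = limit;

    store_and_clear_lock(pa, sum);        /* write back; hazard bit cleared */
    atomic_return(old);                   /* response carries the old value */
}
```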
A cache "miss", that is, an incoming request for data that is not stored in the second memory circuit 175 requires access to the first memory circuit 125 to introduce the desired data to the second memory circuit 175 (as a local cache cache). During this first memory circuit 125 access time, the memory row is unavailable for other requests. Memory hazard control circuit 230 maintains a hazard bit table in memory hazard register 260 to indicate which cache lines of second memory circuit 175 are unavailable for access. The memory hazard control circuit 230 (or equivalently, the memory hazard clear (reset) circuit 190) maintains incoming requests attempting to access such cache lines with the hazard bit set until the hazard is cleared. Once the danger is cleared, the request is resent through incoming request multiplexer 245 for processing. The tag address of the cache line of the second memory circuit 175 is hashed to the hazard bit index. The number of dangerous bits is typically chosen to set the probability of dangerous collisions to a sufficiently low level.Network request queue 250 provides a queue for incoming requests (eg, loads, stores) from communications network 150 . Atomic operation return queue 255 provides a queue for resulting data from programmable atomic operations. The incoming request multiplexer 245 selects and optimizes between incoming memory request sources having requests from the memory hazard clearing (reset) circuit 190 , requests from the atomic operation return queue 255 , in priority order requests and requests from the network request queue 250, and provide these requests to the second memory access control circuit 225. Optional delay circuit 235 is a pipeline stage that simulates the delay of a read operation from second memory circuit 175 . Incoming control multiplexer 240 selects from requests requiring access to first memory circuit 125 (i.e., a cache "miss" when the requested data is not found in second memory circuit 175) of incoming network requests, and when the cache line of the second memory circuit 175 contains data to be written to the first memory circuit 125 before being overwritten by other incoming data (from a read or write request). Cache "eviction" request for two memory circuits 175.First memory control circuit 155 includes scheduler circuit 270; one or more first memory bank queues 265; first memory access control circuit 275; one or more queues for output data and request data, ie, second Memory "hit" request queue 280, second memory "miss" request queue 285, second memory "miss" data queue 290, and second memory "hit" data queue 295; request selection multiplexer 305 and data Multiplexer 310 is selected.A first memory bank (request) queue 265 is provided such that each individually managed bank of the first memory circuit 125 has a dedicated bank request queue 265 to hold a request until the request is available on the associated first memory circuit 125 is scheduled on the library. Scheduler circuit 270 selects across bank queue 265 to select a request for an available bank of first memory circuit 125 and provides the request to first memory access control circuit 275 . The first memory access control circuit 275 is coupled to the first memory circuit 125 (eg, DRAM) and includes a state machine and logic circuitry to perform corresponding addressing (eg, row addressing and column addressing) using the physical address of the first memory circuit 125 . 
addressing) of the physical address of the first memory circuit 125 to read (load) from and write (store) to the first memory circuit 125.

The second memory "hit" data queue 295 holds the read data provided directly from the second memory circuit 175 (on communication line 234), i.e., data held in and read from the second memory circuit 175, until the requested data is selected for provision in a response message. The second memory "miss" data queue 290 holds the read data provided from the first memory circuit 125, i.e., data held in and read from the first memory circuit 125 and not present in the second memory circuit 175, until the requested data is selected for provision in a response message. When the requested data is available in the second memory circuit 175, the second memory "hit" request queue 280 holds the request packet information (e.g., the identifier or address of the source requester, used to provide addressing for the response packet) until the request is selected for use in preparing the response message. When the requested data is available in the first memory circuit 125 (and not in the second memory circuit 175), the second memory "miss" request queue 285 holds the request packet information (e.g., the identifier or address of the source requester, used to provide addressing for the response packet) until the request is selected for use in preparing the response message.

The data selection multiplexer 310 selects between data read from the first memory circuit 125 (held in the second memory "miss" data queue 290) and data read from the second memory circuit 175 (held in the second memory "hit" data queue 295). As mentioned above, the selected data is also written to the second memory circuit 175. The request selection multiplexer 305 is then utilized to select the corresponding request data, selecting correspondingly between the request data held in the second memory "miss" request queue 285 and the request data held in the second memory "hit" request queue 280. The read data is then matched with the corresponding request data, so that a return data packet having the requested data can be assembled and transmitted over the communication network to the address of the request source. As discussed in more detail below, there are several different ways for this to occur, using the atomic and merge operation circuit 165 or using the optional outgoing response multiplexer 315.

When included, the outgoing response multiplexer 315 selects between: (1) the read data and request data provided by the data selection multiplexer 310 and the request selection multiplexer 305; and (2) the data generated by the programmable atomic operation circuitry 135 (when included in the atomic and merge operation circuit 165 of the first memory controller circuit 100) and the request data provided by the request selection multiplexer 305. In either case, the read or generated data and the request data are provided through the outgoing response multiplexer 315 to the network communication interface 170, to encode and prepare a response or return data packet for transmission over the communication network 150.
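The hashed hazard-bit table maintained by the memory hazard control circuit 230, described above, can be sketched as follows. The table size and the mixing function below are illustrative assumptions only; as noted, the number of hazard bits is chosen so that the probability of two in-flight lines colliding remains sufficiently low.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_HAZARD_BITS 64u   /* illustrative size of memory hazard register 260 */

static uint64_t hazard_bits;  /* one bit per hashed cache-line tag address */

/* Hash a cache-line tag address to a hazard bit index. */
static unsigned hazard_index(uint64_t tag_address)
{
    tag_address ^= tag_address >> 17;        /* fold high bits downward */
    tag_address *= 0x9E3779B97F4A7C15ull;    /* multiplicative mixing   */
    return (unsigned)(tag_address % NUM_HAZARD_BITS);
}

/* Set the hazard bit when a cache line is reserved. */
static void hazard_set(uint64_t tag_address)
{
    hazard_bits |= 1ull << hazard_index(tag_address);
}

/* Clear the hazard bit once the line has been written; held requests
 * are then re-presented to the incoming request multiplexer 245. */
static void hazard_clear(uint64_t tag_address)
{
    hazard_bits &= ~(1ull << hazard_index(tag_address));
}

/* A request whose line's bit is set is held until the hazard clears. */
static bool hazard_is_set(uint64_t tag_address)
{
    return (hazard_bits >> hazard_index(tag_address)) & 1u;
}
```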
In selected embodiments, the processor 110A, which performs the programmable atomic operations, may itself directly encode and prepare a response or return data packet for transmission over the communication network 150.

The atomic and merge operation circuits 165, 165A include the write merge circuit 180, the predetermined atomic operation circuitry 185, and the memory hazard clear (reset) circuit 190, with the atomic and merge operation circuit 165 additionally including the programmable atomic operation circuitry 135. The write merge circuit 180 receives the read data from the data selection multiplexer 310 and the request data from the request selection multiplexer 305, and merges the request data and the read data (to produce a single unit having the data and the source address for the response or return data packet), which it then provides: (1) to a write port of the second memory circuit 175 (on line 236) (or, equivalently, to the second memory access control circuit 225 for writing to the second memory circuit 175); (2) optionally, to the outgoing response multiplexer 315, for selection and provision to the network communication interface 170 to encode and prepare a response or return data packet for transmission over the communication network 150; or (3) optionally, directly to the network communication interface 170 to encode and prepare a response or return data packet for transmission over the communication network 150. Alternatively, and as another option illustrated in Figure 6C, the outgoing response multiplexer 315 may receive and select the read data directly from the data selection multiplexer 310 and the request data directly from the request selection multiplexer 305, for provision to the network communication interface 170 to encode and prepare a response or return data packet for transmission over the communication network 150.

When data is requested for a predetermined atomic operation, the predetermined atomic operation circuitry 185 receives the request and read data from the write merge circuit 180, or directly from the data selection multiplexer 310 and the request selection multiplexer 305. The atomic operation is performed, and the resulting data is written to (stored in) the second memory circuit 175 using the write merge circuit 180 and is also provided to the outgoing response multiplexer 315, or directly to the network communication interface 170, to encode and prepare a response or return data packet for transmission over the communication network 150.

The predetermined atomic operation circuitry 185 handles the predetermined atomic operations, such as fetch-and-increment or compare-and-swap (e.g., the atomic operations listed in Table 1). These operations perform a simple read-modify-write of a single memory location of 32 bytes or less in size. An atomic memory operation begins with a request packet transmitted over the communication network 150; the request packet has a physical address, an atomic operator type, an operand size, and, optionally, up to 32 bytes of data. The atomic operation performs the read-modify-write to a cache line of the second memory circuit 175, filling the cache if necessary. The atomic operator response can be a simple completion response, or a response with up to 32 bytes of data. Table 1 lists example atomic memory operators in a representative embodiment; the request packet size field specifies the operand width used for the atomic operation.
In representative embodiments, the various processors (e.g., the programmable atomic operation circuitry 135, the processors 110, 110A, the mixed thread processor 115, and the configurable processing circuitry 105) are capable of supporting 32- and 64-bit atomic operations and, in some cases, atomic operations with 16- and 32-byte operands.

Table 1:

Atomic Identifier    Atomic Description
0                    Fetch and AND
1                    Fetch and OR
2                    Fetch and XOR
3                    Fetch and add
4                    Fetch and subtract
5                    Fetch and increment
6                    Fetch and decrement
7                    Fetch and minimum
8                    Fetch and maximum
9                    Fetch and swap
10                   Compare and swap
11-15                Reserved
16-63                Custom (programmable) atomic operations
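A minimal sketch of the read-modify-write semantics of the Table 1 operators follows, assuming a dictionary standing in for the memory and a dispatch table keyed by the atomic identifier; the function and table names are illustrative, not part of the disclosure.

```python
# Illustrative semantics only: each predefined atomic is a simple
# read-modify-write on a single memory location.
PREDEFINED_ATOMICS = {
    0: lambda old, arg: old & arg,   # fetch and AND
    1: lambda old, arg: old | arg,   # fetch and OR
    2: lambda old, arg: old ^ arg,   # fetch and XOR
    3: lambda old, arg: old + arg,   # fetch and add
    4: lambda old, arg: old - arg,   # fetch and subtract
    5: lambda old, arg: old + 1,     # fetch and increment
    6: lambda old, arg: old - 1,     # fetch and decrement
    7: min,                          # fetch and minimum
    8: max,                          # fetch and maximum
    9: lambda old, arg: arg,         # fetch and swap
}

def atomic_rmw(memory, addr, ident, arg, compare=None):
    """Perform a predefined atomic: read, modify, write; return the old value.
    Identifier 10 (compare and swap) writes only when the comparison holds."""
    old = memory[addr]               # the "fetch" portion
    if ident == 10:                  # compare and swap
        if old == compare:
            memory[addr] = arg
    else:
        memory[addr] = PREDEFINED_ATOMICS[ident](old, arg)
    return old
```

For instance, under these assumptions, `atomic_rmw(memory, addr, 3, 5)` returns the prior value and leaves the sum stored, mirroring a fetch-and-add.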
As mentioned above, before the read data is written (stored) to the second memory circuit 175, the hazard bit set for the reserved cache line is cleared by the memory hazard clear (reset) circuit 190. Accordingly, when the write merge circuit 180 receives the request and read data, the memory hazard clear (reset) circuit 190 may transmit a reset or clear signal to the memory hazard control circuit 230 (on communication line 226) to reset or clear the memory hazard bit set for the reserved cache line in the memory hazard register 260. Alternatively, when the memory hazard clear (reset) circuit 190 is included in the memory hazard control circuit 230, the write merge circuit 180 may transmit the reset or clear signal to the memory hazard control circuit 230 (on communication line 226), also to reset or clear the memory hazard bit set for the reserved cache line in the memory hazard register 260. As also mentioned above, resetting or clearing this hazard bit will also release any pending read or write requests involving the designated (or reserved) cache line, providing the pending read or write request to the incoming request multiplexer 245 for selection and processing.

FIGS. 7A, 7B, and 7C (collectively, FIG. 7) are flowcharts of representative methods of receiving and decoding a request and executing a read or load request, in which FIGS. 7A and 7B illustrate a representative method of receiving and decoding a request and executing a read or load request from the first memory circuit, and FIG. 7C illustrates a representative method of executing a read or load request from the second memory circuit. FIGS. 8A, 8B, 8C, and 8D (collectively, FIG. 8) are flowcharts illustrating representative methods of performing an atomic operation as part of an atomic operation request. FIG. 9 is a flowchart illustrating a representative method of performing a data eviction from the second memory circuit as part of a read (or load) request or as part of a write (or store) request. FIG. 10 is a flowchart of a representative method of performing a write or store request.

As mentioned above, the first memory controller circuit 100 and/or the second memory controller circuit 100A may receive a memory read (or load) request or a memory write (or store) request transmitted from the communication network 150. Table 2 shows a list of example read, write, and atomic operations and corresponding requests in a representative embodiment (where "..." indicates that requests for other operations may be specified using the immediately preceding request type and pattern, such as, but not limited to, an AmoXor request for a fetch and XOR atomic operation or an AmoAnd request for a fetch and AND atomic operation).

Table 2:

Table 3 shows a list of example responses to read, write, and atomic requests from the first memory controller circuit 100 and/or the second memory controller circuit 100A, transmitted as data packets over the communication network 150 in a representative embodiment.

Table 3:

It should be noted that the source entity or device (i.e., the entity or device issuing the read or write request, such as any of the various processors (e.g., the processor 110), the mixed thread processor 115, or the configurable processing circuit 105) typically does not have, and does not require, any information as to whether the requested read data or the requested write data is held in the first memory circuit 125 or the second memory circuit 175, and may simply generate a read or write request to memory and transmit the request over the communication network 150 to the first memory controller circuit 100 and/or the second memory controller circuit 100A.

Referring to FIG. 7, a representative method of receiving and decoding a request and performing a read or load request begins with receiving a request (e.g., a request from Table 2) (start step 400). The packet decoder circuit 215 is used to decode the received request, determine the request type (read, write, or atomic operation), and place the request in the corresponding queue (the network request queue 250 or the atomic operation request queue 255) (step 402). In another representative embodiment, if the packet decoder circuit 215 is not included, the request is placed in a single request queue (a combined network request queue 250 and atomic operation request queue 255), and the second memory access control circuit 225 performs the decoding of the received request and the determination of the request type of step 402. The request is selected from the queue by the incoming request multiplexer 245 (step 404), and when the request is a read request (step 406), the second memory access control circuit 225 determines whether the requested data is stored in the second memory circuit 175 (step 408). When the request is not a read request (in step 406), the second memory access control circuit 225 determines whether the request is a write request (step 410) and, if so, proceeds to step 540 illustrated and discussed with reference to FIG. 10. When the received request is neither a read request nor a write request from the network request queue 250, the request is an atomic operation request from the atomic operation request queue 255, and the second memory access control circuit 225 proceeds to step 456 illustrated and discussed with reference to FIG. 8.

It should be noted that steps 400, 402, 404, and 406 or 410 generally apply to all read, write, and/or atomic operations, not just the read operations illustrated in FIG. 7. For a write (or store) operation, the method will have completed steps 400, 402, 404, and 410, having determined that the request selected from the network request queue 250 is a write request. It should also be noted that steps 406 and 410, determining whether a request is a read request or a write request, may occur in any order; completion of step 406 is therefore not required for the initiation of a write operation. Similarly, the determination of whether a request is an atomic operation request may occur as a separate step (not illustrated), rather than merely by a process of elimination when the request is not a read request and is not a write request. Additionally, only two of the steps of determining whether a request is a read request, a write request, or an atomic operation request are required, with any third request type automatically determined by a process of elimination when the request is not the first request type and is not the second request type. All such variations are considered equivalent and within the scope of this disclosure.
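The decode-and-dispatch of steps 402-410 can be summarized as follows. The request object, its `kind` field, and the queue names are assumptions for illustration, mirroring the network request queue 250 and the atomic operation request queue 255.

```python
# A minimal sketch of steps 402-410, assuming a request object with a
# `kind` attribute.
def dispatch(request, network_request_q, atomic_request_q):
    # Step 402: decode and place the request in the corresponding queue.
    if request.kind in ("read", "write"):
        network_request_q.append(request)
    else:
        atomic_request_q.append(request)

def route(request):
    # Steps 406 and 410: only two type checks are required; the third
    # request type follows by a process of elimination.
    if request.kind == "read":
        return "read path (step 408, FIG. 7)"
    if request.kind == "write":
        return "write path (step 540, FIG. 10)"
    return "atomic path (step 456, FIG. 8)"
```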
When the second memory access control circuit 225 has determined (in step 408) that the requested data is not stored in the second memory circuit 175, the second memory access control circuit 225 selects a cache line in the second memory circuit 175 (step 411) and, using the memory hazard control circuit 230, determines whether that particular cache line in the second memory circuit 175 has its hazard bit set in the memory hazard register 260 (step 412). If the hazard bit is set for the cache line in the second memory circuit 175, the second memory access control circuit 225 determines whether another cache line (which does not have its hazard bit set) is available (step 414) and, if so, selects the available cache line in the second memory circuit 175 (step 416). If there are no available cache lines in the second memory circuit 175, i.e., all cache lines have their hazard bits set, the second memory access control circuit 225 queues the read request in the memory hazard control circuit 230 (step 418) until the hazard bit has been reset or cleared for a cache line in the second memory circuit 175 (step 420), and the second memory access control circuit 225 then selects the cache line whose hazard bit was reset or cleared, returning to step 416.

When the cache line in the second memory circuit 175 has been selected in step 411 or step 416, the second memory access control circuit 225 determines whether data is already stored in the selected cache line (step 422) and, if data is present in the cache line, performs a data eviction process (step 423) (i.e., performs steps 522-534 illustrated and discussed with reference to FIG. 9 for eviction of data from the second memory circuit 175).

When the selected cache line does not have stored data (step 422), or when the data eviction process has been completed (step 423), the second memory access control circuit 225 generates a signal to the memory hazard control circuit 230 to set the hazard bit for the selected cache line in the second memory circuit 175 (step 424), blocking other requests from accessing the same cache line, because the data in the cache line will be in the process of being transferred and the cache line should not be accessed by another read or write process, thus providing memory coherence.

Because there is a cache "miss," the second memory access control circuit 225 then passes the read request to the optional delay circuit 235 (to match the amount of time spent by the second memory access control circuit 225 in accessing the second memory circuit 175 and determining the cache miss) (or, as another option, passes the request directly to the inflow control multiplexer 240), such that the read request is then selected by the inflow control multiplexer 240 and queued in the first memory bank queue 265 as a read request to access the first memory circuit 125 (step 426). The scheduler circuit 270 ultimately selects the read request from the first memory bank queue 265 and schedules (or initiates) the access to the memory bank of the first memory circuit 125 (step 428). The requested data is read or obtained from the first memory circuit 125 and provided to the second memory "miss" data queue 290, and the corresponding request (or request data, such as the source address) is provided to the second memory "miss" request queue 285 (step 430).
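The cache-line selection of steps 411-424 above can be sketched as follows, assuming a simple list of per-line hazard bits standing in for the memory hazard register 260; the function names are hypothetical.

```python
# Sketch of steps 411-424 under assumed data structures.
def select_cache_line(hazard_bits, preferred):
    """Return an available line index, or None when every line is hazarded
    (in which case the request is queued until a hazard bit clears)."""
    if not hazard_bits[preferred]:              # step 412: preferred line free?
        return preferred
    for line, busy in enumerate(hazard_bits):   # steps 414/416: another line?
        if not busy:
            return line
    return None                                 # step 418: queue and wait

def reserve(hazard_bits, line):
    # Step 424: set the hazard bit so no other request touches this line
    # while its data is in flight, providing memory coherence.
    hazard_bits[line] = True
```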
Using the data selection multiplexer 310 and the request selection multiplexer 305, the read data and the corresponding request are selected and paired together (step 432), and the write merge circuit 180 then writes the read data to the selected cache line in the second memory circuit 175 (via communication line 236) (or, equivalently, provides it to the second memory access control circuit 225 to write to the second memory circuit 175) (step 434). As used herein, "pairing" the read data and the corresponding request together simply means using the data selection multiplexer 310 and the request selection multiplexer 305 to select or match the read data and the corresponding request together, so that both the data and the request are available together or simultaneously, such as for atomic operations or for preparing an outgoing response data packet (i.e., to avoid read data being paired with, and mistakenly sent to, the wrong request source). By writing the read data to the selected cache line in the second memory circuit 175, the previously set hazard bit is reset or cleared for the selected cache line (step 436). Using the read data and the corresponding request, a read response data packet (e.g., a response from Table 3) with the requested read data is prepared and, typically, transmitted over the communication network 150 to the source address (step 438), and the read operation from the first memory circuit 125 may end (return to step 440).

When the second memory access control circuit 225 has determined (in step 408) that the requested data is stored in a cache line in the second memory circuit 175 (i.e., a cache hit), then, using the memory hazard control circuit 230, the second memory access control circuit 225 determines whether that particular cache line in the second memory circuit 175 has its hazard bit set in the memory hazard register 260 (step 442). If the hazard bit is set for the cache line in the second memory circuit 175, the second memory access control circuit 225 queues the read request in the memory hazard control circuit 230 (step 444) until the hazard bit has been reset or cleared for the cache line in the second memory circuit 175 (step 446).

When no hazard bit is set following step 442 or step 446, the second memory access control circuit 225 reads or obtains the requested data from the cache line and transfers it directly to the second memory "hit" data queue 295 (step 448). As part of step 448, because there is a cache "hit," the second memory access control circuit 225 also transmits the read request to the optional delay circuit 235 (to match the amount of time spent by the second memory access control circuit 225 in accessing the second memory circuit 175 and obtaining the data), and the corresponding request (or request data, such as the source address) is provided to the second memory "hit" request queue 280. Using the data selection multiplexer 310 and the request selection multiplexer 305, the read data and the corresponding request are selected and paired together (step 450).
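The miss-path completion of steps 432-438 (pairing, cache fill, hazard clear, and response assembly) might be summarized as follows, again under the illustrative data structures assumed in the earlier sketches.

```python
# Illustrative completion of the miss path (steps 432-438).
def complete_miss(read_data, request, cache, hazard_bits, line):
    cache[line] = read_data    # step 434: write the fill into the cache line
    hazard_bits[line] = False  # step 436: reset the previously set hazard bit
    # Step 438: the response is addressed using the source identifier carried
    # in the paired request data, so data cannot return to the wrong source.
    return {"dest": request["source"], "payload": read_data}
```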
Using the read data and the corresponding request, a read response data packet (e.g., a response from Table 3) with the requested read data is prepared and, typically, transmitted over the communication network 150 to the source address (step 452), and the read operation from the second memory circuit 175 may end (return to step 454).

As mentioned above, for any read operation or predetermined atomic operation from the first memory circuit 125 or the second memory circuit 175, there are several options available for preparing and transmitting the read response data packet, such as: (1) using the write merge circuit 180, the read data and the corresponding request are provided to the outgoing response multiplexer 315 for selection and provision to the network communication interface 170 to encode and prepare a response or return data packet for transmission over the communication network 150; or (2) optionally, using the write merge circuit 180, the read data and the corresponding request are provided to the network communication interface 170 to encode and prepare a response or return data packet for transmission over the communication network 150; or (3) as another option, the outgoing response multiplexer 315 may receive and select the read data directly from the data selection multiplexer 310 and the request data directly from the request selection multiplexer 305, for provision to the network communication interface 170 to encode and prepare a response or return data packet for transmission over the communication network 150.

As mentioned above, incoming requests to the first memory controller circuit 100 and/or the second memory controller circuit 100A may be for atomic operations, which are, in essence, a read request to obtain the operand data, followed by an atomic operation on the operand data, followed by a write request to store the resulting data to memory. The read operation portion of the atomic operation request typically proceeds through step 432 for a cache miss, or through step 450 for a cache hit, that is, obtaining the data from the first memory circuit 125 or the second memory circuit 175 and providing the data and the requests into the corresponding queues 280, 285, 290, 295, following the previously discussed read operations, with the additional step of setting the hazard bit for the selected cache line of the second memory circuit 175. For clarity, these steps are also discussed below.

Referring to FIG. 8, following step 410, when the request is an atomic operation request, the second memory access control circuit 225 determines whether the requested operand data is stored in the second memory circuit 175 (step 456). When the second memory access control circuit 225 has determined (in step 456) that the requested data is stored in a cache line in the second memory circuit 175 (i.e., a cache hit), then, using the memory hazard control circuit 230, the second memory access control circuit 225 determines whether that particular cache line in the second memory circuit 175 has its hazard bit set in the memory hazard register 260 (step 458). If the hazard bit is set for the cache line in the second memory circuit 175, the second memory access control circuit 225 queues the atomic operation request in the memory hazard control circuit 230 (step 460) until the hazard bit has been reset or cleared for the cache line in the second memory circuit 175 (step 462).
When no hazard bit is set following step 458 or step 462, the second memory access control circuit 225 sets the hazard bit for the cache line in the second memory circuit 175 (because the data in the cache line will be updated) (step 464), and the requested data is obtained from the cache line and transferred directly to the second memory "hit" data queue 295 (step 466), performing the "fetch" portion of the atomic operation (e.g., of a fetch and AND or a fetch and swap atomic operation). As part of step 466, because there is a cache "hit," the second memory access control circuit 225 also passes the atomic operation request to the optional delay circuit 235 (to match the amount of time spent by the second memory access control circuit 225 in accessing the second memory circuit 175 and obtaining the data), and the corresponding request (or request data, such as the source address) is provided to the second memory "hit" request queue 280. Using the data selection multiplexer 310 and the request selection multiplexer 305, the read operand data and the corresponding atomic operation request are selected and paired together (step 468).

When the second memory access control circuit 225 has determined (in step 456) that the requested data is not stored in the second memory circuit 175, the second memory access control circuit 225 selects a cache line in the second memory circuit 175 (step 470) and, using the memory hazard control circuit 230, determines whether that particular cache line in the second memory circuit 175 has its hazard bit set in the memory hazard register 260 (step 472). If the hazard bit is set for the cache line in the second memory circuit 175, the second memory access control circuit 225 determines whether another cache line (which does not have its hazard bit set) is available (step 474) and, if so, selects the available cache line in the second memory circuit 175 (step 476). If there are no available cache lines in the second memory circuit 175, i.e., all cache lines have their hazard bits set, the second memory access control circuit 225 queues the atomic operation request in the memory hazard control circuit 230 (step 478) until the hazard bit has been reset or cleared for a cache line in the second memory circuit 175 (step 480), and the second memory access control circuit 225 then selects the cache line whose hazard bit was reset or cleared, returning to step 476.

When the cache line in the second memory circuit 175 has been selected in step 470 or step 476, the second memory access control circuit 225 determines whether data is already stored in the selected cache line (step 482) and, if data is present in the cache line, performs a data eviction process (step 484) (i.e., performs steps 522-534 illustrated and discussed with reference to FIG. 9 for eviction of data from the second memory circuit 175).

When the selected cache line does not have stored data (step 482), or when the data eviction process has been completed (step 484), the second memory access control circuit 225 generates a signal to the memory hazard control circuit 230 to
set the hazard bit for the selected cache line in the second memory circuit 175 (step 486), blocking other requests from accessing the same cache line, because the data in the cache line will be in the process of being transferred and the cache line should not be accessed by another read or write process, thus providing memory coherence.

Because there is a cache "miss," the second memory access control circuit 225 then passes the atomic operation request to the optional delay circuit 235 (to match the amount of time spent by the second memory access control circuit 225 in accessing the second memory circuit 175 and determining the cache miss) (or, as another option, passes the request directly to the inflow control multiplexer 240), such that the atomic operation request is then selected by the inflow control multiplexer 240 and queued in the first memory bank queue 265 as a request to access the first memory circuit 125 (step 488). The scheduler circuit 270 ultimately selects the atomic operation request from the first memory bank queue 265 and schedules (or initiates) the access to the memory bank of the first memory circuit 125 (step 490). The requested data is obtained (read) from the first memory circuit 125 and provided to the second memory "miss" data queue 290, and the corresponding atomic operation request (containing the request data, such as the source address) is provided to the second memory "miss" request queue 285 (step 492), thereby performing the "fetch" portion of the atomic operation. Using the data selection multiplexer 310 and the request selection multiplexer 305, the read data and the corresponding request are selected and paired together (step 494).

When processing an atomic operation request, following step 468 or step 494, there is an available cache line in the second memory circuit 175 that has been reserved (i.e., its hazard bit is set), the operand data has been read (obtained) from either the second memory circuit 175 or the first memory circuit 125, and the read data has been paired or matched with its corresponding atomic operation request. When the atomic operation request is for a predetermined atomic operation (step 496), the data selection multiplexer 310 and the request selection multiplexer 305 transmit the data and the request to the predetermined atomic operation circuitry 185 (step 498), and the predetermined atomic operation circuitry 185 performs the requested atomic operation to produce the resulting data (step 500), such as fetch and AND, fetch and OR, fetch and XOR, fetch and add, fetch and subtract, fetch and increment, fetch and decrement, fetch and minimum, fetch and maximum, fetch and swap, or compare and swap. The resulting data is written to the selected cache line in the second memory circuit 175 (via communication line 236) using the write merge circuit 180 (or, equivalently, provided to the second memory access control circuit 225 to write to the second memory circuit 175) (step 502). By writing the resulting data to the selected cache line in the second memory circuit 175, the previously set hazard bit is reset or cleared for the selected cache line, using the memory hazard clear (reset) circuit 190 or the memory hazard control circuit 230 (step 504).
Using the resulting data and the corresponding atomic operation request, an atomic operation response data packet (e.g., a response from Table 3) with the requested resulting data is prepared and, typically, transmitted over the communication network 150, via the network communication interface 170, to the source address (provided in the request) (step 506), and the predetermined atomic operation can end (return to step 508).

When, in step 496, the atomic operation request is not for a predetermined atomic operation, that is, when it is for a programmable or custom atomic operation, the atomic operation request and the read data are transmitted to the programmable atomic operation circuitry 135, as part of a "job descriptor" discussed in more detail below, on the communication line or bus 60 to the processor 110A, or on the communication line or bus 60A to the programmable atomic operation circuitry 135 in the atomic and merge operation circuit 165 (step 510). The programmable atomic operation circuitry 135 performs the requested programmable atomic operation to generate the resulting data (step 512), as discussed in greater detail below, and communicates the resulting data, with the programmable atomic operation request, to the atomic operation request queue 255 (step 514). The programmable atomic operation request is selected using the incoming request multiplexer 245, and the resulting data is written to the selected cache line (essentially a write operation) (step 516). By writing the resulting data to the selected cache line in the second memory circuit 175, the previously set hazard bit is reset or cleared for the selected cache line, using the memory hazard clear (reset) circuit 190 or the memory hazard control circuit 230 (step 518). Using the resulting data and the corresponding programmable atomic operation request, a programmable atomic operation response data packet (e.g., a response from Table 3) having the requested resulting data is prepared and, typically, transmitted over the communication network 150 to the source address (step 520), and the programmable atomic operation can end (return to step 505).
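Pulling the FIG. 8 flow together, the life of an atomic operation request — reserve a line, fetch the operand, compute (by the predetermined or the programmable circuitry), write back, clear the hazard, and respond — might be sketched as follows. The `compute` callback and the request attributes are assumptions for illustration, and the sketch omits the queueing and eviction details discussed above.

```python
# Hypothetical end-to-end handler; `req` is assumed to carry Table 2 style
# fields, and `compute` stands in for either the predetermined atomic
# operation circuitry 185 or the programmable atomic operation circuitry 135.
def do_atomic_request(req, memory, hazard_bits, line, compute):
    hazard_bits[line] = True                        # reserve the line (steps 464/486)
    old = memory[req.physical_address]              # "fetch" portion (steps 466/492)
    result = compute(old, req.data)                 # modify (steps 500/512)
    memory[req.physical_address] = result           # write back (steps 502/516)
    hazard_bits[line] = False                       # clear the hazard (steps 504/518)
    return {"dest": req.source, "payload": result}  # respond (steps 506/520)
```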
A semiconductor device (200) is described herein. The semiconductor device includes a substrate and a collector region (220) in the substrate. The semiconductor device also includes a plurality of emitter regions (216) in the substrate, each of the plurality of emitter regions separate from the others, wherein the plurality of emitter regions is disposed in an area bounded by the collector region.
CLAIMS
What is claimed is:
1. A semiconductor device, comprising: a substrate; a collector region in the substrate; and a plurality of emitter regions in the substrate, each of the plurality of emitter regions separate from each other, wherein the plurality of emitter regions is disposed in an area bounded by the collector region.
2. The semiconductor device of claim 1, wherein the collector region is shaped to form a ring in the substrate.
3. The semiconductor device of claim 1, wherein the plurality of emitter regions and the collector region are arranged to form a finger arrangement.
4. The semiconductor device of claim 3, wherein each of the plurality of emitter regions is disposed adjacent to another of the plurality of emitter regions in a row.
5. The semiconductor device of claim 1, wherein the collector region comprises a first side disposed adjacent to a first side of each of the plurality of emitter regions and a second side disposed adjacent to a second side of each of the plurality of emitter regions.
6. The semiconductor device of claim 1, wherein the semiconductor device comprises a PNP device and the collector region and each of the plurality of emitter regions comprise p-type doped material.
7. The semiconductor device of claim 1, wherein the semiconductor device comprises an NPN device and the collector region and each of the plurality of emitter regions comprise n-type doped material.
8. The semiconductor device of claim 1, further comprising a base contact region disposed on the substrate and on an opposite side of the collector region from the plurality of emitter regions.
9. The semiconductor device of claim 1, further comprising at least one trench isolating the collector region and the plurality of emitter regions.
10. The semiconductor device of claim 1, further comprising a field plate surrounding the plurality of emitter regions.
11. The semiconductor device of claim 1, further comprising a base contact region between the collector region and the plurality of emitter regions.
12. The semiconductor device of claim 1, wherein the substrate comprises an n-type buried layer and an epitaxial layer disposed on the n-type buried layer.
13. The semiconductor device of claim 12, wherein the epitaxial layer comprises n-type doped material.
14. The semiconductor device of claim 12, wherein the epitaxial layer comprises p-type doped material.
15. The semiconductor device of claim 1, further comprising a p-type isolating material and a p-type buried layer.
16. The semiconductor device of claim 1, wherein at least one of the emitter regions comprises a circular shape.
17. The semiconductor device of claim 1, wherein at least one emitter region of the plurality of emitter regions is coupled to an emitter terminal.
18. The semiconductor device of claim 1, wherein at least one emitter region of the plurality of emitter regions is uncoupled from an emitter terminal.
19. The semiconductor device of claim 1, wherein at least two emitter regions of the plurality of emitter regions are coupled to a same emitter terminal.
20. A method for manufacturing a semiconductor device, comprising: forming a collector region in an epitaxial layer of a semiconductor substrate; and forming a plurality of emitter regions in the epitaxial layer of the semiconductor substrate, wherein the plurality of emitter regions are disposed in an area bounded by the collector region.
21. The method of claim 20, further comprising: forming at least one base contact region in the epitaxial layer of the semiconductor substrate, wherein the at least one base contact region is disposed adjacent to at least one of the plurality of emitter regions and adjacent to the collector region.
22. A bipolar transistor, comprising: a collector region shaped to form a ring; and a first emitter region and a second emitter region, wherein the first emitter region and the second emitter region are disposed on a semiconductor substrate in an area inside the ring formed by the collector region.
REPEATED EMITTER FOR LATERAL BIPOLAR TRANSISTOR

[0001] Examples of the present disclosure generally relate to bipolar transistors and, in particular, to manufacturing bipolar transistors.

BACKGROUND

[0002] Bipolar transistors are commonly used in semiconductor devices, especially for high-speed operation and large drive current applications. The bipolar transistor is formed by a pair of P-N junctions, including an emitter-base junction and a collector-base junction. An NPN bipolar junction transistor has a thin region of p-type material providing the base region between two regions of n-type material providing the emitter and collector regions. A PNP bipolar junction transistor has a thin region of n-type material providing the base region between two regions of p-type material constituting the emitter and collector regions. The movement of electrical charge carriers which produces electrical current flow between the collector region and the emitter region is controlled by an applied voltage across the emitter-base junction.

[0003] A bipolar transistor 100 is shown in FIG. 1. The bipolar transistor 100 includes an n-type buried layer (NBL) 102 formed over a substrate 101. The bipolar transistor 100 also includes an epitaxial layer 104 grown over the NBL 102. The collector region 120 of the bipolar transistor 100 is a doped region of one conductivity type in the epitaxial layer 104, and the base contact region 118 is formed by doped regions of the opposite conductivity type than that of the collector region 120. The base region can be formed by doped (e.g., n-type) regions of the epitaxial layer 104 disposed between the emitter region 116 and the collector region 120, and the base contact region 118 is connected to the base region. The emitter region 116 is a doped region of the same conductivity type as the collector region 120 and is disposed adjacent to the collector region 120. The bipolar transistor 100 also includes deep trenches 128, 130 to encircle the transistor 100 and isolate the bipolar transistor 100.

SUMMARY

[0004] This Summary is provided to comply with 37 C.F.R. §1.73, requiring a summary of the invention briefly indicating the nature and substance of the invention. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.

[0005] According to some examples, a semiconductor device includes a substrate, a collector region in the substrate, and a plurality of emitter regions in the substrate. Each of the plurality of emitter regions is separate from the others, and the plurality of emitter regions is disposed in an area bounded by the collector region.

[0006] According to some examples, a method for manufacturing a semiconductor device is described. The method includes forming a collector region in an epitaxial layer of a semiconductor substrate. The method includes forming a plurality of emitter regions in the epitaxial layer of the semiconductor substrate. The plurality of emitter regions are disposed in an area bounded by the collector region.

[0007] According to some examples, a bipolar transistor is described. The bipolar transistor includes a collector region, and a first emitter region and a second emitter region.
The first emitter region and the second emitter region are disposed on a semiconductor substrate in a ring-shaped area formed by the collector region.

[0008] These and other aspects may be understood with reference to the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example implementations, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical example implementations and are therefore not to be considered limiting of its scope.

[0010] FIG. 1 is a cross-sectional diagram of a bipolar transistor.

[0011] FIG. 2 is a cross-sectional diagram of a bipolar transistor having multiple emitter regions, according to some examples.

[0012] FIG. 3 is a top view of a bipolar transistor having multiple emitter regions, according to some examples.

[0013] FIG. 4 is a top view of a bipolar transistor having multiple emitter regions, according to some examples.

[0014] FIG. 5 is a graph illustrating the change in the current gain as a function of the emitter area, according to some examples.

[0015] FIG. 6 is a graph illustrating the change in the current gain as a function of the emitter area, according to some examples.

[0016] FIG. 7 is a graph illustrating the current gain as a function of collector current density for devices with different numbers of fingers, according to some examples.

[0017] FIG. 8 is a top view diagram of a bipolar transistor having multiple emitter regions, according to some examples.

[0018] FIG. 9 is a flow diagram illustrating manufacturing a bipolar transistor with multiple emitter regions, according to some examples.

[0019] To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0020] The present invention is described with reference to the attached figures, wherein like reference numerals are used throughout the figures to designate similar or equivalent elements. The figures are not drawn to scale and they are provided merely to illustrate the instant invention. Several aspects of the invention are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. One having ordinary skill in the relevant art, however, will readily recognize that the invention can be practiced without one or more of the specific details or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the invention. The present invention is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with the present invention.

[0021] Vertical bipolar transistors can be designed to handle higher current by increasing the emitter length and/or the number of groups of emitter regions (called "fingers").
However, lateral bipolar transistors are often constructed with circular and/or small square emitters to maximize the emitter perimeter to emitter area ratio. Circular and/or small square emitters maximize the collector current, which is proportional to the emitter perimeter, vis-a-vis the base current, which has a component proportional to the emitter area. Because of the need to maximize perimeter to area ratios, the emitter regions of lateral bipolar transistors cannot simply be scaled to achieve larger current handling capabilities.

[0022] The need to maintain the emitter perimeter to area ratio to maintain bipolar transistor performance often necessitates a large array of repeated units of lateral bipolar transistors to drive large currents, which consumes large silicon area. Additionally, multiple lateral bipolar transistors impact the cost of the device.

[0023] The area penalty of lateral bipolar unit repetition can be mitigated by integrating the emitter regions in a collector island region. For example, the collector regions can surround each emitter region. While integrating the emitters in the collector island region improves the area density, this combination of the emitters and the collector island region still involves a significant area penalty, since each emitter region is separated from every other emitter region by the required spacing to each collector region. For high voltage devices, this spacing of the combination of the emitters and the collector island region can be on the order of 10 µm, resulting in a minimum emitter-to-emitter spacing of 20 µm.

[0024] Examples of the present disclosure involve retaining the circular emitter layout of the lateral bipolar transistors to maximize the gain. Examples of the present disclosure involve including multiple emitters associated with a collector region in lateral bipolar transistors to maintain the total emitter perimeter to area ratio. For example, emitters are repeated in a rectangular collector ring to minimize the loss of current gain with larger emitter area. The decrease in current gain with multiple emitter regions is 30-40% less than with the other solutions. Also, the collector does not surround every individual emitter, allowing the emitters to be spaced close together and reducing the silicon area penalty. The multiple emitters are formed in an area bounded by the collector region, with no portion of the collector region extending between the multiple emitters for that collector region.

[0025] FIG. 2 shows an example cross-sectional view of an integrated circuit 200 including a bipolar transistor 201 according to an embodiment. The bipolar transistor 201 is formed on a substrate (not illustrated). In some examples, the substrate may be formed using silicon. The substrate may be doped with p-type dopants (e.g., group III elements of the periodic table).

[0026] In some examples, the bipolar transistor 201 includes a buried layer 202. FIG. 2 illustrates an n-type buried layer (NBL) 202. The NBL 202 may be formed by implanting n-type dopants in the substrate. The NBL 202 isolates active circuitry of the bipolar transistor 201 from the underlying substrate, effectively eliminating parasitic nonlinear junction capacitances to the substrate and reducing collector-to-substrate capacitances. The doping concentration of the NBL 202 can have a range of 1e17 to 1e19 atoms/cm³, for example, about 5e18 atoms/cm³.
While the example bipolar transistor 201 includes an NBL 202, other example bipolar transistors as described herein can include a p-type buried layer.

[0027] The bipolar transistor 201 includes an epitaxial layer 204. The epitaxial layer 204 can be formed on the NBL 202 and, in some cases, formed in direct contact with the NBL 202. The epitaxial layer 204 includes a top side and a bottom side. The epitaxial layer 204 is deposited, defined, and doped with an impurity of the conductivity type matching the base contact regions 218 disposed on top of the epitaxial layer 204. The doping concentration of the epitaxial layer can have a range of 5e14 to 5e16 atoms/cm³, for example, 1e15 atoms/cm³. In some examples, the substrate of the bipolar transistor 201 can include the epitaxial layer 204 and the NBL 202.

[0028] The bipolar transistor 201 includes a plurality of emitter regions 216a, 216b (collectively, emitter regions 216) formed in the top side of the epitaxial layer 204. Each of the emitter regions 216 extends downward into the epitaxial layer 204 to a particular depth (not illustrated), and each of the emitter regions 216 is separate and discrete. Each of the emitter regions 216 can have its own doping concentration and, in some examples, the emitter regions can share the same doping concentration. The doping concentration of the emitter regions 216 can have a range of 1e17 to 1e20 atoms/cm³, for example, 1e19 atoms/cm³. Each of the emitter regions 216 abuts the top side of the epitaxial layer 204 of the bipolar transistor 201. The emitter regions 216 can have a variety of shapes, including square, rectangular, and/or circular. The bipolar transistor 201 can include any combination of two or more emitter regions 216 of any shape (square, rectangular, and/or circular) positioned to maximize the total perimeter of the emitters exposed to the corresponding perimeter of the collector region 220. Each of the emitter regions 216 may be surrounded by and shorted to a poly field plate 222, which increases the breakdown voltage between the emitter regions 216 and the base region 218. Similarly, the collector region 220 may be shorted to a poly field plate 224, which increases the breakdown voltage between the collector region 220 and the base region 218. While not fully shown in FIG. 2, the collector region 220 surrounds the emitter regions 216; FIG. 2 illustrates a cross-section in which the collector region 220 appears only at the left and right sides of the emitter regions 216. As illustrated in the top view of FIG. 3, the collector region 220 surrounds the emitter regions 216 and can enable placement of the emitter regions 216 in a single row. The single row placement of the emitter regions 216 allows each of the emitter regions 216 to face a portion of the collector region, thus enabling current conduction between the emitter and the collector.

[0029] By way of example, the bipolar transistor 201 of FIG. 2 includes two emitter regions 216: a first emitter region 216a and a second emitter region 216b. However, the bipolar transistor 201 may include any number of emitter regions 216 according to examples described herein.

[0030] In the example of FIG. 2, the first emitter region 216a includes a first lateral side spaced from and facing the first base contact region 218a, as well as an opposite second lateral side (on the right in FIG. 2) spaced from and facing the second emitter region 216b.
Similarly, the second emitter region 216b includes a first lateral side spaced from and facing the second lateral side of the first emitter region 216a, as well as an opposite second lateral side (on the right in FIG. 2) spaced from and facing a second base contact region 218b. Any additional emitter regions can be disposed between the first emitter region 216a and the second emitter region 216b. For example, a third emitter region can be disposed adjacent to both the first emitter region 216a and the second emitter region 216b: the third emitter region is spaced from and facing the second lateral side of the first emitter region 216a and is spaced from and facing the first lateral side of the second emitter region 216b. In some examples, the emitter regions 216 may be arranged in a row such that each emitter region is adjacent to another emitter region. By arranging the emitter regions 216 in a row, more of the emitter regions 216 are exposed to the collector region 220. Furthermore, placing multiple emitter regions 216 inside the perimeter of the same collector region 220 maintains the advantage of a larger ratio of emitter implant perimeter to emitter active area. Multiple emitter regions 216 increase the total combined perimeter of the emitters within a given collector region 220 versus a single emitter perimeter of the same total area. This provides a higher ratio of total perimeter length for the emitter regions 216 (i.e., the perimeter length of emitter region 216a + the perimeter length of emitter region 216b + . . . ) within a given collector region 220 to the emitter area within the given collector region 220. A higher ratio of emitter perimeter to emitter area results in higher gain.
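A back-of-the-envelope calculation illustrates why multiple small emitters help: for a fixed total emitter area, n equal circular emitters provide sqrt(n) times the total perimeter of a single circular emitter of the same combined area, and hence a proportionally higher perimeter-to-area ratio. The radii and areas below are hypothetical, chosen only for illustration.

```python
import math

def perimeter_to_area(n_emitters, total_area):
    # Each of n equal circular emitters carries area total_area / n.
    r = math.sqrt(total_area / (n_emitters * math.pi))  # radius of each emitter
    total_perimeter = n_emitters * 2 * math.pi * r
    return total_perimeter / total_area

area = 100.0  # um^2, illustrative
print(perimeter_to_area(1, area))  # one large emitter
print(perimeter_to_area(5, area))  # five small emitters: sqrt(5) x the ratio
```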
[0031] In some examples, base contact regions 218a, 218b (collectively, base contact regions 218) are formed in the epitaxial layer 204 of the bipolar transistor 201. The base contact regions 218 extend downward into the epitaxial layer 204 from the top surface of the epitaxial layer 204. The doping concentration of the base contact regions 218 can have a range of about 1e19 to 1e20 atoms/cm³, for example, 1e19 atoms/cm³. In some examples, the base contact regions 218 have a doping concentration different from that of the epitaxial layer 204. For example, the base contact regions 218 can have a doping concentration greater than the doping concentration of the epitaxial layer 204. As mentioned, the base contact regions 218 are disposed adjacent to the emitter regions on the top surface of the epitaxial layer 204.

[0032] The bipolar transistor 201 further includes a collector region 220. The collector region 220 extends downward from the top surface of the epitaxial layer 204 of the substrate. The multiple emitter regions are bounded by the collector region 220. The lateral bipolar transistor 201 allows a top side collector contact. In some embodiments, the collector region 220 forms a ring on the epitaxial layer 204. The emitter regions 216 and the collector region 220 can have the same doping conductivity type, opposite to that of the epitaxial layer 204 and the base contact regions 218. The doping concentration of the collector region 220 can have a range of 1e17 to 1e20 atoms/cm³, for example, 1e19 atoms/cm³.

[0033] As illustrated in FIG. 2, the base contact regions 218 are disposed adjacent to the emitter regions 216 and are spaced from and facing the collector region 220. However, in some examples, such as FIG. 3, an additional base contact region 318 may be provided outside the collector region 220, which is then contacted by back end of line (BEOL) metallization. This favorably impacts the device performance while trading off some silicon area. The base contact region in between the emitter region 216 and the collector region 220 is retained here to prevent parasitic channels from forming between the emitter region 216 and the collector region 220. The bipolar transistor can also include an uncontacted base region (such as the uncontacted base region 322 disposed between the emitter regions 316 and the collector region 320 of FIG. 3). In some examples, the emitter regions 216 are spaced 2-5 µm (for example, 3 µm) away from each other; the spacing from an emitter region to the collector region 220 is about 5-12 µm (for example, 7.5 µm); and the spacing from an emitter region to a base contact region 218 is about 2-5 µm (for example, 3.5 µm).

[0034] In some examples, the bipolar transistor 201 includes deep trench layers 228 and 230. The deep trench layers 228, 230 are formed to encircle the bipolar transistor 201 and can isolate the bipolar transistor 201 from other semiconductor devices. The deep trenches 228, 230 may also be used to contact the doped (e.g., p-type) substrate underneath the NBL 202. In some examples, the deep trench layer 228 forms a ring on the epitaxial layer 204 and is disposed adjacent to the collector region 220. The deep trench layers 228, 230 extend from the top of the die to below the NBL 202.

[0035] In some examples, instead of the deep trench layers 228, 230 as illustrated in FIG. 2, the bipolar transistor 201 includes p-type isolation (PISO) layer and/or p-type buried layer (PBL) implants (not illustrated) when the epitaxial layer 204 is n-type. Accordingly, the implants can replace the deep trench layers 228 and 230 and can form a ring on the epitaxial layer 204 to encircle the bipolar transistor 201. The implants extend from the surface of the epitaxial layer 204 down to the NBL 202. The implants can isolate the epitaxial layer 204 from other portions of the substrate.

[0036] In some examples, instead of a PNP bipolar transistor as illustrated in FIG. 2, NPN bipolar transistors can also include multiple emitter regions disposed in an area defined by the collector region. In such examples, the structure and function of the NPN bipolar transistor is similar to the bipolar transistor 201, except that in the NPN bipolar transistor the dopants are reversed to provide an NPN transistor cell structure. As stated herein above, functional aspects of NPN bipolar transistors are similar to the bipolar transistor 201, with reversed dopants and reversed polarities.

[0037] In some examples, where the bipolar transistor 201 is a lateral NPN transistor with a p-type epitaxial layer, the bipolar transistor 201 includes deep n-type wells. The deep n-type well touches the implanted NBL 202 and extends to the top of the die, providing a top contact to the implanted NBL 202. These deep n-type wells may be disposed adjacent to the deep trench layers 228, 230, and may also extend from the top of the die to the NBL 202.

[0038] FIG. 3 is a top view of a bipolar transistor 300 having multiple emitter regions, according to some examples. The bipolar transistor 300 includes a base contact region 318 disposed around the collector region 320, and the collector region 320 in turn is disposed around the multiple emitter regions 316a, 316b, 316c, 316d, 316e (collectively, emitter regions 316).
Accordingly, the collector region 320 is disposed between the emitter regions 316 and the base contact region 318. As illustrated, in some examples, the collector region 320 forms a ring around the emitter regions 316, and the base contact region 318 forms a rectangle around the ring-shaped collector region 320. In some examples, the collector region 320 forms a rectangle around the emitter regions 316. The collector region 320 includes a first side and a second side that are disposed on the distal and proximal sides of the emitter regions. For example, the first side of the collector region 320 is adjacent to the distal side of each of the emitter regions 316, and the second side of the collector region 320 is adjacent to the proximal side of each of the emitter regions 316. Correspondingly, the base contact region 318 includes a first side and a second side. The first side of the base contact region 318 is disposed adjacent to the first side of the collector region 320, and the second side of the base contact region 318 is disposed opposite to the first side of the base contact region 318. In some examples, as illustrated, the bipolar transistor 300 can include an uncontacted base region 322 disposed between the emitter regions 316 and the collector region 320, and the uncontacted base region 322 can have the same potential as the base contact region 318.

[0039] The collector region 320 surrounding the emitter regions 316 increases the inner perimeter of the collector region 320 exposed to the perimeter of the emitter regions 316. Exposing more of the inner perimeter of the collector region 320 to the perimeter of the emitter regions 316 ensures the proportionality of the collector current to the emitter region perimeter.

[0040] FIG. 4 is a top view of a bipolar transistor having multiple emitter regions, according to some examples. In some examples, the bipolar transistor 400 can include emitter regions 416, collector regions 420, and base contact regions 418 arranged as multiple fingers. Each finger 410, as illustrated in FIG. 4, includes multiple emitter regions 416, a collector region 420, and a base contact region 418. Accordingly, the number of emitter regions 416 of the bipolar transistor 400 is greater than the number of emitter regions of the bipolar transistor 201 of FIG. 2.

[0041] The emitter regions 416 of each finger 410 are arranged adjacent to each other and in a column. Each finger 410 includes the base contact region 418 disposed as a ring around the emitter regions 416, and the collector region 420 disposed as a ring around the base contact region 418. The base contact region 418 of each finger 410 includes a first side and a second side that are disposed on the distal and proximal sides of the respective emitter regions 416. For example, the first side of the base contact region 418 of each finger 410 is adjacent to the distal side of each of the respective emitter regions 416, and the second side of the base contact region 418 of each finger 410 is adjacent to the proximal side of each of the respective emitter regions 416. Correspondingly, the collector region 420 of each finger 410 includes a first side and a second side. The first side of the collector region 420 of each finger 410 is disposed adjacent to the first side of the respective base contact region 418, and the collector region 420 is disposed adjacent to the base contact region 418 of an adjacent finger.

[0042] Each finger 410 can be arranged in a vertical orientation such that the emitter regions 416 form a column of emitter regions 416.
Additionally, the collector regions 420 of the fingers 410 can be shared with each other. For example, a collector region 420 is shared between adjacent fingers 410. The bipolar transistor 400 as illustrated includes four fingers 410, but can include any number of fingers 410. The use of multiple fingers with multiple emitter regions 416 enables high current products (e.g., low dropout regulators).

[0043] FIG. 5 is a graph illustrating the change in the current gain as a function of the emitter area. The current gain is normalized to the current gain of a transistor with a minimum radius. The graph 500 includes a result 502 for a bipolar transistor similar to the bipolar transistor 100 of FIG. 1 and a result 504 for a bipolar transistor similar to the bipolar transistor 201 of FIG. 2. Result 502 shows the current gain with a single circular emitter region, where the single emitter region increases in radius. Result 504 shows the current gain with multiple emitter regions. As illustrated, as the radius of the single emitter region increases, the current gain of result 502 decreases more rapidly compared to the current gain of result 504 corresponding to a bipolar transistor with multiple emitter regions.

[0044] FIG. 6 is a graph illustrating the change in the current gain as a function of the emitter area. The current gain is normalized to the current gain of a transistor with a minimum radius. The graph 600 includes a result 602 for a bipolar transistor similar to the bipolar transistor 100 of FIG. 1 and a result 604 for a bipolar transistor similar to the bipolar transistor 201 of FIG. 2. Result 602 shows the current gain with a single rectangular emitter region, where the single emitter region increases in emitter length. Result 604 shows the current gain with multiple emitter regions. As illustrated, as the length of the rectangular emitter region increases, the current gain of result 602 decreases more rapidly compared to the current gain of result 604 corresponding to a bipolar transistor with multiple emitter regions.

[0045] FIG. 7 is a graph illustrating the current gain as a function of collector current density for devices with different numbers of fingers. The graph 700 includes results 702, 704, 706, and 708 for bipolar transistors, each of the bipolar transistors having a different number of fingers with emitter regions. Each of the bipolar transistors has multiple fingers (e.g., fingers 410), and each finger has four emitter regions (e.g., emitter regions 216). Result 702 shows the current gain for a bipolar transistor with 2 fingers, each finger with four emitter regions. Result 704 shows the current gain for a bipolar transistor with 6 fingers, each finger with four emitter regions. Result 706 shows the current gain for a bipolar transistor with 8 fingers, each finger with four emitter regions. Result 708 shows the current gain for a bipolar transistor with 12 fingers, each finger with four emitter regions. As illustrated in graph 700, the current gain remains consistent between bipolar transistors as the number of fingers increases. Accordingly, bipolar transistors having multiple fingers with multiple emitter regions can scale the number of fingers with generally the same current gain.

[0046] FIG. 8 is a top view diagram of a lateral bipolar transistor according to some examples. The bipolar transistor 800 includes a collector region 220, the base contact region 218, and the emitter regions 216a, 216b, 216c, 216d, 216e (collectively, emitter regions 216).
[0046] FIG. 8 is a top view diagram of a bipolar lateral transistor according to some examples. The bipolar transistor 800 includes a collector region 220, the base contact region 218, and the emitter regions 216a, 216b, 216c, 216d, 216e (collectively emitter regions 216). The collector region 220 of bipolar transistor 800 can be the same collector region 220 of bipolar transistor 201; the base contact region 218 of bipolar transistor 800 can be the same base contact region 218 of bipolar transistor 201; and the emitter regions 216 can be the same emitter regions 216 of bipolar transistor 201. In the example of FIG. 8, the bipolar transistor 800 includes five emitter regions 216. The emitter regions 216 are arranged adjacent to one another in a row to maximize the perimeter of the emitter regions 216 exposed to the collector region 220.[0047] In some examples, all or some of the emitter regions 216 can be electrically shorted to the same emitter terminal. For example, each of the emitter regions 216 can be connected to the same emitter terminal. In other examples, emitter regions 216a and 216b are both connected to one emitter terminal, and emitter regions 216c, 216d, and 216e are all connected to another emitter terminal. Accordingly, any combination of emitter regions 216 can be electrically shorted to one or more common emitter terminals.[0048] Some emitter regions 216 can be left unconnected, or without connections to contacts, while achieving the same current gain as a device having only the contacted emitter regions. For example, emitter regions 216a, 216e can be left uncontacted while emitter regions 216b, 216c, 216d are connected to contacts (i.e., metal contacts). Leaving the emitter regions 216a, 216e at the ends of the row of emitter regions 216 uncontacted can result in the same current gain as the current gain with three contacted emitter regions.[0049] Additionally, some emitter regions 216 can be connected to different terminals while the remaining emitter regions 216 remain floating to minimize interactions between emitters. For example, emitter regions 216b and 216d can be connected to two different terminals, and emitter regions 216a, 216c, and 216e are not connected to any terminals and remain “floating” in order to minimize interactions between emitter regions. Minimizing the interactions between emitter regions can decrease the dependence of gain on the number of emitter regions of the bipolar transistor 800.[0050] FIG. 9 is a flow diagram of a process of manufacturing a bipolar transistor with multiple emitter regions, according to one example.[0051] Operations 900 begin with step 902, involving providing a wafer having an epitaxial layer and a buried layer. The epitaxial layer of the provided wafer and the buried layer of the provided wafer may be the same epitaxial layer 204 of FIG. 2 and the same NBL 202 of FIG. 2. The epitaxial layer provided with the wafer has a first conductivity type, and the buried layer provided with the wafer can have the same conductivity type as the epitaxial layer. The first conductivity type can be n-type in some examples, and in other examples, the first conductivity type can be p-type.[0052] Operations 900 continue, optionally, at step 904 with forming trenches in the epitaxial layer. The formed trenches in the epitaxial layer can be the trenches 228, 230 of FIG. 2. In some examples, instead of trenches, PISO and/or PBL implants may be used. In some examples, operations 900 further continue with forming deep n-type wells.[0053] Operations 900 continue at step 906 with forming a collector region in the epitaxial layer of the semiconductor device. The collector region formed can be the same collector region 220 of FIG. 2.
The collector region formed in the epitaxial layer of the semiconductor device has a second conductivity type, and the second conductivity type is different from the first conductivity type of the epitaxial layer. In some examples, where the first conductivity type of the epitaxial layer is n-type, the second conductivity type of the collector region is p-type. In other examples, where the first conductivity type of the epitaxial layer is p-type, the second conductivity type of the collector region is n-type.[0054] Operations 900 continue, optionally, at step 908 with forming at least one base contact region in the epitaxial layer of the semiconductor device. The formed base contact region can be the same base contact region 218 of FIG. 2. The base contact region formed in the epitaxial layer of the semiconductor device has a conductivity type that matches the first conductivity type of the epitaxial layer. Accordingly, in examples where the first conductivity type of the epitaxial layer is n-type, the conductivity type of the base contact region is also n-type. In examples where the first conductivity type of the epitaxial layer is p-type, the conductivity type of the base contact region is also p-type. Forming the base contact region can involve forming the base contact region on a first lateral side of the collector region or forming the base contact region on a second lateral side of the collector region. Accordingly, the base contact region can be disposed inside the area formed by the collector region or outside the area formed by the collector region.[0055] Operations 900 continue at step 910 with forming the plurality of emitter regions in the epitaxial layer of the semiconductor device. Forming the plurality of emitter regions can occur at the same time as forming the collector region, using the same implantation steps, or at different times. As described above, the plurality of emitter regions formed may have a variety of shapes (e.g., circular, square, or rectangular), and the semiconductor device can have any number of emitter regions. When forming the emitter regions in the epitaxial layer, the emitter regions can be formed in a row so that each emitter region is adjacent to another emitter region without an intervening collector region. In some examples, the emitter regions can be formed in an array of emitter regions with multiple rows and columns. When forming the plurality of emitter regions in the epitaxial layer, the emitter regions, the base contact region, and the collector region may be disposed on the epitaxial layer of the bipolar transistor as illustrated in FIG. 2. The emitter regions formed in the epitaxial layer of the semiconductor device each have the second conductivity type, and the second conductivity type is different from the first conductivity type of the epitaxial layer. In some examples, where the first conductivity type of the epitaxial layer is n-type, the second conductivity type of the emitter regions is p-type. In other examples, where the first conductivity type of the epitaxial layer is p-type, the second conductivity type of the emitter regions is n-type.[0056] In some examples, operations 900 can involve manufacturing the bipolar transistors in multiple finger arrangements.
When manufacturing the bipolar transistor to include multiple finger arrangements, operations 900 can involve forming multiple collector regions, multiple base contact regions, and multiple sets of emitter regions, and arranging a collector region, a base contact region, and a set of emitter regions for each of the finger arrangements. When manufacturing the bipolar transistor with multiple finger arrangements, operations 900 can involve manufacturing the multiple finger arrangements as described above with reference to FIG. 4. As mentioned, operations 900 can involve forming any number of finger arrangements.[0057] In some examples, operations 900 can involve forming one or more contacts coupled to one or more of the emitter regions, to the collector region, and/or to the base contact region. Forming the one or more contacts coupled to the emitter regions can involve forming one contact that electrically shorts multiple emitter regions. For example, operations 900 can involve forming an emitter terminal that electrically shorts more than one emitter region (e.g., emitter regions 216a, 216b, 216c, 216d, and 216e of FIG. 8). Forming the one or more contacts coupled to the emitter regions can also involve forming one or more contacts to some emitter regions while leaving other emitter regions unconnected. For example, operations 900 can involve forming one or more contacts for emitter regions 216b, 216c, 216d of FIG. 8 while leaving emitter regions 216a and 216e unconnected. In some examples, forming one or more contacts coupled to the emitter regions can involve connecting some emitter regions to different contacts and leaving other emitter regions floating. For example, operations 900 can involve forming an emitter contact connected to emitter regions 216b and 216d of FIG. 8 and leaving emitter regions 216a, 216c, and 216e of FIG. 8 unconnected to any contacts.[0058] The operations 900 continue with back-end-of-line (BEOL) processing and packaging of the semiconductor device.[0059] Although the exemplary devices described above are configured as n-type transistors, the invention also includes devices that are configured as p-type transistors or combinations of n-type and p-type transistors. One of ordinary skill in the art would understand how to fabricate p-type transistors in accordance with the invention, e.g., by inverting the type of dopants, as compared to that shown in the figures.[0060] The semiconductor substrates may include various elements therein and/or layers thereon. These can include barrier layers, other dielectric layers, device structures, and active and passive elements, including source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive vias, etc. Moreover, the invention can be based on a variety of processes, including CMOS, BiCMOS, and BCD (Bipolar-CMOS-DMOS) technologies.[0061] While various examples of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Numerous changes to the disclosed examples can be made in accordance with the disclosure herein without departing from the spirit or scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described examples.
Rather, the scope of the invention should be defined in accordance with the following claims and their equivalents.[0062] Although the invention has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In particular regard to the various functions performed by the above described components (assemblies, devices, circuits, systems, etc.), the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the invention. In addition, while a particular feature of the invention may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “including”, “includes”, “having”, “has”, “with”, or variants thereof are used in either the detailed description and/or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising.” Unless otherwise stated, “about,” “approximately,” or “substantially” preceding a value means +/- 10 percent of the stated value.[0063] The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the following claims.
A circuit (100) for reducing sleep state current leakage is described. The circuit (100) includes a hardware unit (102) selected from at least one of a latch, a flip-flop, a comparator, a multiplexer, or an adder. The hardware unit (102) includes a first node (110). The hardware unit further includes sleep enable combinational logic (104) coupled to the first node, wherein a value of the first node is preserved during a sleep state.
1. A circuit, comprising: a hardware unit selected from at least one of a latch, a flip-flop, a comparator, a multiplexer, or an adder, the hardware unit including: a first node; and sleep enable combinational logic coupled to the first node, wherein a value of the first node is retained during a sleep state.
2. The circuit of claim 1, wherein an output of the sleep enable combinational logic is configured to match a predefined value of an output vector when the sleep enable combinational logic is enabled.
3. The circuit of claim 2, wherein the output vector depends on at least one of a design pattern of the circuit, a simulation of the circuit, or a manufacturing process technology of the circuit.
4. The circuit of claim 2, wherein the sleep enable combinational logic is configured to invert the value of the first node of the hardware unit when the sleep enable combinational logic is not enabled.
5. The circuit of claim 4, further comprising a sleep signal as an input to the sleep enable combinational logic to enable the sleep enable combinational logic.
6. The circuit of claim 5, wherein the sleep enable combinational logic is at least one of a NAND gate, a NOR gate, an AND gate, an OR gate, or a multiplexer.
7. The circuit of claim 2, wherein the sleep enable combinational logic is configured to replace an output inverter of the hardware unit.
8. The circuit of claim 2, wherein a type of the sleep enable combinational logic depends on a leakage current value of the circuit for the output vector.
9. The circuit of claim 8, wherein the sleep enable combinational logic is configured to output a logic 1, a logic 0, or a programmable logic value when enabled.
10. A method, comprising: retaining, by sleep enable combinational logic during a sleep state, a node value of a first node of a hardware unit selected from at least one of a latch, a flip-flop, a comparator, a multiplexer, or an adder; and transmitting the node value of the hardware unit through the sleep enable combinational logic during a non-sleep state.
11. The method of claim 10, further comprising inverting the node value of the hardware unit by the sleep enable combinational logic during the non-sleep state.
12. The method of claim 11, further comprising enabling the sleep enable combinational logic immediately after placing the hardware unit in the sleep state.
13. The method of claim 12, further comprising matching a predefined value of an output vector through an output of the sleep enable combinational logic during the sleep state.
14. The method of claim 13, wherein the output vector depends on at least one of a design pattern of the circuit, a simulation of the circuit, or a manufacturing process technology of the circuit.
15. A circuit, comprising: means for retaining, during a sleep state, a node value of a node of a hardware unit selected from at least one of a latch, a flip-flop, a comparator, a multiplexer, or an adder; and means for transmitting the node value of the hardware unit during a non-sleep state.
16. The circuit of claim 15, further comprising means for inverting the node value of the hardware unit during the non-sleep state.
17. The circuit of claim 16, further comprising means for outputting a predetermined value of an output vector during the sleep state.
18. The circuit of claim 17, wherein the output vector depends on at least one of a design pattern of the circuit, a simulation of the circuit, or a manufacturing process technology of the circuit.
19. The circuit of claim 17, wherein the predetermined value is one of a logic 1, a logic 0, or a programmable logic value.
20. The circuit of claim 17, wherein a type of the means for retaining the node value of the hardware unit depends on a leakage current value of the circuit for the output vector.
Circuit and method for reducing leakage current in a sleep state

Technical field

Embodiments of the inventive concepts disclosed herein generally relate to the field of data processing systems. More specifically, embodiments of the inventive concepts disclosed herein relate to circuits and methods for reducing sleep state leakage current.

Background

The design of electronic and computing devices has become increasingly focused on saving power in order to improve aspects of performance such as battery life and thermal emissions. One way to save power is to reduce the amount of current leakage that occurs in a circuit. Circuits inherently leak current through their components; for example, in digital logic, each gate leaks a certain amount of current over time. Higher leakage means higher power consumption. One circuit state for reducing leakage current is the standby or sleep state, in which the circuit is not in use but can be used at a later time. The sleep state allows the circuit to save power by stopping active operation of the circuit (e.g., active switching of components) while the circuit waits to be brought from the sleep state into the non-sleep state. Values existing in the circuit can be retained during the sleep state until the circuit comes out of the sleep state; those values do not need to be loaded into the circuit or recalculated, because they are already present when the circuit returns to the non-sleep state.

Compared with powering off the circuit, the advantage of the sleep state is that it is easier to bring the circuit from the sleep state into the non-sleep state than to initialize the circuit. During initialization, the circuit loads or calculates the values that would otherwise have been stored through the sleep state, so time and power are lost. However, when the circuit is in the sleep state, current may still leak from circuit components because power may still be applied to them. Therefore, leakage current remains in the circuit during the sleep state.

In one method, the total leakage current can be reduced by forcing different nodes of the circuit to predetermined logic values during the sleep state. For example, a logic 1 at a given node of a circuit may produce a lower leakage current than a logic 0 at that node. However, the values of some nodes in the circuit should be retained while the other nodes of the circuit are forced to predetermined logic values.

In one embodiment of the method, a logical AND gate is inserted at each of the predefined nodes, where one input of the gate is driven to logic 0 when the circuit enters the sleep state. The predefined node is thereby split so that the input to the AND gate retains a value, while the output of the AND gate forces the downstream node to a predetermined logic value. A number of AND gates equal to the number of nodes is added to the circuit, so additional logic is added. One problem with this embodiment is that the inserted gates themselves leak; in addition to increasing the circuit size and degrading the circuit timing, the inserted gates can substantially increase power consumption.
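For illustration, the gating behavior of the inserted AND gate described above can be modeled at the logic level. The following is a minimal sketch, not part of the original disclosure; the function name and the use of Python booleans in place of circuit nets are assumptions made purely for exposition.

    # Minimal logic-level model of the prior-art AND-gate insertion: the
    # node value is preserved upstream of the gate, while the gate output
    # forces the downstream node to logic 0 during the sleep state.
    def and_gate_isolation(node_value: bool, sleep: bool) -> bool:
        enable = not sleep        # this input is driven to logic 0 on sleep
        return node_value and enable

    # Non-sleep state: the node value passes through to the next stage.
    assert and_gate_isolation(True, sleep=False) is True
    # Sleep state: the downstream node is forced to logic 0, while the
    # upstream value is retained at the AND gate input.
    assert and_gate_isolation(True, sleep=True) is False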
In another embodiment, the existing logic gate is modified to add a transistor in series with the pull-up stack of the AND gate and another transistor in parallel with the pull-down stack, or vice versa. These transistors allow the output of the gate to be forced to logic 1 or logic 0. A problem with this embodiment is that conventional cell libraries may not be usable, and the modified gate is slow and requires a large area.

In another embodiment, a pre-existing scan chain of the circuit is used to scan a predefined output vector into the latches of the circuit, thereby forcing the outputs of the latches to certain values. One problem with this embodiment is that scanning in the vector takes multiple steps to switch the latches; scanning the vector into the chain therefore takes time and draws power.

Summary of the invention

In one embodiment, a circuit for reducing sleep state current leakage is described. The circuit includes a hardware unit selected from at least one of a latch, a flip-flop, a comparator, a multiplexer, or an adder. The hardware unit includes a first node. The hardware unit further includes sleep enable combinational logic coupled to the first node, wherein the value of the first node is retained during the sleep state.

The advantages of one or more embodiments disclosed herein may include a minimal increase in circuit size, no need for a special logic gate library, fast transitions of the circuit from the sleep state to the non-sleep state, and reduced power consumption (leakage current) of the circuit during sleep.

The reference to these illustrative embodiments is not intended to limit or define the inventive concepts disclosed herein, but to provide examples to assist in understanding the present invention. Other aspects, advantages, and features of the present invention will become apparent after review of the entire application, which includes the following parts: drawings, detailed description, and claims.

Brief description of the drawings

These and other features, aspects, and advantages of the inventive concepts disclosed herein are better understood when the following detailed description is read with reference to the accompanying drawings, in which:

FIG. 1 is a schematic diagram illustrating an example hardware unit with a sleep-enabled NAND gate.
FIG. 2 is a schematic diagram illustrating the example hardware unit of FIG. 1 with a sleep-enabled NOR gate.
FIG. 3 is a schematic diagram illustrating the example hardware unit of FIG. 1 with a sleep-enabled multiplexer.
FIG. 4 is a schematic diagram illustrating the example hardware unit of FIG. 1 with a sleep-enabled OR gate.
FIG. 5 is a schematic diagram illustrating a second example hardware unit with a sleep-enabled NAND gate.
FIG. 6 is a schematic diagram illustrating a third example hardware unit with a sleep-enabled NAND gate.
FIG. 7 is a schematic diagram illustrating a fourth example hardware unit with a sleep-enabled NAND gate.
FIG. 8 is a flowchart illustrating an exemplary method for operating the sleep enable combinational logic of FIGS. 1-7.
FIG. 9 is a flowchart illustrating an exemplary method for enabling the sleep enable combinational logic of FIGS. 1-7.
FIG. 10 is a flowchart illustrating an exemplary method for operating the sleep enable combinational logic of FIGS. 1-5.
FIG. 11 is a general diagram illustrating an example portable communication device incorporating a digital circuit (e.g., a digital signal processor) that may include sleep enable combinational logic.
FIG. 12 is a general diagram illustrating an example cellular telephone incorporating digital circuits (e.g., digital signal processors) that may include sleep enable combinational logic.
FIG. 13 is a general diagram illustrating an example wireless Internet protocol telephone incorporating digital circuits (e.g., digital signal processors) that may include sleep enable combinational logic.
FIG. 14 is a general diagram illustrating an example portable digital assistant incorporating a digital circuit (e.g., a digital signal processor) that may include sleep enable combinational logic.
FIG. 15 is a general diagram illustrating an example audio file player incorporating a digital circuit (e.g., a digital signal processor) that may include sleep enable combinational logic.

Detailed description

Throughout the description, for purposes of explanation, many specific details are set forth in order to provide a thorough understanding of the inventive concepts disclosed herein. However, those skilled in the art will understand that the inventive concepts disclosed herein can be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the inventive concepts disclosed herein.

Embodiments of the inventive concepts disclosed herein relate to circuits and methods for sleep state leakage current reduction. When reducing the leakage current of a circuit, nodes of the circuit can be selected to be forced to a predetermined logic state. As stated previously, nodes in different logic states affect the leakage current of the circuit. In one embodiment, the output of a hardware unit in the circuit may be a selected node of the circuit, where the output of the hardware unit is coupled to the input of a subsequent circuit. The hardware unit may be a circuit component that includes a conventional output driver. An example output driver is an inverter. Other examples may include, but are not limited to, conventional logic gates such as NAND or NOR gates, latches, adders, voltage level shifters, and comparators.

By replacing the hardware unit's conventional output driver (which can be an inverter, a NAND gate, a NOR gate, or another conventional combinational logic gate) with sleep enable combinational logic configured to receive a sleep signal, the output value of the hardware unit can be retained while the input to the subsequent circuit is forced to a predetermined logic value. No additional gates are added to the circuit, because pre-existing gates are replaced. In one embodiment, the sleep enable combinational logic includes (but is not limited to) a NAND gate, a NOR gate, an AND gate, an OR gate, or a multiplexer, where one input of the combinational logic is coupled to the output of the hardware unit and the other input is connected to a sleep signal that switches the sleep enable combinational logic between a non-sleep state (e.g., active operation of the circuit) and a sleep state (e.g., when the circuit is placed in hibernation). The sleep enable combinational logic can replace anywhere from one conventional output driver in the circuit to all of the conventional output drivers. In one embodiment, selected output drivers are strategically replaced with sleep enable combinational logic based on observations, such as empirical studies of where the leakage current of the circuit is most affected.
In another embodiment, sleep enable combinational logic may replace each conventional output driver in the circuit.

When the sleep signal is disabled (e.g., logic 0), the sleep enable combinational logic may transmit the output value of the hardware unit to the input of the subsequent circuit. In addition, the sleep enable combinational logic can invert the output value of the hardware unit, thereby performing the function of the conventional output driver. When the sleep signal is enabled (e.g., logic 1), the sleep enable combinational logic may prevent the output value of the hardware unit (e.g., state "q") from being transmitted, thereby retaining the value in the hardware unit or on the output node of the hardware unit, and may output a predetermined logic value depending on the type and configuration of the sleep enable combinational logic. For example, when the sleep signal is enabled, a NOR gate outputs a logic 0 and a NAND gate outputs a logic 1.

The schematic diagrams of FIGS. 1-7 illustrate embodiments of portions of a circuit that include a hardware unit and sleep enable combinational logic. The schematic diagrams of FIGS. 1-4 illustrate embodiments in which the sleep enable combinational logic is a NAND gate 104 (FIG. 1), a NOR gate 202 (FIG. 2), a multiplexer 302 (FIG. 3), or an OR gate 402 (FIG. 4) coupled to the output of the hardware unit 102.

Referring to FIG. 1, the circuit 100 includes a hardware unit 102 coupled to sleep enable combinational logic 104. The hardware unit 102 is a conventional flip-flop with the output inverter, conventionally located at the position of the sleep enable combinational logic 104, removed. A flip-flop is a digital component capable of storing a logic value for one or more clock cycles. Among other uses, flip-flops can be used to continuously output values or to delay values for a predetermined number of clock cycles.

In one embodiment, the flip-flop can receive a clock signal (clk), a scan input signal (si), a shift signal, and an input value (d). The flip-flop can output the output value (q) and the scan output signal (so). The shift signal may be a signal for enabling the scan chain so that values are input to and output from the flip-flop on si and so, respectively. The scan input signal (si) carries the scan chain value input to the flip-flop, and the scan output signal (so) carries the scan chain value output from the flip-flop. The shift signal can control the flip-flop to move the current scan chain value out of the flip-flop on so and to receive a new scan chain value on si. In one embodiment, the so of the previous flip-flop is attached to the si of the current flip-flop so that the scan output value from the previous flip-flop can be scanned into the current flip-flop. Values can therefore travel through a sequence of flip-flops organized into a scan chain. When shifting is not enabled, the flip-flop is operable to receive d and output q (i.e., the scan chain is not enabled).

In one embodiment, when the circuit 100 is in the sleep state, the sleep signal 106 is logic 1. Therefore, the output 108 of the NAND gate 104 is a logic 1 regardless of the value at the output 110 of the hardware unit 102. The hardware unit 102 thus stores its output value during the sleep state, and the sleep enable combinational logic 104 transmits a logic 1 to the input of the subsequent circuit.
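The behavior described for the NAND gate 104 of FIG. 1 can be summarized as a truth function. In the sketch below, the NAND gate is assumed to receive the complement of the sleep signal 106, which is consistent with the behavior described above (forcing a logic 1 during sleep, and acting as the replaced output inverter otherwise); that explicit inversion is an inference for illustration, not a statement taken from the figure itself.

    # Behavioral model of the sleep enable NAND gate 104 of FIG. 1,
    # assuming the gate NANDs the hardware unit output q with the
    # complement of the sleep signal.
    def sleep_nand(q: bool, sleep: bool) -> bool:
        return not (q and (not sleep))

    # Non-sleep state (sleep = 0): the gate inverts q, performing the
    # function of the conventional output inverter it replaces.
    assert sleep_nand(q=True, sleep=False) is False
    assert sleep_nand(q=False, sleep=False) is True

    # Sleep state (sleep = 1): output 108 is forced to logic 1 regardless
    # of the value at the output 110 of the hardware unit 102.
    assert sleep_nand(q=True, sleep=True) is True
    assert sleep_nand(q=False, sleep=True) is True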
Referring to FIG. 2, the sleep enable combinational logic 202 is a NOR gate. In this embodiment, when the circuit 100 is in the sleep state, the sleep signal 204 is logic 1. Therefore, the output of the NOR gate 202 is a logic 0 regardless of the output value of the hardware unit 102.

Referring to FIG. 3, the sleep enable combinational logic 302 is a multiplexer. In this embodiment, when the circuit 100 is in the sleep state, the output of the multiplexer 302 is a logic 1 or a logic 0 depending on the value of the input "v".

Referring to FIG. 4, the sleep enable combinational logic 402 is an OR gate. In this embodiment, when the circuit 100 is in the sleep state, the output of the OR gate 402 is a logic 1 regardless of the output value of the hardware unit 102.

In one embodiment, the output vector is the vector of values output by the sleep enable combinational logic into the circuit. For example, if there are forty instances of sleep enable combinational logic in the circuit, the output vector may be the forty bits output from those forty instances to forty nodes of the circuit. The leakage current of the circuit can therefore be determined for each combination of bit values of the multiple instances of sleep enable combinational logic. After determining the possible leakage current of the circuit for each combination, the output vector to be implemented during the sleep state can be selected in order to reduce the actual leakage current that will be present in the circuit.

The value of the output vector to be implemented by the sleep enable combinational logic can help determine what type or configuration of sleep enable combinational logic will be used. For example, during the sleep state (sleep signal 106 equal to logic 1), the output of the NAND gate 104 in FIG. 1 is logic 1, and during the sleep state (sleep signal 204 equal to logic 1), the output of the NOR gate 202 in FIG. 2 is logic 0. Therefore, if a logic 1 is to be implemented, a NAND gate can be used, and if a logic 0 is to be implemented, a NOR gate can be used. In another embodiment, the sleep enable combinational logic may be configured to output a high impedance during the sleep state.

The schematic diagrams of FIGS. 5-7 illustrate various embodiments with hardware units 500, 600, 700 coupled to sleep enable combinational logic 502, 602, 702, where sleep signals 504, 604, 704 are used to enable the sleep enable combinational logic 502, 602, 702. In the embodiments shown in the schematic diagrams of FIGS. 5-7, the hardware units 500, 600, 700 are latches, with no conventional output inverter at the position of the sleep enable combinational logic 502, 602, 702.
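The output-vector selection described above can be sketched as an exhaustive search. The leakage model below is a placeholder assumption (it simply pretends a forced logic 1 leaks less than a forced logic 0 at every node); in practice the per-vector leakage would come from simulation of the design in the target manufacturing process technology.

    # Sketch of selecting the sleep-state output vector: estimate the
    # leakage for every combination of forced node values, then pick the
    # vector with the lowest estimate. A NAND gate forces a 1 and a NOR
    # gate forces a 0 at the corresponding node.
    from itertools import product

    def estimate_leakage(vector):
        # Placeholder cost model (assumption only).
        return sum(0.7 if bit else 1.0 for bit in vector)

    def choose_output_vector(num_nodes):
        return min(product((0, 1), repeat=num_nodes), key=estimate_leakage)

    best = choose_output_vector(4)
    gates = ["NAND (forces 1)" if bit else "NOR (forces 0)" for bit in best]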
Operation of the sleep enable combinational logic

FIG. 8 is a flowchart illustrating an exemplary method 800 of operating the sleep enable combinational logic, such as shown in the schematic diagrams of FIGS. 1-7. Beginning at 802, the sleep enable combinational logic (e.g., logic 104) receives the output of a hardware unit (e.g., hardware unit 102). Proceeding to 804, the sleep enable combinational logic determines whether the circuit is in the sleep state. In one embodiment, whether the circuit is in the sleep state is determined by whether the sleep signal of the circuit is active or inactive. As previously described, in one embodiment, if the sleep signal is active (e.g., logic 1), the circuit is in the sleep state and the sleep enable combinational logic is enabled. If the sleep signal is inactive (e.g., logic 0), the circuit is in the non-sleep state and the sleep enable combinational logic is disabled.

FIG. 9 is a flowchart illustrating an exemplary method 900 for enabling the sleep enable combinational logic when the circuit is placed in the sleep state, such as shown in the schematic diagrams of FIGS. 1-7. Starting at 902, the sleep signal is switched or activated (e.g., 106, 204, 304, 404, 504, 604, 704). In one embodiment, the circuit begins to enter the sleep state after receiving a control signal. The sleep signal may be the control signal itself or may be generated in response to receiving the control signal. For example, when the circuit is going to sleep, the sleep signal is activated and control begins to put the circuit into sleep. In another example, the circuit activates the sleep signal after receiving the control signal to enter the sleep state. In one embodiment, the control signal to enter the sleep state is issued when circuitry external to the circuit, or a portion of the circuit, receives input from the user indicating that the circuit is to enter the sleep state. In another embodiment, the circuit may determine that it will not perform any active operations, or that it has received no external input or has been idle for a predetermined amount of time, before issuing the control signal.

The sleep signal may be 0 during the non-sleep state and 1 during the sleep state, or vice versa. After the sleep signal is activated, the switched sleep signal at 904 brings the sleep enable combinational logic from the non-sleep state into the sleep state. The sleep enable combinational logic can be enabled immediately after the sleep signal is switched or after a certain delay. In certain embodiments with multiple instances of sleep enable combinational logic, the time for the sleep signal to reach each instance may differ. In another embodiment, the delay from the non-sleep state to the sleep state (or vice versa) may be different for each instance of the sleep enable combinational logic. For example, the sleep signal 106 equal to logic 1 in FIG. 1 forces the NAND gate 104 to output a logic 1. To bring the sleep enable combinational logic out of the sleep state, in one embodiment, the sleep signal is deactivated by returning it to its previous logic value (e.g., switching the sleep signal from logic 1 to logic 0).

Referring back to FIG. 8, if the circuit is not in the sleep state, the sleep enable combinational logic transmits the output of the hardware unit to the input of the subsequent circuit at 806. If the circuit is in the sleep state, the sleep enable combinational logic retains the output value of the hardware unit at 808. As previously described, in one embodiment that preserves the output value of the hardware unit, the sleep enable combinational logic prevents the output value from being transmitted to the input of the subsequent circuit; the output value is therefore stored in the hardware unit, or on the output node of the hardware unit, during the sleep state. Advancing to 810, when the sleep signal is active (e.g., logic 1), the sleep enable combinational logic transmits a predetermined logic state to the input of the subsequent circuit. The process can then revert to 802 and repeat for each signal received from the hardware unit.
FIG. 10 is a flowchart illustrating another exemplary method 1000 of operating the sleep enable combinational logic. In the method 1000 illustrated in the flowchart of FIG. 10, the sleep enable combinational logic inverts the output value of the hardware unit during the non-sleep state, in addition to transmitting the output of the hardware unit.

Beginning at 1002, the sleep enable combinational logic (e.g., logic 104) receives the output of a hardware unit (e.g., hardware unit 102). Proceeding to 1004, the sleep enable combinational logic determines whether the circuit is in the sleep state. In one embodiment, whether the circuit is in the sleep state is determined by whether the sleep signal of the circuit is active or inactive. As previously described, in one embodiment, if the sleep signal is active (e.g., logic 1), the circuit is in the sleep state and the sleep enable combinational logic is enabled. If the sleep signal is inactive (e.g., logic 0), the circuit is in the non-sleep state and the sleep enable combinational logic is disabled.

If the circuit is not in the sleep state, the sleep enable combinational logic inverts the output value of the hardware unit at 1006. As previously described, the sleep enable combinational logic can perform the function of the conventional inverter that it replaces when the circuit is in the non-sleep state (e.g., inverting the value from one logic state to the other). In one embodiment, NAND and NOR gates are configured to invert their outputs. For example, for a two-input NAND gate, the two inputs are ANDed and the result is inverted; if the inputs are 0 and 1, the AND operation equals 0 and the inversion produces a 1 at the NAND gate output. In another example, for a two-input NOR gate, the two inputs are ORed and the result is inverted; if the inputs are 0 and 1, the OR operation equals 1 and the inversion produces a 0 at the NOR gate output. Therefore, for the NAND gate 104 in FIG. 1, when the sleep signal 106 is logic 0, conceptually the sleep signal is inverted to logic 1 and ANDed with the output of the hardware unit 102. Since the inverted sleep signal is logic 1, the value of the AND operation is the output value of the hardware unit 102, and inverting the AND result produces the inverted output value 108 of the hardware unit 102 as emitted by the NAND gate 104.

Referring back to FIG. 10, at 1008 the sleep enable combinational logic transmits the inverted value to the input of the subsequent circuit. If the circuit is in the sleep state at 1004, the sleep enable combinational logic retains the output value of the hardware unit at 1010. As previously described, in one embodiment that preserves the output value of the hardware unit, the sleep enable combinational logic prevents the output value from being transmitted to the input of the subsequent circuit; the output value is therefore stored in the hardware unit, or on the output node of the hardware unit, during the sleep state. Advancing to 1012, when the sleep signal is active (e.g., logic 1), the sleep enable combinational logic transmits a predetermined logic state to the input of the subsequent circuit. The process can then revert to 1002 and repeat for each signal received from the hardware unit.
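As a compact restatement of method 1000, the following sketch models one instance of the sleep enable combinational logic as a small state holder. The class structure and the choice of logic 1 as the forced value (as for the NAND gate of FIG. 1) are illustrative assumptions.

    # Behavioral sketch of method 1000: in the non-sleep state the logic
    # inverts and transmits the hardware unit output (1006, 1008); in the
    # sleep state it retains the value and transmits a predetermined
    # logic state (1010, 1012).
    class SleepEnableLogic:
        def __init__(self, forced_value: bool = True):
            self.forced_value = forced_value  # value driven during sleep
            self.retained = None              # value preserved at the node

        def step(self, hw_output: bool, sleep: bool) -> bool:
            if sleep:
                if self.retained is None:
                    self.retained = hw_output  # 1010: retain the value
                return self.forced_value       # 1012: force the output
            self.retained = None
            return not hw_output               # 1006/1008: invert, transmit

    logic = SleepEnableLogic()
    assert logic.step(hw_output=True, sleep=False) is False  # inverted
    assert logic.step(hw_output=True, sleep=True) is True    # forced to 1
    assert logic.retained is True                            # value retained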
Example devices including the above features

The sleep enable combinational logic may be included in any digital circuit, such as a processor. The general diagrams of FIGS. 11-15 illustrate example devices that may incorporate sleep enable combinational logic to implement output vectors during the sleep state.

FIG. 11 is a diagram illustrating an exemplary embodiment of a portable communication device 1100. As illustrated in the general diagram of FIG. 11, the portable communication device includes a system-on-chip 1102 that includes a digital signal processor (DSP) 1104. The general diagram of FIG. 11 also shows a display controller 1106 coupled to the digital signal processor 1104 and a display 1108. In addition, an input device 1110 is coupled to the DSP 1104. As shown, a memory 1112 is coupled to the DSP 1104. In addition, an encoder/decoder (CODEC) 1114 may be coupled to the DSP 1104. A speaker 1116 and a microphone 1118 may be coupled to the CODEC 1114.

The general diagram of FIG. 11 further illustrates a wireless controller 1120 coupled to the digital signal processor 1104 and a wireless antenna 1122. In a particular embodiment, a power supply 1124 is coupled to the system-on-chip 1102. Furthermore, in the particular embodiment illustrated in FIG. 11, the display 1108, the input device 1110, the speaker 1116, the microphone 1118, the wireless antenna 1122, and the power supply 1124 are external to the system-on-chip 1102. However, each is coupled to a component of the system-on-chip 1102.

In certain embodiments, the DSP 1104 includes sleep enable combinational logic to implement the output vector during the sleep state and to retain the values of hardware units. For example, when the device 1100 is placed in the sleep state, the sleep signal of the sleep enable combinational logic is switched (enabling the sleep enable combinational logic), and an output vector is output by a plurality of instances of the sleep enable combinational logic to reduce leakage current and thus conserve the power supply 1124. In one embodiment, the DSP 1104 may include a sleep controller 1162 to switch the sleep enable combinational logic. Therefore, when the DSP 1104 receives the sleep signal or other signals, the sleep controller 1162 receives the signal and controls the sleep enable combinational logic. For example, in FIGS. 1-7, the sleep controller may send the sleep signal to activate the sleep enable combinational logic. In another embodiment, the sleep controller may be located outside the DSP 1104.

FIG. 12 is a diagram illustrating an exemplary embodiment of a cellular phone 1200. As shown, the cellular phone 1200 includes a system-on-chip 1202 that includes a digital baseband processor 1204 and an analog baseband processor 1206 coupled together. In a particular embodiment, the digital baseband processor 1204 is a digital signal processor. As illustrated in the general diagram of FIG. 12, a display controller 1208 and a touch screen controller 1210 are coupled to the digital baseband processor 1204. A touch screen display 1212 external to the system-on-chip 1202 is in turn coupled to the display controller 1208 and the touch screen controller 1210.

The general diagram of FIG. 12 further illustrates a video encoder 1214 (e.g., a phase alternating line (PAL) encoder, a sequential couleur a memoire (SECAM) encoder, or a National Television System(s) Committee (NTSC) encoder) coupled to the digital baseband processor 1204. In addition, a video amplifier 1216 is coupled to the video encoder 1214 and the touch screen display 1212. Also, a video port 1218 is coupled to the video amplifier 1216. As depicted in the general diagram of FIG. 12, a universal serial bus (USB) controller 1220 is coupled to the digital baseband processor 1204. Also, a USB port 1222 is coupled to the USB controller 1220. A memory 1224 and a subscriber identity module (SIM) card 1226 may also be coupled to the digital baseband processor 1204.
In addition, as shown in the general diagram of FIG. 12, a digital camera 1228 may be coupled to the digital baseband processor 1204. In an exemplary embodiment, the digital camera 1228 is a charge coupled device (CCD) camera or a complementary metal-oxide semiconductor (CMOS) camera.

As further illustrated in the general diagram of FIG. 12, a stereo audio CODEC 1230 may be coupled to the analog baseband processor 1206. In addition, an audio amplifier 1232 can be coupled to the stereo audio CODEC 1230. In an exemplary embodiment, a first stereo speaker 1234 and a second stereo speaker 1236 are coupled to the audio amplifier 1232. A microphone amplifier 1238 can also be coupled to the stereo audio CODEC 1230. In addition, a microphone 1240 may be coupled to the microphone amplifier 1238. In certain embodiments, a frequency modulation (FM) radio tuner 1242 may be coupled to the stereo audio CODEC 1230. Also, an FM antenna 1244 is coupled to the FM radio tuner 1242. In addition, stereo headphones 1246 can be coupled to the stereo audio CODEC 1230.

The general diagram of FIG. 12 further illustrates that a radio frequency (RF) transceiver 1248 can be coupled to the analog baseband processor 1206. An RF switch 1250 may be coupled to the RF transceiver 1248 and an RF antenna 1252. A keypad 1254 may be coupled to the analog baseband processor 1206. Also, a mono headset 1256 with a microphone may be coupled to the analog baseband processor 1206. In addition, a vibrator device 1258 may be coupled to the analog baseband processor 1206. The general diagram of FIG. 12 also shows that a power supply 1260 can be coupled to the system-on-chip 1202. In certain embodiments, the power supply 1260 is a direct current (DC) power supply that provides power to the various components of the cellular phone 1200. Furthermore, in certain embodiments, the power supply is a rechargeable DC battery or a DC power supply derived from an alternating current (AC) power source through an AC-to-DC transformer.

As depicted in the general diagram of FIG. 12, the touch screen display 1212, the video port 1218, the USB port 1222, the camera 1228, the first stereo speaker 1234, the second stereo speaker 1236, the microphone 1240, the FM antenna 1244, the stereo headphones 1246, the RF switch 1250, the RF antenna 1252, the keypad 1254, the mono headset 1256, the vibrator 1258, and the power supply 1260 may be external to the system-on-chip 1202.

In certain embodiments, the digital baseband processor 1204 may include sleep enable combinational logic to implement output vectors during the sleep state to reduce leakage current and preserve hardware unit values in order to conserve power from the power supply 1260. In one embodiment, the DSP 1204 may include a sleep controller 1262 to switch the sleep enable combinational logic. Therefore, when the DSP 1204 receives the sleep signal or other signals, the sleep controller 1262 receives the signal and controls the sleep enable combinational logic. For example, in FIGS. 1-7, the sleep controller may send the sleep signal to activate the sleep enable combinational logic. In another embodiment, the sleep controller may be located outside the DSP 1204.

FIG. 13 is a diagram illustrating an exemplary embodiment of a wireless Internet Protocol (IP) phone 1300. As shown, the wireless IP phone 1300 includes a system-on-chip 1302 that includes a digital signal processor (DSP) 1304. A display controller 1306 may be coupled to the DSP 1304, and a display 1308 is coupled to the display controller 1306.
In the exemplary embodiment, the display 1308 is a liquid crystal display (LCD). FIG. 13 further shows that a keypad 1310 can be coupled to the DSP 1304.

A flash memory 1312 may be coupled to the DSP 1304. A synchronous dynamic random access memory (SDRAM) 1314, a static random access memory (SRAM) 1316, and an electrically erasable programmable read only memory (EEPROM) 1318 can also be coupled to the DSP 1304. The general diagram of FIG. 13 also shows that a light emitting diode (LED) 1320 can be coupled to the DSP 1304. In addition, in certain embodiments, a voice CODEC 1322 may be coupled to the DSP 1304. An amplifier 1324 may be coupled to the voice CODEC 1322, and a mono speaker 1326 may be coupled to the amplifier 1324. The general diagram of FIG. 13 further illustrates a mono headset 1328 coupled to the voice CODEC 1322. In certain embodiments, the mono headset 1328 includes a microphone.

A wireless local area network (WLAN) baseband processor 1330 may be coupled to the DSP 1304. An RF transceiver 1332 may be coupled to the WLAN baseband processor 1330, and an RF antenna 1334 may be coupled to the RF transceiver 1332. In certain embodiments, a Bluetooth controller 1336 may also be coupled to the DSP 1304, and a Bluetooth antenna 1338 may be coupled to the controller 1336. The general diagram of FIG. 13 also shows that a USB port 1340 can be coupled to the DSP 1304. In addition, a power supply 1342 is coupled to the system-on-chip 1302 and provides power to the various components of the wireless IP phone 1300.

As indicated in the general diagram of FIG. 13, the display 1308, the keypad 1310, the LED 1320, the mono speaker 1326, the mono headset 1328, the RF antenna 1334, the Bluetooth antenna 1338, the USB port 1340, and the power supply 1342 may be external to the system-on-chip 1302 and coupled to one or more components of the system-on-chip 1302. In certain embodiments, the DSP 1304 may include sleep enable combinational logic to implement the output vector during the sleep state to reduce leakage current and retain hardware unit values in order to conserve power from the power supply 1342. In one embodiment, the DSP 1304 may include a sleep controller 1362 to switch the sleep enable combinational logic. Therefore, when the DSP 1304 receives the sleep signal or other signals, the sleep controller 1362 receives the signal and controls the sleep enable combinational logic. For example, in FIGS. 1-7, the sleep controller may send the sleep signal to activate the sleep enable combinational logic. In another embodiment, the sleep controller may be external to the DSP 1304.

FIG. 14 is a diagram illustrating an exemplary embodiment of a portable digital assistant (PDA) 1400. As shown, the PDA 1400 includes a system-on-chip 1402 that includes a digital signal processor (DSP) 1404. A touch screen controller 1406 and a display controller 1408 are coupled to the DSP 1404. In addition, a touch screen display 1410 is coupled to the touch screen controller 1406 and to the display controller 1408. The general diagram of FIG. 14 also indicates that a keypad 1412 can be coupled to the DSP 1404.

In certain embodiments, a stereo audio CODEC 1426 may be coupled to the DSP 1404. A first stereo amplifier 1428 may be coupled to the stereo audio CODEC 1426, and a first stereo speaker 1430 may be coupled to the first stereo amplifier 1428. In addition, a microphone amplifier 1432 may be coupled to the stereo audio CODEC 1426, and a microphone 1434 may be coupled to the microphone amplifier 1432.
The general diagram of FIG. 14 further shows that a second stereo amplifier 1436 can be coupled to the stereo audio CODEC 1426, and a second stereo speaker 1438 can be coupled to the second stereo amplifier 1436. In certain embodiments, a stereo headset 1440 may also be coupled to the stereo audio CODEC 1426.

The general diagram of FIG. 14 also illustrates that an 802.11 controller 1442 can be coupled to the DSP 1404, and an 802.11 antenna 1444 can be coupled to the 802.11 controller 1442. In addition, a Bluetooth controller 1446 may be coupled to the DSP 1404, and a Bluetooth antenna 1448 may be coupled to the Bluetooth controller 1446. A USB controller 1450 may be coupled to the DSP 1404, and a USB port 1452 may be coupled to the USB controller 1450. In addition, a smart card 1454 (e.g., a multimedia card (MMC) or a secure digital (SD) card) may be coupled to the DSP 1404. In addition, a power supply 1456 can be coupled to the system-on-chip 1402 and can provide power to the various components of the PDA 1400.

As indicated in the general diagram of FIG. 14, the display 1410, the keypad 1412, the IrDA port 1422, the digital camera 1424, the first stereo speaker 1430, the microphone 1434, the second stereo speaker 1438, the stereo headset 1440, the 802.11 antenna 1444, the Bluetooth antenna 1448, the USB port 1452, and the power supply 1456 may be external to the system-on-chip 1402 and coupled to one or more components on the system-on-chip. In certain embodiments, the DSP 1404 may include sleep enable combinational logic to implement an output vector during the sleep state to reduce leakage current and retain hardware unit values in order to conserve power from the power supply 1456. In one embodiment, the DSP 1404 may include a sleep controller 1462 to switch the sleep enable combinational logic. Therefore, when the DSP 1404 receives a sleep signal or other signals, the sleep controller 1462 receives the signal and controls the sleep enable combinational logic. For example, in FIGS. 1-7, the sleep controller may send the sleep signal to activate the sleep enable combinational logic. In another embodiment, the sleep controller may be external to the DSP 1404.

FIG. 15 is a diagram illustrating an exemplary embodiment of an audio file player (e.g., an MP3 player) 1500. As shown, the audio file player 1500 includes a system-on-chip 1502 that includes a digital signal processor (DSP) 1504. A display controller 1506 may be coupled to the DSP 1504, and a display 1508 is coupled to the display controller 1506. In the exemplary embodiment, the display 1508 is a liquid crystal display (LCD). A keypad 1510 can be coupled to the DSP 1504.

As further depicted in the general diagram of FIG. 15, a flash memory 1512 and a read-only memory (ROM) 1514 may be coupled to the DSP 1504. Additionally, in certain embodiments, an audio CODEC 1516 may be coupled to the DSP 1504. An amplifier 1518 may be coupled to the audio CODEC 1516, and a mono speaker 1520 may be coupled to the amplifier 1518. The general diagram of FIG. 15 further indicates that a microphone input 1522 and a stereo input 1524 can also be coupled to the audio CODEC 1516. In certain embodiments, a stereo headset 1526 may also be coupled to the audio CODEC 1516.

A USB port 1528 and a smart card 1530 can be coupled to the DSP 1504. In addition, a power supply 1532 can be coupled to the system-on-chip 1502 and can provide power to the various components of the audio file player 1500.
As indicated in the general diagram of FIG. 15, the display 1508, the keypad 1510, the mono speaker 1520, the microphone input 1522, the stereo input 1524, the stereo headset 1526, the USB port 1528, and the power supply 1532 are external to the system-on-chip 1502 and coupled to one or more components on the system-on-chip 1502. In certain embodiments, the digital signal processor 1504 may include sleep enable combinational logic to implement an output vector during the sleep state to reduce leakage current and retain hardware unit values in order to conserve power from the power supply 1532. In one embodiment, the DSP 1504 may include a sleep controller 1562 to switch the sleep enable combinational logic. Therefore, when the DSP 1504 receives a sleep signal or other signals, the sleep controller 1562 receives the signal and controls the sleep enable combinational logic. For example, in FIGS. 1-7, the sleep controller may send the sleep signal to activate the sleep enable combinational logic. In another embodiment, the sleep controller may be located outside the DSP 1504.

Overview

The above description of the embodiments of the inventive concepts disclosed herein has been presented for purposes of illustration and description only, and is not intended to be exhaustive or to limit the inventive concepts disclosed herein to the precise forms disclosed. Those skilled in the art will appreciate that many modifications and variations are possible without departing from the spirit and scope of the inventive concepts disclosed herein.
According to the invention there is provided a method of rendering user interface elements on a display, the method comprising: defining an archive file hierarchy, wherein the archive file hierarchy includes a plurality of archive files and each of the plurality of archive files has a position in the archive file hierarchy in a range between a highest position and a lowest position; storing one or more user interface elements in each of the plurality of archive files; and rendering each of the one or more user interface elements based on the position of a respective archive file in which each of the one or more user interface elements is stored.
A method of rendering user interface elements on a display, the method comprising: defining an archive file hierarchy, wherein the archive file hierarchy includes a plurality of archive files and each of the plurality of archive files has a position in the archive file hierarchy in a range between a highest position and a lowest position; storing one or more user interface elements in each of the plurality of archive files; and rendering each of the one or more user interface elements based on the position of a respective archive file in which each of the one or more user interface elements is stored.
A method according to claim 1, further comprising: rendering a user interface element within an archive file having a highest position in the archive file hierarchy to appear in a display.
A method according to claim 2, further comprising: giving a user interface element within an archive file having a highest position in the archive file hierarchy preference to pixels in the display over any other user interface element having a lower position in the archive file hierarchy that attempts to use the pixels.
A method according to any one of the preceding claims, wherein the one or more user interface elements are defined by a mobile network operator, a device manufacturer, a trig, a user, or a combination thereof.
A method according to claim 4, wherein the one or more user interface elements are prioritized in the following order from highest to lowest: mobile network operator defined user interface elements, device manufacturer defined user interface elements, trig defined user interface elements, and user defined user interface elements.
A method according to any one of the preceding claims, further comprising: defining a windowtitle.txt element in one or more archive files, wherein the windowtitle.txt element defines one or more attributes for text used in a title of a window to be rendered at a display; and displaying text associated with a windowtitle.txt element having a highest position in the archive file hierarchy unless an archive file not associated with a windowtitle.txt element and having a higher position than the windowtitle.txt element includes an instruction to ignore any windowtitle.txt elements associated with lower archive files.
A method according to any one of the preceding claims, further comprising: defining an obscuring element in one or more archive files, wherein the obscuring element is configured to mask a user interface element that occupies a common region of a display with the obscuring element and is stored within an archive file having a lower position in the archive file hierarchy than an archive file in which the obscuring element is located.
A method according to any one of the preceding claims, further comprising: refusing to fetch a user interface element to be masked from an archive file having a lower position in the archive file hierarchy when an obscuring element in an archive file having a higher position in the archive file hierarchy is to be rendered and occupies a common region of the display with the user interface element to be masked.
the position of a respective archive file in which each of the one or more user elements is stored.A device according to claim 9, further comprising:means for rendering a user interface element within an archive file having a highest position in the archive file hierarchy to appear in a display.A device according to claim 10, further comprising:means for giving a user interface element within an archive file having a highest position in the archive file hierarchy preference to pixels in the display over any other user interface element having a lower position in the archive file hierarchy that attempts to use the pixels.A device according to any one of claims 9 to 11, wherein the one or more user interface elements are defined by a mobile network operator, a device manufacturer, a trig, a user, or a combination thereof.A device according to claim 12, wherein the one or more user interface elements are prioritized in the following order from highest to lowest: mobile network operator defined user interface elements, device manufacturer defined user interface elements, trig defined user interface elements, and user defined user interface elements.A device according to any one of claims 9 to 11, further comprising:means for defining a windowtitle.txt element in one or more archive files, wherein the windowtitle.txt element defines one or more attributes for text used in a title of a window to be rendered at a display; and means for displaying text associated with a windowtitle.txt element having a highest position in the archive file hierarchy unless an archive file not associated with a windowtitle.txt element and having a higher position than the windowtitle.txt element includes an instruction to ignore any windowtitle.txt elements associated with lower archive files.A device according to any one of claims 9 to 11, further comprising:means for defining an obscuring element in one or more archive files, wherein the obscuring element is configured to mask a user interface element that occupies a common regionof a display as the obscuring element and is stored within an archive file having a lower position in the archive file hierarchy than an archive file in which the obscuring element is located.A device according to claim 15, further comprising:means for refusing to fetch a user element to be masked from an archive file having a lower position in the archive file hierarchy when an obscuring element in an archive file having a higher position in the archive file hierarchy is to be rendered and occupies a common region of a display as the user element to be masked.A computer readable medium carrying a computer program for implementing the method according to any one of the preceding claims.
FIELD OF THE INVENTION

The present invention relates to user interfaces, and in particular to user interfaces for devices for use with a mobile communications network.

BACKGROUND OF THE INVENTION

There is now a significant market in downloading images, ringtones, wallpapers, etc., to enable users to modify the appearance of their mobile phones. For commercial reasons it is desirable for mobile network operators and/or content providers to have some control over the user interface that will be displayed on the screen of a mobile device. Conventional methods for implementing user interfaces lack the flexibility and configurability needed to implement such schemes.

SUMMARY OF THE INVENTION

According to the invention there is provided a method of rendering user interface elements on a display, the method comprising: defining an archive file hierarchy, wherein the archive file hierarchy includes a plurality of archive files and each of the plurality of archive files has a position in the archive file hierarchy in a range between a highest position and a lowest position; storing one or more user interface elements in each of the plurality of archive files; and rendering each of the one or more user interface elements based on the position of a respective archive file in which each of the one or more user interface elements is stored.

In one embodiment, the method may further comprise rendering a user interface element within an archive file having a highest position in the archive file hierarchy to appear in a display. In that embodiment the method may further comprise giving a user interface element within an archive file having a highest position in the archive file hierarchy preference to pixels in the display over any other user interface element having a lower position in the archive file hierarchy that attempts to use the pixels.

The one or more user interface elements may be defined by a mobile network operator, a device manufacturer, a trig, a user, or a combination thereof. The one or more user interface elements may be prioritized in the following order from highest to lowest: mobile network operator defined user interface elements, device manufacturer defined user interface elements, trig defined user interface elements, and user defined user interface elements.

In one embodiment the method further comprises defining a windowtitle.txt element in one or more archive files, wherein the windowtitle.txt element defines one or more attributes for text used in a title of a window to be rendered at a display; and displaying text associated with a windowtitle.txt element having a highest position in the archive file hierarchy unless an archive file not associated with a windowtitle.txt element and having a higher position than the windowtitle.txt element includes an instruction to ignore any windowtitle.txt elements associated with lower archive files.

In one embodiment the method further comprises defining an obscuring element in one or more archive files, wherein the obscuring element is configured to mask a user interface element that occupies a common region of a display as the obscuring element and is stored within an archive file having a lower position in the archive file hierarchy than an archive file in which the obscuring element is located.

In one embodiment the method further comprises refusing to fetch a user interface element to be masked from an archive file having a lower position in the archive file hierarchy when an obscuring element in an archive file having a higher position in the archive file hierarchy is to be rendered and occupies a common region of the display as the user interface element to be masked.
The invention also provides a computer readable medium comprising at least one instruction for implementing any of the above methods.

According to another aspect of the invention there is provided a device, comprising: means for defining an archive file hierarchy, wherein the archive file hierarchy includes a plurality of archive files and each of the plurality of archive files has a position in the archive file hierarchy in a range between a highest position and a lowest position; means for storing one or more user interface elements in each of the plurality of archive files; and means for rendering each of the one or more user interface elements based on the position of a respective archive file in which each of the one or more user interface elements is stored.

In one embodiment the device further comprises means for rendering a user interface element within an archive file having a highest position in the archive file hierarchy to appear in a display. In this case the device may further comprise means for giving a user interface element within an archive file having a highest position in the archive file hierarchy preference to pixels in the display over any other user interface element having a lower position in the archive file hierarchy that attempts to use the pixels.

According to another aspect of the invention there is provided a device comprising: a memory; and a processor coupled to the memory, wherein the processor is operable to execute one or more instructions stored within the memory in order to: define an archive file hierarchy, wherein the archive file hierarchy includes a plurality of archive files and each of the plurality of archive files has a position in the archive file hierarchy in a range between a highest position and a lowest position; store one or more user interface elements in each of the plurality of archive files; and render each of the one or more user interface elements based on the position of a respective archive file in which each of the one or more user interface elements is stored.

In one embodiment the processor is further operable to render a user interface element within an archive file having a highest position in the archive file hierarchy to appear in a display.
In this case the processor may be further operable to give a user interface element within an archive file having a highest position in the archive file hierarchy preference to pixels in the display over any other user interface element having a lower position in the archive file hierarchy that attempts to use the pixels.

The one or more user interface elements may be defined by a mobile network operator, a device manufacturer, a trig, a user, or a combination thereof, and may be prioritized in the following order from highest to lowest: mobile network operator defined user interface elements, device manufacturer defined user interface elements, trig defined user interface elements, and user defined user interface elements.

In one embodiment the device further comprises means for defining a windowtitle.txt element in one or more archive files, wherein the windowtitle.txt element defines one or more attributes for text used in a title of a window to be rendered at a display; and means for displaying text associated with a windowtitle.txt element having a highest position in the archive file hierarchy unless an archive file not associated with a windowtitle.txt element and having a higher position than the windowtitle.txt element includes an instruction to ignore any windowtitle.txt elements associated with lower archive files.

The device may further comprise means for defining an obscuring element in one or more archive files, wherein the obscuring element is configured to mask a user interface element that occupies a common region of a display as the obscuring element and is stored within an archive file having a lower position in the archive file hierarchy than an archive file in which the obscuring element is located.

The device may further comprise means for refusing to fetch a user interface element to be masked from an archive file having a lower position in the archive file hierarchy when an obscuring element in an archive file having a higher position in the archive file hierarchy is to be rendered and occupies a common region of a display as the user interface element to be masked.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows a schematic depiction of a system incorporating the present invention;
Figure 2 depicts in greater detail the structure and operation of the server;
Figure 3 shows a schematic depiction of the software for the mobile devices;
Figure 4 shows a schematic depiction of four hierarchical planes; and
Figure 5 shows a schematic depiction of a device that comprises a user interface according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The invention will now be described by way of illustration only and with respect to the accompanying drawings, in which Figure 1 shows a schematic depiction of a system incorporating the present invention. The system comprises server 100, content toolset 200, mobile devices 300, operational support systems (OSSs) 700, content feeds 500 and user interface (UI) sources 610, 620. In use, the server 100 communicates content data and UI data to the mobile devices 300, each of which comprises a software package 400. The server 100 interfaces with OSSs 700, the OSSs being those conventionally used to operate mobile networks, for example billing, account management, etc.
The server 100 further interfaces with the content toolset 200: the content toolset receives data from UI sources 610, 620, ..., and packages the UI data such that the server can transmit the packaged UI data to the software packages 400 comprised within the mobile devices 300. The server receives data from a plurality of content feeds 510, 520, 530, and this data is processed and packaged such that it can be sent to the software packages 400, or so that the mobile devices 300 can access the data using the software package 400.

The system can be envisaged as being divided into three separate domains: the operator domain comprises the systems and equipment operated by the mobile network operator (MNO); the user domain comprises a plurality of mobile devices; and the third-party domain comprises the content feeds and UI feeds that may be controlled or operated by a number of different entities.

Figure 2 depicts in greater detail the structure and operation of server 100. Server 100 comprises publishing component 110 and content server component 150. The publishing component comprises database 111, import queue 112, content toolset interface 113, user interface 114 and catalogue 115. In operation, the publishing component receives content from the content toolset at the content toolset interface. The content is presented in the form of a parcel 210 (see below) comprising one or more trigs and one or more triglets. A trig is a user interface for a mobile device, such as a mobile telephone, and a triglet is a data file that can be used to extend or alter a trig. If a parcel comprises more than one trig then one of the trigs may be a master trig from which the other trigs are derived.

The publishing component user interface 114 can be used to import a parcel into the database 111, and this process causes references to each trig and triglet to be loaded into the import queue 112, which may comprise references to a plurality of parcels 210, ... . The contents of the parcel may be examined using the user interface and then passed to the catalogue.

The MNO may have several publishing domains, for example one for each target server in a number of countries or regions. Each domain is treated in isolation from other domains and has its own publishing scheme describing how objects are to be published onto content servers in both live and staging environments. The publishing component GUI provides several different views of each domain, enabling operators to completely manage the publishing of content. The catalogue comprises references to the stored trigs and to the update channels and feed channels used to transfer content to the various domains. For each domain, the operator uses the publishing component GUI to set up the domain structure and allocate trigs from the catalogue to each domain node. To aid the operator in selecting trigs efficiently, a filter is provided in the catalogue so that only relevant items are shown.

The content server component 150 is a standard implementation of a web server and as such its scaling model is well understood. The capabilities of a server can be rated using a "SPECweb99" number indicating the number of concurrent sessions that the web server can handle under benchmark conditions. Published SPECweb99 numbers range from 404 to 21,000, with typical commercial web servers having SPECweb99 numbers in the order of 5,000.
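The load figures quoted in the following paragraphs follow from simple arithmetic. The Python sketch below is purely illustrative and not part of the described system; it reproduces those figures from the stated assumptions of 1 million subscribers, hourly triglet updates and a session held open for roughly 4 seconds.

    # Back-of-envelope check of the web-server load figures quoted below.
    # Assumptions taken from the text: 1,000,000 subscribers, one triglet
    # update per hour, and a session held open for ~4 seconds (one 1 KB
    # packet over a GPRS link with ~2 s round-trip latency).
    import math

    subscribers = 1_000_000
    update_interval_s = 3600        # hourly updates
    session_length_s = 4            # ~4 s per triglet download session

    hits_per_second = math.ceil(subscribers / update_interval_s)   # 278 hits/s
    concurrent_sessions = hits_per_second * session_length_s       # 1,112 sessions

    print(hits_per_second, concurrent_sessions)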
A typical deployment scenario of 1 million subscribers with hourly updating content requires a web server with a SPECweb99 rating of only 1,112. A successful deployment will lead to increased service use, which can be provided for by adding servers to create an infrastructure that is both scalable and highly resilient to failure.

A connection may be made to the server from a mobile device via a WAP gateway. In this case the web server session exists between the WAP gateway and the web server, rather than between the mobile phone and the web server. When a request is made for a file via the WAP gateway, the session with the web server lasts only as long as it takes to transfer the file from the web server to the WAP gateway - i.e. the session is extremely short, since the connection bandwidth will be very high and the latency extremely low. Alternatively a direct connection may be established between the mobile phone and the web server. In this case, the web server will need to keep the session open for as long as it takes to download the data to the phone.

There are two types of content delivered by the content server component: trigs, typically of the order of 100 KB, and regularly updating triglets, typically of the order of 1 KB. The traffic created by trig downloads is very similar to the traffic generated by existing content downloads, and thus the related issues are well understood. Downloads of regular triglet updates are a new feature in an MNO's traffic model, but because of the small size of the updates, which typically fit within one data packet, the traffic can still be handled by typical web servers.

In the case of a triglet download, typically only one data packet is required to transfer 1 KB. Assuming a round-trip latency across a GPRS network of 2 seconds, the web server will need to hold a typical session open for around 4 seconds. For the scenario of 1 million subscribers having a trig on their phone with content that updates every hour, this implies 278 hits per second on the web server and 1,112 concurrent sessions. As stated earlier, this number is well within the capability of typical web servers.

Figure 3 shows a schematic depiction of the software 400 for the mobile devices 300, which comprises a mark-up language renderer 410, update manager 420, network communication agent 425, resource manager 430, virtual file system 435, actor manager 440, a plurality of actors 445, native UI renderer 450, support manager 460, trig manager 465 and mark-up language parser 470.

It is preferred that the software operates using TrigML, which is an XML application, and that the mark-up language renderer 410 renders the TrigML code for display on the mobile device 300. The mark-up language renderer also uses the TrigML parser to parse TrigML resources, display content on the device screen, and control the replacement and viewing of content on the handset. The native UI renderer is used to display UI components that can be displayed without the use of TrigML, and for displaying error messages.

The software 400 is provisioned and installed in a device-specific manner. For example, for a Nokia Series 60 device the software is installed using a SIS file, whereas for a MS Smartphone device the software is installed using a CAB file. Similarly, software upgrades are handled in a device-specific manner. The software may be provisioned in a more limited format, as a self-contained application that renders its built-in content only: i.e. the software is provisioned with a built-in trig but additional trigs cannot be added later. The supplied trig may be upgraded over the air.
The trig manager 465 presents an interface to the resource manager 430 and the mark-up language renderer, and is responsible for trig management in general. This includes: persisting knowledge of the trig in use; changing the current trig; selecting a trig on start-up; selecting a further trig as a fallback for a corrupt trig; maintaining the set of installed trigs; identifying to the resource manager where a particular trig is installed; and reading the update channel definitions of a trig and configuring the update manager appropriately.

The resource manager provides an abstraction of the persistent store on the device, i.e. storing the files as real files, or as records in a database. The resource manager presents a file system interface to the mark-up language renderer and the update manager. It is responsible for handling file path logic, distinguishing between real resource files and actor attributes, mapping trig-relative paths onto absolute paths, interfacing with the trig manager and providing a modification interface to the update manager.

The update manager handles the reception and application of trigs and triglets. The update manager presents an interface to the renderer and the trig manager and is responsible for: initiating manual updates when instructed to by the renderer; controlling and implementing the automatic update channel when so configured by the trig manager; indicating the progress of a manual update; and recovering an update following unexpected loss of network connection and/or device power. The update packet format may be defined as a binary serialisation of an XML schema.

The support manager provides an interface for other components to report the occurrence of an event or error. Depending on the severity of the error, the support manager will log the event and/or put up an error message popup.

XML is a convenient data formatting language that is used to define the update packet format as well as TrigML content. For bandwidth and storage efficiency reasons, text XML is serialised into a binary representation. Both update packets and TrigML fragments are parsed by the same component, the mark-up language parser. Any further use of XML in the software must use the binary XML encoding and therefore re-use the parser.

The actor manager 440 looks after the set of actors 445 present in the software. It is used by: the renderer, when content is sending events to an actor; actors that want to notify that an attribute value has changed; and actors that want to emit an event (see below).

The software may comprise a multi-threaded application running a minimum of two threads, with more possible depending on how many and what sort of actors are included. The software runs mostly in one thread, referred to as the main thread. The main thread is used to run the renderer, which communicates synchronously with other components. Actors always have a synchronous interface to the renderer. If an actor requires additional threads for its functionality, then it is the responsibility of the actor to manage the inter-thread communication. It is preferred that a lightweight messaging framework is used to avoid unnecessary code duplication where many actors require inter-thread communication.

In addition to the main thread, the update manager runs a network thread.
The network thread is used to download update packets and is separate from the main thread to allow the renderer to continue unaffected until the packet has arrived. The update manager is responsible for handling inter-thread messaging such that it communicates synchronously with the renderer and resource manager when applying the changes defined in an update packet.

The renderer receives information regarding key presses. If no behaviour is configured at build time for a key, the key press is sent as a TrigML content event to the current focus element. The content event is then handled as defined by TrigML's normal event processing logic. For example, if a key is pressed down, a 'keypress' event is delivered to the renderer with a parameter set to the relevant key. When the key is released, a '!keypress' event is delivered to the renderer. If a key is held down for an extended period of time, a 'longkeypress' event is delivered to the renderer. On release, both a '!longkeypress' and a '!keypress' event are delivered to the renderer.

A trig is started by loading the defined resource name, start-up/default. The TrigML defined in start-up/default is parsed as the new contents for the content root node. The first time a trig is run by the software following its installation, the trig is started by loading the resource name start-up/firsttime. The software may record whether a trig has been run or not in a file located in the top-level folder for that trig. Depending on the platform used by the mobile device, the automatic start-up of the software may be set as a build-time configuration option. Furthermore, placing the software in the background following an auto-start may also be a build-time configuration option.

A launcher may appear to the user as an application icon; selecting it starts the software with a trig specified by that launcher (this trig may be indicated by a launcher icon and/or name). When using a launcher to start a trig, it is possible to specify an 'entry point' parameter. The parameter is a resource name of a file found in the 'start-up' folder. This file is not used if the trig has never been run before, in which case the file called 'firsttime' is used instead.

The software uses content resource files stored in a virtual file system on the device. The file system is described as virtual as it may not be implemented as a classical file system; however, all references to resources are file paths as if stored in a hierarchical system of folders and files. Furthermore, the software stores some or all of the following information: usage statistics; active user counts; trig manager state; TrigML fragments and update channel definitions (serialised as binary XML); PNG images; plain text, encoded as UTF-8 over the air and then stored in a platform-specific encoding; and other platform-specific resources, e.g. ring tone files, background images, etc.

Files in the file system can be changed either when an actor attribute value changes or when a file is replaced by a triglet. When files in the /attrs directory change, the renderer is immediately notified and the relevant branches of the content tree are updated and refreshed. When images and text resources are changed, the renderer behaves as if the affected resources are immediately reloaded (either the whole content tree or just the affected branches may be refreshed). When TrigML fragments are changed, the renderer is not notified and continues to display its current, possibly out-of-date, content.
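The three notification behaviours just described can be summarised in a short sketch. The Python below is illustrative only; the class and method names are assumptions made for the purpose of the example, not names taken from the software itself.

    # Illustrative dispatch of file-change notifications to the renderer:
    # attribute changes refresh the content tree immediately, image/text
    # changes cause a reload, and TrigML fragment changes are ignored.
    class ResourceChangeDispatcher:
        def __init__(self, renderer):
            self.renderer = renderer

        def on_file_changed(self, path):
            if path.startswith("/attrs/"):
                # Actor attribute changed: refresh the affected branches now.
                self.renderer.refresh_branches(path)
            elif path.endswith((".png", ".txt")):
                # Image or text resource replaced: reload the resource.
                self.renderer.reload_resource(path)
            else:
                # TrigML fragment replaced: the renderer is not notified and
                # keeps displaying its current, possibly out-of-date, content.
                pass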
TrigML fragment changes are left un-notified in order to avoid the software needing to persist <include> elements and the <load> history of the current content.

The software 400 is provisioned to mobile devices in a device-specific manner. One or more trigs can be provisioned as part of the installation, for example stored as an uncompressed update packet. On start-up, the packet can be expanded and installed to the file system.

The actors 445 are components that publish attribute values and handle and emit events. Actors communicate with the renderer synchronously. If an actor needs asynchronous behaviour, then it is the responsibility of the actor to manage and communicate with a thread external to the main thread of the renderer. An actor can be messaged by sending it an event with the <throw> element. Events emitted by actors can be delivered to the content tree as content events: these can be targeted at an element id or 'top'. The interface to an actor is defined by an actor interface definition file. This is an XML document that defines the attributes, types, fieldnames, events-in and parameters, and events-out. The set of actors is configurable at build time for the software.

For commercial reasons it is desirable for MNOs and/or content providers to be able to have some control over the user interface that will be displayed on the screen of a mobile device. It is also important that there is a degree of flexibility that allows users to download triglets or new trigs to modify the appearance of their device, and also to make further changes to the displayed image that is determined by the trig or triglet in use.

Content for the user interface is stored in archive files, and the UI of an application can be defined by one or more archive files. Each archive file may contain multiple resources (mark-up elements, images, text, etc.). As with other archive file formats, the files within the archive are stored in a tree structure, in a manner similar to a conventional folder/file structure. Where more than one archive file is used to define a UI, the required archive files are arranged in a strict sequence (or list). When a resource is required it is searched for in each archive file in turn, the search returning the first file found.

It is possible to order the archive files in such a manner that resources extracted from an archive file earlier in the list mask resources found in an archive file (or files) lower down in the list. For example, if there is a requirement to temporarily obscure an element of the UI, for example a window, then a mask element can be defined in an archive file which is stored at a higher level than the archive file that comprises the UI element to be obscured.

It should be noted that when the user interface content is interpreted to render the UI, the addition of an obscuring element causes the obscuring element to be rendered within the UI. As the obscuring element and the UI element to be masked occupy a common region of the UI, and the obscuring element is held within an archive file in a higher layer than the archive file that comprises the element to be masked, only the obscuring element is to be shown in the UI and thus only the obscuring element is fetched by the renderer in order to render the UI. Because the obscuring element is to be rendered and it occupies the same position as the element to be masked, the element to be masked is not fetched by the renderer.
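The masking behaviour can be illustrated with a minimal sketch of the first-match lookup described above. The Python below is purely illustrative: archives are modelled as simple dictionaries and the resource names are invented for the example.

    # First-match resource lookup over an ordered list of archive files.
    # The list is ordered from the highest layer of the hierarchy to the
    # lowest; the first archive containing the path masks all lower ones.
    def find_resource(archives, path):
        for archive in archives:
            if path in archive:
                return archive[path]
        return None

    higher_layer = {"windows/main": "<obscuring element>"}
    lower_layer = {"windows/main": "<window to be masked>",
                   "background": "<backgroundcolour>"}

    # Only the obscuring element is fetched; the masked window in the
    # lower archive is never read.
    assert find_resource([higher_layer, lower_layer],
                         "windows/main") == "<obscuring element>"
    assert find_resource([higher_layer, lower_layer],
                         "background") == "<backgroundcolour>"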
This approach reduces the amount of data that is fetched by the renderer in order to render the UI, and thus reduces the processing power required and the time taken to render the UI.

This method is of significant use where an archive file is needed by more than one application on a device (such an archive file may be referred to as the common archive or base archive). Each application can then supply its own application-specific archive file that masks some of the resources in the common archive file and adds further resources that are required. Each application has a list of archive file(s) that it uses, but some of these archive files will be used by other applications as well. This can be thought of as a hierarchy of archive files: the common archive files are at the top layer of the hierarchy, with the application-specific archive files comprising the lower layers of the hierarchy. Archive files specific to an application may be referred to as flat files, as the different archive files are to be found in the same layer of the hierarchy.

According to a further embodiment, the above arrangement may be extended to allow the configuration and appearance of the UI to be controlled. For example, as the UI is effectively formed from the interaction of the elements that are stored in archive files associated with different layers of the hierarchy, it is possible to assign one or more layers of the hierarchy to different entities, such as the MNO, the device manufacturer, a trig provider, the device user, etc. This allocation of the different layers of the hierarchy, for example, enables a logo associated with the device manufacturer to be permanently displayed within the UI by assigning the top layer of the hierarchy to the manufacturer. If the network operator, or a trig provider, wishes to publicise a time-limited promotional offer then a suitable element can be added to an archive file in a layer associated with the network operator. The entity that is allocated the lowest level(s) of the hierarchy is able to change and modify all of the UI elements that have not been defined in archive files in any of the higher levels of the hierarchy.

Figure 4 shows a schematic depiction of four hierarchical planes 405a-d: plane 405a comprises UI elements defined by the MNO; plane 405b comprises UI elements defined by the device manufacturer; plane 405c comprises UI elements defined by a trig; and plane 405d comprises UI elements defined by the user. Plane 405a has the highest position in the hierarchy and plane 405d has the lowest position in the hierarchy. For example, the mno_logo element in plane 405a defines the graphic element used and its position on the display screen of the device. As it is in the highest plane of the hierarchy it will always appear and will take preference over any other UI element in a lower layer that attempts to use the pixels used by mno_logo. Plane 405d comprises the backgroundcolour element, which is not defined in any of the other planes, and thus the colour defined in backgroundcolour will be used in the UI.

Plane 405c comprises the windowtitle.txt element, which defines the attributes for the text used in the title of a window.
This may be overridden by adding a windowtitle.txt element to either plane 405a or 405b to define the text attributes, or by adding a windowtitle.txt_deleted element to either plane 405a or 405b to instruct the UI renderer to ignore any subsequent windowtitle.txt element.

This enables each feature of the UI to be configurable, but also provides a framework within which certain UI elements can be defined by an entity, such as the network operator, in a manner that prevents other entities from altering or interfering with those defined UI element(s).

Although the preceding discussion has referred to user interface content being stored within archive files, it will be readily understood that the user interface content may alternatively be stored in other forms that provide a collection of files, for example a folder of unpacked files, or a folder comprising unpacked files and other folders.

In conventional mobile devices, information regarding the battery strength, signal strength, new text messages, etc. is shown to the user. Typically this information is obtained by the operating system sending a call to the relevant hardware or software entity, with the UI interpreting the received answer and displaying it. This information may be displayed in the UI using a TrigML tag (see below) such as <phonestatus> (or <signalstrength>). Rendering this tag causes a listening query to be opened to the relevant hardware or software entity. If a change in state occurs then the UI renderer is notified and loads the relevant icon or graphic to communicate the change in state to the user. If the user changes the view within the UI the tag may be withdrawn and the listening query terminated. This approach is more resource efficient, as the listening query is only active when the tag is in use.

TrigML can use constant variables instead of attribute values. Constant variables are accessed with the same syntax as <include> parameters, e.g. $background_colour. Constants are treated as global variables in a trig and are defined in the reserved folder, constants/. The variable definitions contained in the files in the constants/ folder may be resolved at compile time with direct substitution of their values. In an alternative embodiment the variable definitions in constants/ are compiled as global variables and resolved at content parse time by the software. This allows the trig to be updated by a simple replacement of one or all of its constants files.

A System String Dictionary defines the integer values to use for all well-known strings, i.e. reserved words. These have several types, including: TrigML element and attribute names ('group', 'res', 'layer', 'image', 'x'); TrigML attribute values (e.g. 'left', 'activate', 'focus'); and common resource paths (e.g. 'attr', 'start-up', 'default'). As an input, the String Dictionary is optional. The first time a trig is compiled it will not have a String Dictionary. This first compilation creates the String Dictionary, which is then used for all future compilations of that trig.
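The role of the String Dictionary can be sketched as a simple interning table that is created on the first compilation and persisted for reuse. The Python below is an illustrative assumption about its structure, not a description of the actual compiler.

    # Illustrative String Dictionary: well-known strings are interned to
    # integers on first use; the mapping is saved after the first
    # compilation and reloaded for every later compilation of the trig.
    import json

    class StringDictionary:
        def __init__(self, mapping=None):
            self.mapping = mapping if mapping is not None else {}

        def intern(self, s):
            if s not in self.mapping:
                self.mapping[s] = len(self.mapping)   # next free integer
            return self.mapping[s]

        def save(self, path):
            with open(path, "w") as f:
                json.dump(self.mapping, f)

        @classmethod
        def load(cls, path):
            with open(path) as f:
                return cls(json.load(f))

    d = StringDictionary()
    for s in ("group", "res", "layer", "image", "x", "left", "start-up"):
        d.intern(s)
    d.save("trig.dict")   # reused by all subsequent compilations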
Triglet compilation must use a String Dictionary that defines all the string mappings used by the trig it is modifying.

In order to successfully render the user interface of a mobile device, the mark-up language must have the following qualities: concise page definitions; consistent layout rules; implementability in a compact renderer; support for multiple layers and arbitrarily overlapping content; an event model; repainting of only the areas of the display that have to change between pages of the UI; hooks to the platform for reading property values, receiving events and sending events; extensibility; and graphical flexibility. TrigML provides these features and an overview of the elements and attributes that provide the desired functionality can be found in our co-pending application GB0403709.9, filed 19 February 2004.

It is desirable that the cost of re-branding UIs and producing a continual stream of updates is minimal. This is enabled by providing an efficient flow of information from the creative process through to the transmission of data to the user. A container, referred to as a parcel, is used for UIs, UI updates, and templates for third-party involvement. Parcels contain all the information necessary for a third party to produce, test and deliver branded UIs and updates.

Figure 5 shows a schematic depiction of a device 800 that comprises a user interface according to an embodiment of the present invention. The device comprises a display 810 that displays the user interface 815, and user interface means 820 that enable the user to interact with the user interface 815. A processor 830 executes the software that is stored within one or more storage means 840, and there may be provided one or more wireless communication interfaces 850 to enable communication with other devices and/or communication networks. One or more batteries 860 may be provided to power the device, which may also comprise interfaces to receive electrical power and/or communication cables.

The nature of these components and interfaces will depend upon the nature of the device. It will be understood that such a user interface can be implemented within a mobile or cellular telephone handset, but it is also applicable to other portable devices such as digital cameras, personal digital organisers, digital music players, GPS navigators, portable gaming consoles, etc. Furthermore, it is also applicable to other devices that comprise a user interface, such as laptop or desktop computers.

The user interface means may comprise a plurality of buttons, such as a numerical or alpha-numerical keyboard, or a touch screen or similar. One or more storage devices may comprise a form of non-volatile memory, such as a memory card, so that the stored data is not lost if power is lost. ROM storage means may be provided to store data which does not need updating or changing. Some RAM may be provided for temporary storage, as the faster response times support the caching of frequently accessed data. The device may also accept user-removable memory cards, and optionally hard disk drives may be used as a storage means. The storage means used will be determined by balancing the different requirements of device size, power consumption, the volume of storage required, etc.

Such a device may be implemented in conjunction with virtually any wireless communications network, for example second generation digital mobile telephone networks (e.g. GSM, D-AMPS), so-called 2.5G networks (e.g. GPRS, HSCSD, EDGE), third generation WCDMA or CDMA-2000 networks, and improvements to and derivatives of these and similar networks.
Within buildings and campuses other technologies such as Bluetooth, IrDA or wireless LANs (whether based on radio or optical systems) may also be used. USB and/or FireWire connectivity may be supplied for data synchronisation with other devices and/or for battery charging.

Computer software for implementing the methods and/or for configuring a device as described above may be provided on data carriers such as floppy disks, CD-ROMs, DVDs, non-volatile memory cards, etc.
A method of manipulating an image by a device is disclosed. The method includes segmenting image data corresponding to the image into a first image layer and a second image layer. The method further includes adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.
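As a rough illustration of the segmentation and independent adjustment described above, the Python sketch below blurs a background layer while leaving the foreground untouched. The brightness-threshold mask is a deliberately simple stand-in for the clustering and grabcut-style segmentation recited in the claims that follow.

    # Illustrative only: split a grayscale image into two layers with a
    # simple threshold mask, then adjust (blur) the background layer
    # independently of the foreground layer.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_background(image, threshold=0.5, sigma=3.0):
        foreground_mask = image > threshold        # first image layer
        blurred = gaussian_filter(image, sigma=sigma)
        # Keep foreground pixels as-is; replace background pixels with
        # their blurred version, leaving the foreground unchanged.
        return np.where(foreground_mask, image, blurred)

    image = np.random.rand(64, 64)                 # stand-in image data
    edited = blur_background(image)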
1.A method of manipulating an image by a device, the method comprising:Segmenting image data corresponding to the image into a first image layer and a second image layer;The first attribute of the first image layer is adjusted based on a user input independent of a second attribute of the second image layer.2.The method of claim 1 further comprising:Receiving the user input at the device;The image data is modified based on the user input to produce modified image data representing the modified image, wherein the modified image depicts an independent modification of the first image layer relative to the second image layer.3.The method of claim 1 wherein one of said first image layer and said second image layer corresponds to a foreground of said image, and wherein said first image layer and said second image layer The other of them corresponds to the background of the image.4.The method of claim 1 wherein one of said first image layer and said second image layer corresponds to a first portion of a foreground of said image, and wherein said first image layer and said first The other of the two image layers corresponds to the second portion of the foreground of the image.5.The method of claim 1 wherein one of said first image layer and said second image layer corresponds to a first portion of a background of said image, and wherein said first image layer and said first The other of the two image layers corresponds to the second portion of the background of the image.6.The method of claim 1, wherein one or more of the first attribute or the second attribute corresponds to a color attribute, a sharpness attribute, a fuzzy attribute, or a context attribute.7.The method of claim 1 further comprising loading an image editing application to a memory of the device to enable editing of the image.8.The method of claim 1 further comprising identifying a cluster associated with the image data, wherein the image data is based on the cluster segmentation.9.The method of claim 1 wherein said first layer corresponds to a background of said image, wherein said second layer corresponds to a foreground of said image, and wherein said first layer is relative to said second Layer blur to approximate the hyperfocus camera effect.10.The method of claim 1 further comprising performing one or more component marking operations using the first image layer.11.The method of claim 10 wherein the user input is received via a user interface UI of the device.12.An apparatus comprising:Memory;a processor coupled to the memory, wherein the processor is configured to segment image data corresponding to an image into a first image layer and a second image layer, and independent of the second image layer based on user input The second attribute adjusts the first attribute of the first image layer.13.The device of claim 12, wherein the processor is further configured to receive the user input and modify the image data based on the user input to generate modified image data representing a modified image, and wherein the The modified image depicts an independent modification of the first image layer relative to the second image layer.14.The apparatus of claim 12, wherein one of the first image layer and the second image layer corresponds to a foreground of the image, and wherein the first image layer and the second image layer The other of them corresponds to the background of the image.15.The apparatus of claim 12, wherein one of the first image layer and the second image layer corresponds to a first portion of a foreground of the 
image, and wherein the first image layer and the first The other of the two image layers corresponds to the second portion of the foreground of the image.16.The apparatus of claim 12, wherein one of the first image layer and the second image layer corresponds to a first portion of a background of the image, and wherein the first image layer and the first The other of the two image layers corresponds to the second portion of the background of the image.17.The apparatus of claim 12, wherein one or more of the first attribute or the second attribute corresponds to a color attribute, a sharpness attribute, a fuzzy attribute, or a context attribute.18.The device of claim 12, wherein the processor is further configured to load an image editing application to enable editing of the image.19.The device of claim 12, wherein the processor is further configured to identify a cluster associated with the image data and segment the image data based on the cluster.20.The device of claim 12 integrated in a mobile device.21.The device of claim 12, wherein the processor is further configured to perform one or more component marking operations using the first image layer.22.The device of claim 21, further comprising a display device, wherein the processor is further configured to cause a user interface UI to be displayed at the display device, and wherein the user input is received via the display device.23.A non-transitory computer readable medium storing instructions executable by a processor to cause the processor to:Segmenting the image data associated with the image into a first image layer and a second image layer;The first attribute of the first image layer is adjusted based on a user input independent of a second attribute of the second image layer.24.The non-transitory computer readable medium of claim 23, wherein the instructions are further executable by the processor to receive the user input, and modifying the image data based on the user input to generate a representation of the modified image Modified image data, and wherein the modified image depicts an independent modification of the first image layer relative to the second image layer.25.A non-transitory computer readable medium according to claim 23, wherein one of said first image layer and said second image layer corresponds to a foreground of said image, and wherein said first image layer and The other of the second image layers corresponds to the background of the image.26.A non-transitory computer readable medium according to claim 23, wherein one of said first image layer and said second image layer corresponds to a first portion of a foreground of said image, and wherein said first The other of the image layer and the second image layer corresponds to a second portion of the foreground of the image.27.A non-transitory computer readable medium according to claim 23, wherein one of said first image layer and said second image layer corresponds to a first portion of a background of said image, and wherein said first The other of the image layer and the second image layer corresponds to a second portion of the background of the image.28.The non-transitory computer readable medium of claim 23, wherein one or more of the first attribute or the second attribute corresponds to a color attribute, a sharpness attribute, a fuzzy attribute, or a context attribute.29.The non-transitory computer readable medium of claim 23, wherein the instructions are further executable by the processor to load an image editing application to enable editing of 
the image.30.The non-transitory computer readable medium of claim 23 wherein said processor and said memory are integrated within a mobile device.31.The non-transitory computer readable medium of claim 23, wherein the instructions are further executable by the processor to identify a cluster associated with the image data, and segment the image data based on the cluster .32.The non-transitory computer readable medium of claim 23, wherein the instructions are further executable by the processor to perform one or more component marking operations using the first image layer.33.The non-transitory computer readable medium of claim 32, wherein the instructions are further executable by the processor to receive the user input via a user interface UI.34.An apparatus comprising:Means for segmenting image data associated with an image into a first image layer and a second image layer;Means for adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.35.The device of claim 34, further comprising:Means for receiving the user input;Means for modifying the image data based on the user input to generate modified image data representing the modified image,Wherein the modified image depicts an independent modification of the first image layer relative to the second image layer.36.The apparatus of claim 35, wherein one of the first image layer and the second image layer corresponds to a foreground of the image, and wherein the first image layer and the second image layer The other of them corresponds to the background of the image.37.The apparatus of claim 35, wherein one of the first image layer and the second image layer corresponds to a first portion of a foreground of the image, and wherein the first image layer and the first The other of the two image layers corresponds to the second portion of the foreground of the image.38.The apparatus of claim 35, wherein one of the first image layer and the second image layer corresponds to a first portion of a background of the image, and wherein the first image layer and the first The other of the two image layers corresponds to the second portion of the background of the image.39.The apparatus of claim 35, wherein one or more of the first attribute or the second attribute corresponds to a color attribute, a sharpness attribute, a fuzzy attribute, or a context attribute.40.The device of claim 35, further comprising means for loading an image editing application to enable editing of the image.41.The apparatus of claim 35, further comprising means for identifying a cluster associated with the image data, wherein the image data is based on the cluster segmentation.42.The device of claim 35 integrated in a mobile device.43.The apparatus of claim 35, further comprising means for performing one or more component marking operations using the first image layer.44.The device of claim 43, further comprising means for displaying a user interface UI to a user.45.A method comprising:Displaying a first image at the mobile device;Receiving a first user input at the mobile device, the first user input indicating a direction relative to the mobile device;Performing a first image editing operation on the first image to generate a second image based on the first user input;Displaying the second image at the mobile device;Receiving a second user input at the mobile device, the second user input indicating the direction;A second image editing operation is performed on the second image to generate a third 
image based on the second user input.46.The method of claim 45 wherein said first user input and said second user input correspond to a slip operation at a display device of said mobile device.47.The method of claim 45, wherein the first image editing operation comprises an image blurring operation, and wherein the second image editing operation comprises a color changing operation.48.The method of claim 45, wherein the order of the first image editing operation and the second image editing operation is based on user configuration parameters stored at the mobile device.49.The method of claim 48, further comprising receiving a user preference input, wherein the user preference input reconfigures the user configuration parameter to indicate that the color change operation precedes an image blur operation.50.The method of claim 45, further comprising:Receiving a third user input at the mobile device, the third user input indicating the direction;The first image is displayed at the mobile device based on the third user input.51.The method of claim 50 wherein said third user input corresponds to a command to revoke said first image editing operation and said second image editing operation.52.An apparatus comprising:Memory;a processor coupled to the memory, wherein the processor is configured to cause a mobile device to: display a first image; receive a first user input at the mobile device, the first user input indication relative to the a direction of the mobile device; performing a first image editing operation on the first image to generate a second image based on the first user input; causing the mobile device to: display the second image, receive a second user input, The second user input indicates the direction, and based on the second user input, performs a second image editing operation on the second image to generate a third image.53.The apparatus of claim 52, wherein the first user input and the second user input correspond to a slip operation at a display device of the mobile device.54.The apparatus of claim 52, wherein the first image editing operation comprises an image blurring operation, and wherein the second image editing operation comprises a color changing operation.55.The apparatus of claim 52, wherein the order of the first image editing operation and the second image editing operation is based on user configuration parameters stored at the mobile device.56.The device of claim 55, further comprising receiving a user preference input, wherein the user preference input reconfigures the user configuration parameter to indicate that the color change operation precedes an image blur operation.57.The device of claim 52, further comprising:Receiving a third user input at the mobile device, the third user input indicating the direction;The first image is displayed at the mobile device based on the third user input.58.The apparatus of claim 57, wherein the third user input corresponds to a command to cancel the first image editing operation and the second image editing operation.59.A computer readable medium storing instructions executable by a processor to cause a mobile device to:Displaying a first image at the mobile device;Receiving a first user input at the mobile device, the first user input indicating a direction relative to the mobile device;Performing a first image editing operation on the first image to generate a second image based on the first user input;Displaying the second image at the mobile device;Receiving a second user input at the mobile device, the second user 
input indicating the direction;A second image editing operation is performed on the second image to generate a third image based on the second user input.60.The computer readable medium of claim 59, wherein the first user input and the second user input correspond to a slip operation at a display device of the mobile device.61.The computer readable medium of claim 59, wherein the first image editing operation comprises an image blurring operation, and wherein the second image editing operation comprises a color changing operation.62.The computer readable medium of claim 59, wherein the order of the first image editing operation and the second image editing operation is based on user configuration parameters stored at the mobile device.63.The computer readable medium of claim 62, wherein the instructions are further executable by the processor to cause the mobile device to receive a user preference input, wherein the user preference input reconfigures the user configuration parameter to indicate The color change operation precedes the image blur operation.64.The computer readable medium of claim 59, wherein the instructions are further executable by the processor to cause the mobile device to receive a third user input at the mobile device, the third user input indicating the Direction, and displaying the first image at the mobile device based on the third user input.65.The computer readable medium of claim 64, wherein the third user input corresponds to a command to revoke the first image editing operation and the second image editing operation.66.An apparatus comprising:Means for displaying a first image at a mobile device;Means for receiving a first user input at the mobile device, the first user input indicating a direction relative to the mobile device;Means for performing a first image editing operation on the first image based on the first user input to generate a second image;Means for causing the mobile device to display the second image;Means for receiving a second user input, the second user input indicating the direction;Means for performing a second image editing operation on the second image based on the second user input to generate a third image.67.The apparatus of claim 66, wherein the first user input and the second user input correspond to a slip operation at a display device of the mobile device.68.The apparatus of claim 66, wherein the first image editing operation comprises an image blurring operation, and wherein the second image editing operation comprises a color changing operation.69.The apparatus of claim 66, wherein the order of the first image editing operation and the second image editing operation is based on user configuration parameters stored at the mobile device.70.The apparatus of claim 69, further comprising means for receiving a user preference input, wherein the user preference input reconfigures the user configuration parameter to indicate that the color change operation precedes an image blur operation.71.The device of claim 66, further comprising:Means for receiving a third user input at the mobile device, the third user input indicating the direction;Means for displaying the first image at the mobile device based on the third user input.72.The apparatus of claim 71, wherein the third user input corresponds to a command to cancel the first image editing operation and the second image editing operation.73.A method comprising:Receiving a first user input from a user interface, the first user input selecting an image for displaying an operation;Based 
on the first user input: performing the display operation; and automatically initiating a cluster operation using image data corresponding to the image.

74. The method of claim 73, wherein the first user input corresponds to a touch screen operation, the touch screen operation selecting the image from an image gallery presented at the user interface.

75. The method of claim 73, wherein the cluster operation is initiated to identify clusters within the image data while the display operation is performed to magnify the image from a thumbnail to a full view.

76. The method of claim 75, wherein the cluster operation uses a simple linear iterative clustering (SLIC) technique to identify the clusters.

77. The method of claim 73, further comprising: receiving a second user input from the user interface, the second user input identifying a first image layer of the image; and automatically initiating an image segmentation operation associated with the first image layer.

78. The method of claim 77, wherein the first image layer corresponds to a foreground of the image, and wherein the second user input corresponds to a sliding motion that selects the foreground.

79. The method of claim 77, further comprising receiving a third user input from the user interface, the third user input identifying a second image layer of the image.

80. The method of claim 79, wherein the second image layer corresponds to a background of the image, and wherein the third user input corresponds to a sliding motion that selects the background.

81. The method of claim 79, wherein the image segmentation operation processes the first image layer using a grabcut technique upon receiving the third user input.

82. An apparatus comprising: a memory; and a processor coupled to the memory, wherein the processor is configured to: receive a first user input from a user interface, the first user input selecting an image for a display operation; perform the display operation based on the first user input; and automatically initiate a cluster operation using image data corresponding to the image based on the first user input.

83. The apparatus of claim 82, wherein the first user input corresponds to a touch screen operation, the touch screen operation selecting the image from an image gallery presented at the user interface.

84. The apparatus of claim 82, wherein the cluster operation is initiated to identify clusters within the image data while the display operation is performed to magnify the image from a thumbnail to a full view.

85. The apparatus of claim 84, wherein the cluster operation uses a simple linear iterative clustering (SLIC) technique to identify the clusters.

86. The apparatus of claim 82, wherein the processor is further configured to receive a second user input from the user interface, the second user input identifying a first image layer of the image, and to automatically initiate an image segmentation operation associated with the first image layer.

87. The apparatus of claim 86, wherein the first image layer corresponds to a foreground of the image, and wherein the second user input corresponds to a sliding motion that selects the foreground.

88. The apparatus of claim 86, wherein the processor is further configured to receive a third user input from the user interface, the third user input identifying a second image layer of the image.

89. The apparatus of claim 88, wherein the second image layer corresponds to a background of the image, and wherein the third user input corresponds to a sliding motion that selects the
background.

90. The apparatus of claim 88, wherein, upon receiving the third user input, the image segmentation operation processes the first image layer using a grabcut technique.

91. A computer readable medium storing instructions executable by a processor to cause the processor to: receive a first user input from a user interface, the first user input selecting an image for a display operation; and based on the first user input: perform the display operation; and automatically initiate a cluster operation using image data corresponding to the image.

92. The computer readable medium of claim 91, wherein the first user input corresponds to a touch screen operation, the touch screen operation selecting the image from an image gallery presented at the user interface.

93. The computer readable medium of claim 91, wherein the cluster operation is initiated to identify clusters within the image data while the display operation is performed to magnify the image from a thumbnail to a full view.

94. The computer readable medium of claim 93, wherein the cluster operation uses a simple linear iterative clustering (SLIC) technique to identify the clusters.

95. The computer readable medium of claim 91, wherein the instructions are further executable by the processor to receive a second user input from the user interface, the second user input identifying a first image layer of the image, and to automatically initiate an image segmentation operation associated with the first image layer.

96. The computer readable medium of claim 95, wherein the first image layer corresponds to a foreground of the image, and wherein the second user input corresponds to a sliding motion that selects the foreground.

97. The computer readable medium of claim 95, wherein the instructions are further executable by the processor to receive a third user input from the user interface, the third user input identifying a second image layer of the image.

98. The computer readable medium of claim 97, wherein the second image layer corresponds to a background of the image, and wherein the third user input corresponds to a sliding motion that selects the background.

99. The computer readable medium of claim 97, wherein the image segmentation operation processes the first image layer using a grabcut technique upon receiving the third user input.

100. An apparatus comprising: means for receiving a first user input from a user interface, the first user input selecting an image for a display operation; means for performing the display operation based on the first user input; and means for automatically initiating a cluster operation using image data corresponding to the image based on the first user input.

101. The apparatus of claim 100, wherein the first user input corresponds to a touch screen operation, the touch screen operation selecting the image from an image gallery presented at the user interface.

102. The apparatus of claim 100, wherein the cluster operation is initiated to identify clusters within the image data while the display operation is performed to magnify the image from a thumbnail to a full view.

103. The apparatus of claim 102, wherein the cluster operation uses a simple linear iterative clustering (SLIC) technique to identify the clusters.

104. The apparatus of claim 100, further comprising: means for receiving a second user input from the user interface, the second user input identifying a first image layer of the image; and means for automatically initiating an image segmentation operation associated with the first image layer.
105. The apparatus of claim 104, wherein the first image layer corresponds to a foreground of the image, and wherein the second user input corresponds to a sliding motion that selects the foreground.

106. The apparatus of claim 104, further comprising means for receiving a third user input from the user interface, the third user input identifying a second image layer of the image.

107. The apparatus of claim 106, wherein the second image layer corresponds to a background of the image, and wherein the third user input corresponds to a sliding motion that selects the background.

108. The apparatus of claim 106, wherein the image segmentation operation processes the first image layer using a grabcut technique upon receiving the third user input.
Device image editing technology

Technical field

The present invention generally relates to image editing at a device.

Background

Technological advances have produced smaller and more powerful electronic devices. For example, there currently exist a variety of mobile devices, such as wireless telephones, personal digital assistants (PDAs), and paging devices. Wireless devices can be small, lightweight, and easily carried by users. Wireless telephones, such as cellular telephones and Internet Protocol (IP) telephones, can carry voice and data packets over a wireless network. Moreover, a wireless telephone can process executable instructions, including software applications, such as a web browser application that can be used to access the Internet. In addition, many wireless telephones include other types of devices incorporated therein. For example, a wireless telephone can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Thus, wireless telephones and other mobile devices can include significant computing power.

A mobile device can include a camera and an image editing application that can be used to change (or "edit") images captured with the camera. A user of the mobile device can capture an image using the camera and then use the image editing application to change the image, for example, before sharing the image with friends or family. A particular image editing application may enable a user to perform computationally simple operations, such as removing (or "cropping") a portion of an image. More advanced image editing applications enable users to perform computationally intensive operations on mobile devices, but these operations may still not provide the user with adequate control over image editing operations to achieve specific image editing effects, which may frustrate the user. Advanced image editing applications can also rely on sophisticated user input techniques that users may find difficult or cumbersome to use.

Summary of the invention

A processor can receive image data corresponding to an image. To illustrate, a mobile device can include a processor and a camera, and the camera can capture images. The processor can segment the image data (e.g., using segmentation techniques) into a first image layer and a second image layer. The first image layer may correspond to the foreground of the image, and the second image layer may correspond to the background of the image, as an illustrative example. Alternatively, the first image layer and the second image layer may each correspond to a foreground portion of the image (or to a background portion of the image).

The first image layer and the second image layer can be independently edited by a user to create one or more visual effects. To illustrate, a user may perform an image editing operation on the first image layer but not the second image layer (or vice versa). The user can perform the image editing operation using an image editing application, which can be executed by the processor. The image editing operation can include changing a color attribute of the first image layer but not the second image layer (e.g., changing the color of an object independently of the color of another object). As another example, an image editing operation may include blurring the first image layer but not the second image layer, for example, by "blurring" the background but not the foreground to approximate the "super focus" camera effect, in which a camera captures an image using a large aperture such that the foreground is in focus and appears sharper than the background.
Compared to conventional systems in which an entire image (e.g., all image layers of an image) is edited based on a particular image editing operation, the user can thus experience greater control over the visual effects of the image.

In a particular embodiment, the identification of clusters is automatically initiated in response to user input selecting an image. For example, the identification of the clusters may be automatically initiated in response to the user selecting an image via a user interface (UI) (e.g., selecting an image from an image gallery). Automatically identifying clusters can "hide" the time lag associated with the identification of the clusters. For example, by automatically initiating cluster identification in response to selection of an image via the UI, the time lag associated with the identification of the clusters may be "hidden" during loading of the image. That is, the user may perceive the time lag as being associated with the loading of the image, rather than with a particular image editing operation that begins after the image is loaded. In this example, when the user initiates an image editing operation, the cluster recognition may already be complete, which may cause the image editing operation to appear faster to the user.

The processor can automatically initiate one or more image processing operations in response to user input regarding the first image layer. To illustrate, for an image that includes multiple image layers (e.g., multiple foreground objects and/or multiple background objects), the processing of each individual image layer can be associated with a time lag. To reduce the time lag associated with image segmentation, the processor can use one or more identified clusters to initiate one or more image processing operations (e.g., an image segmentation operation and/or an image tagging operation) associated with the first image layer before receiving user input regarding each of the plurality of image layers. In this example, image processing of the foreground portion of the image may begin before receiving user input regarding the background of the image. Accordingly, the image editing performance of the processor can be improved as compared to a device that waits to start image processing operations until user input for each image layer has been received. A sketch of this eagerly initiated clustering appears below.
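What follows is a minimal, illustrative sketch (in Python, not from the patent) of such eagerly initiated clustering: cluster identification is submitted to a worker thread as soon as the image is selected, so its latency overlaps with loading and displaying the image. The names ImageEditorSession, identify_clusters, and display_full_view are hypothetical stand-ins.

from concurrent.futures import ThreadPoolExecutor

import numpy as np

def identify_clusters(image):
    # Placeholder for superpixel clustering (e.g., the SLIC step sketched
    # later); returns a label map assigning each pixel to a cluster.
    return np.zeros(image.shape[:2], dtype=int)

class ImageEditorSession:
    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=1)
        self._clusters_future = None

    def on_image_selected(self, image):
        # Kick off clustering immediately; the UI continues the
        # thumbnail-to-full-view transition in parallel.
        self._clusters_future = self._pool.submit(identify_clusters, image)
        self.display_full_view(image)

    def on_edit_requested(self):
        # By the time the user starts editing, clustering is usually done;
        # result() blocks only if it is still in flight.
        return self._clusters_future.result()

    def display_full_view(self, image):
        pass  # UI rendering elided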
Alternatively or additionally, the mobile device can store user configuration parameters that determine the image editing operations that can be performed at the mobile device in response to user input. To illustrate, a mobile device can include a display (e.g., a touch screen). The display can depict an image, for example, in conjunction with an image editing application executed by the mobile device. The image editing application can perform an image editing operation on the image in response to user input. For example, the user configuration parameters can instruct the mobile device to perform a particular image editing operation (e.g., a color change operation) in response to receiving a first user input indicating a particular direction of movement, such as a "slip" across the touch screen in a vertical (or substantially vertical) direction. The user configuration parameters may further instruct the mobile device to perform a second image editing operation in response to a second user input (e.g., a subsequent vertical slip operation) that indicates the particular direction and is received after the first user input. For example, the user configuration parameters may indicate that an image blurring operation will be performed in response to the second user input.

The user configuration parameters can be configured by the user. For example, the user configuration parameters can be modified by the user to indicate that the second image editing operation will be performed prior to the first image editing operation (e.g., in response to the first user input). In a particular embodiment, a third user input received after the first user input and after the second user input may "undo" the first image editing operation and the second image editing operation. Accordingly, image editing is simplified for users of mobile devices. In addition, image editing operations can be configured using the user configuration parameters.

In a particular embodiment, a method of manipulating an image by a device is disclosed. The method includes segmenting image data corresponding to an image into a first image layer and a second image layer. The method further includes adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.

In another particular embodiment, an apparatus includes a memory and a processor coupled to the memory. The processor is configured to segment image data corresponding to an image into a first image layer and a second image layer. The processor is further configured to adjust a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.

In another particular embodiment, a non-transitory computer readable medium stores instructions. The instructions are executable by a processor to cause the processor to segment image data associated with an image into a first image layer and a second image layer. The instructions are further executable by the processor to adjust a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.

In another particular embodiment, an apparatus includes means for segmenting image data associated with an image into a first image layer and a second image layer. The apparatus further includes means for adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input.

In another particular embodiment, a method includes displaying a first image at a mobile device. The method further includes receiving a first user input at the mobile device. The first user input indicates a direction relative to the mobile device. A first image editing operation is performed on the first image based on the first user input to generate a second image. The method further includes displaying the second image at the mobile device and receiving a second user input at the mobile device. The second user input indicates the direction. The method further includes performing a second image editing operation on the second image based on the second user input to generate a third image.

In another particular embodiment, an apparatus includes a memory and a processor coupled to the memory.
The processor is configured to cause a mobile device to display a first image and to receive a first user input at the mobile device. The first user input indicates a direction relative to the mobile device. The processor is further configured to perform a first image editing operation on the first image based on the first user input to generate a second image, and to cause the mobile device to display the second image and receive a second user input. The second user input indicates the direction. The processor is further configured to perform a second image editing operation on the second image based on the second user input to generate a third image.

In another particular embodiment, a computer readable medium stores instructions executable by a processor to cause a mobile device to display a first image at the mobile device and to receive a first user input at the mobile device. The first user input indicates a direction relative to the mobile device. The instructions are further executable by the processor to perform a first image editing operation on the first image based on the first user input to generate a second image, to display the second image at the mobile device, and to receive a second user input at the mobile device. The second user input indicates the direction. The instructions are further executable by the processor to perform a second image editing operation on the second image based on the second user input to generate a third image.

In another particular embodiment, an apparatus includes means for displaying a first image at a mobile device and means for receiving a first user input at the mobile device. The first user input indicates a direction relative to the mobile device. The apparatus further includes means for performing a first image editing operation on the first image based on the first user input to generate a second image, means for causing the mobile device to display the second image, and means for receiving a second user input. The second user input indicates the direction. The apparatus further includes means for performing a second image editing operation on the second image based on the second user input to generate a third image.

In another particular embodiment, a method includes receiving a first user input from a user interface. The first user input selects an image for a display operation. The method further includes performing the display operation based on the first user input, and automatically initiating a cluster operation using image data corresponding to the image based on the first user input.

In another particular embodiment, an apparatus includes a memory and a processor coupled to the memory. The processor is configured to receive a first user input from a user interface. The first user input selects an image for a display operation. The processor is further configured to perform the display operation based on the first user input and to automatically initiate a cluster operation using image data corresponding to the image based on the first user input.

In another particular embodiment, a computer readable medium stores instructions executable by a processor to cause the processor to receive a first user input from a user interface. The first user input selects an image for a display operation.
The instructions are further executable by the processor to perform the display operation based on the first user input and to automatically initiate a cluster operation using image data corresponding to the image based on the first user input.

In another particular embodiment, an apparatus includes means for receiving a first user input from a user interface. The first user input selects an image for a display operation. The apparatus further includes means for performing the display operation based on the first user input, and means for automatically initiating a cluster operation using image data corresponding to the image based on the first user input.

One particular advantage provided by at least one of the disclosed embodiments is independent image editing of the first image layer and the second image layer of an image. Compared to conventional systems that edit a whole image (e.g., all image layers of an image) based on a particular image editing operation, the disclosed embodiments can thus enable the user to "fine tune" image editing operations. Another particular advantage provided by at least one of the disclosed embodiments is simplified control of a user interface (UI) by a user of a mobile device. For example, the UI may enable a user to set user configuration parameters that assign specific image editing operations to specific user inputs (e.g., slips in particular directions), which simplifies user control of an image editing application executed by the mobile device. Another particular advantage of at least one of the disclosed embodiments is a faster image editing experience as perceived by a user of the device. Other aspects, advantages, and features of the invention will become apparent after review of the appended claims.

DRAWINGS

FIG. 1 is a block diagram of a particular illustrative embodiment of a processor;

FIG. 2 illustrates aspects of a particular example image processing operation that may be performed by the processor of FIG. 1;

FIG. 3 illustrates additional aspects of an example image processing operation that may be performed by the processor of FIG. 1;

FIG. 4 illustrates additional aspects of an example image processing operation that may be performed by the processor of FIG. 1;

FIG. 5 illustrates additional aspects of an example image processing operation that may be performed by the processor of FIG. 1;

FIG. 6 is a flow chart illustrating a method that may be performed by the processor of FIG. 1;

FIG. 7 is a flow chart illustrating another method that may be performed by the processor of FIG. 1;

FIG. 8 is a block diagram of a particular illustrative embodiment of a mobile device that can include the processor of FIG. 1;

FIG. 9 is a block diagram illustrating example operational states of a mobile device;

FIG. 10 is a flow chart illustrating a method that can be performed by the mobile device of FIG. 9;

FIG. 11 is a block diagram of a particular illustrative embodiment of the mobile device of FIG. 9; and

FIG. 12 is a flow diagram illustrating a method that may be performed by a device, such as a mobile device including the processor of FIG. 1.

Detailed description

Referring to Figure 1, a particular illustrative embodiment of a processor is depicted and is generally designated 100. The processor 100 includes a cluster recognizer 124, an image segmentation generator 128, an image component marker 132, and an image modifier 136.

In operation, processor 100 can be responsive to image data 102. For example, image data 102 can be received from a camera or from a camera controller associated with the camera.
Image data 102 may include one or more image layers, such as image layer 104a and image layer 106a. The image layers 104a, 106a may correspond to a foreground portion of the image and a background portion of the image, respectively. Alternatively, image layers 104a, 106a may each correspond to a foreground portion or may each correspond to a background portion.

Image data 102 may further include one or more clusters of pixels (e.g., clusters of pixels corresponding to objects depicted in the image). For example, Figure 1 illustrates that image layer 104a can include cluster 108a and cluster 110a. As another example, FIG. 1 further illustrates that image layer 106a can include cluster 112a and cluster 114a. The clusters 108a, 110a, 112a, and 114a may include one or more attributes, such as attribute 116, attribute 118a, attribute 120a, and/or attribute 122a. Attributes 116, 118a, 120a, and/or 122a may correspond to visual aspects of the image, such as color, sharpness, contrast, context of the image (e.g., settings, such as background settings), blurring effects, and/or another aspect, as illustrative examples.

Cluster recognizer 124 may be responsive to image data 102 to identify one or more clusters of the image using one or more cluster recognition techniques. For example, cluster recognizer 124 can identify one or more clusters of image data 102, such as one or more of clusters 108a, 110a, 112a, or 114a. Cluster recognizer 124 may analyze image data 102 to generate cluster identification 126. Cluster identification 126 may identify one or more of clusters 108a, 110a, 112a, or 114a. The clusters may correspond to groups of similar pixels of image data 102. To illustrate, pixels may be similar if they are spatially similar (e.g., within a common threshold region) and/or if the pixels are numerically similar (e.g., within a pixel value threshold). Cluster recognizer 124 may perform one or more operations to compare pixels of image data 102 to identify one or more groups of similar pixels to produce cluster identification 126.

Cluster recognizer 124 can be configured to generate cluster identification 126 using a "superpixel" technique that identifies one or more superpixels of image data 102. The one or more superpixels may correspond to clusters 108a, 110a, 112a, and 114a. In a particular example, cluster recognizer 124 is configured to operate in accordance with a simple linear iterative clustering (SLIC) technique. The SLIC technique can divide the image data into a grid and can compare the pixels of image data 102 within each grid cell to identify clusters of image data 102. The SLIC technique can be implemented in conjunction with color space models that map colors to a multidimensional model, such as the International Commission on Illumination (CIE) L*a*b* (CIELAB) color space model; a short sketch of this color space conversion follows.
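As a brief illustration (a sketch using scikit-image, not part of the patent text), an RGB image can be converted into CIELAB, where Euclidean distance between (L*, a*, b*) vectors roughly tracks perceived color difference; the file name is hypothetical.

import numpy as np
from skimage import color, io

image_rgb = io.imread("photo.png")[..., :3]   # H x W x 3 RGB array
image_lab = color.rgb2lab(image_rgb)          # H x W x 3 array of (L*, a*, b*)

# Perceptual distance between two pixels is approximately the Euclidean
# distance between their (L*, a*, b*) vectors:
d_lab = np.linalg.norm(image_lab[10, 10] - image_lab[20, 20])
print(f"color distance: {d_lab:.2f}")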
The SLIC technique can identify K superpixel centers C_k at intervals on the grid, where k = 1, 2, ..., K, where C_k = [l_k, a_k, b_k, x_k, y_k]^T, where the grid has a grid interval size S, where K is a positive integer, where T indicates a transpose operation, and where l, a, b, x, and y indicate parameters associated with the CIELAB color space model. In a particular embodiment, the spatial extent of any superpixel is approximately 2S. Accordingly, pixels included in a particular superpixel may be located within a 2S x 2S region around the center of the superpixel (relative to the x-y plane). The 2S x 2S region may correspond to a "search area" for pixels similar to each superpixel center.

In the CIELAB color space model, Euclidean distances (e.g., distances between points indicating colors in the multidimensional model) track perceived color differences only up to a particular threshold; beyond that threshold, segmentation driven by color distances can produce a poor visual appearance or another undesired effect. If the spatial pixel distance were allowed to exceed this perceptual color distance threshold, the spatial pixel distance would outweigh the pixel color similarity, causing the segmentation to be distorted (e.g., resulting in superpixels that do not adhere to region boundaries and are merely close in the image plane). Therefore, instead of using a simple Euclidean norm in the five-dimensional (5D) space, a distance measure D_s can be defined such that

D_s = d_lab + (m/S) * d_xy, where
d_lab = sqrt[(l_k - l_i)^2 + (a_k - a_i)^2 + (b_k - b_i)^2],
d_xy = sqrt[(x_k - x_i)^2 + (y_k - y_i)^2],

m indicates a variable enabling control of the compactness of the superpixels, and S indicates the grid interval size. In this example, D_s corresponds to the sum of the lab distance (d_lab) and the x-y plane distance (d_xy), where the latter is normalized by the grid interval size S and the "compactness" is determined by the variable m. For further explanation, Table 1 illustrates example pseudocode corresponding to an example operation of cluster recognizer 124.

Table 1
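In the same spirit as the pseudocode of Table 1, the following is a minimal sketch of the clustering step using scikit-image's SLIC implementation (an assumption; the patent does not prescribe this library). The scikit-image parameter n_segments plays the role of K, and compactness plays the role of m in the distance measure D_s above.

from skimage import io, segmentation

image = io.imread("photo.png")[..., :3]

# Returns an H x W label map: labels[y, x] is the superpixel index of the
# pixel at (x, y).
labels = segmentation.slic(
    image,
    n_segments=200,    # desired number of superpixels (K)
    compactness=10.0,  # m: larger values yield more compact, grid-like superpixels
    start_label=0,
)

print("identified", labels.max() + 1, "clusters")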
Image segmentation generator 128 may be responsive to cluster recognizer 124 to segment the image using one or more segmentation techniques. For example, image segmentation generator 128 may generate segmentation mask 130 based on cluster identification 126. In a particular example, segmentation mask 130 identifies one or more foreground or background layers of image data 102, such as by separating image layer 104a from image layer 106a based on cluster identification 126. Image segmentation generator 128 may generate segmentation mask 130 by isolating one or more clusters identified by cluster recognizer 124 from the remainder of image data 102. For example, image segmentation generator 128 may segment (e.g., remove, separate, etc.) one or more groups of pixels indicated by cluster identification 126 from image data 102 to produce segmentation mask 130.

In a particular embodiment, image segmentation generator 128 is responsive to a set z_n of superpixels generated by cluster recognizer 124. The superpixels can be represented using the CIELAB color space model. Image segmentation generator 128 may apply a "grabcut" technique to the set of superpixels. Image segmentation generator 128 may utilize the grabcut technique to generate a Gaussian mixture model (GMM). In a particular embodiment, image segmentation generator 128 is configured to generate a first GMM having a first set of Gaussian distributions corresponding to superpixels of the foreground of the image, and is further configured to generate a second GMM having a second set of Gaussian distributions corresponding to superpixels of the background of the image. Each GMM may correspond to a full-covariance Gaussian mixture having a positive integer number K of components (e.g., K = 5). To improve the tractability of the image processing operations, a vector k = {k_1, ..., k_n, ..., k_N} can be used in conjunction with the GMMs, where k_n ∈ {1, ..., K}, assigning a corresponding GMM component k_n to each pixel. The GMM component may be selected from either the background GMM or the foreground GMM (e.g., according to α_n = 0 or α_n = 1, respectively).

The operation of image segmentation generator 128 may be associated with an energy, such as a Gibbs energy of the form

E(α, k, θ, z) = U(α, k, θ, z) + V(α, z),

where k indicates the GMM component variable and U indicates the data term,

U(α, k, θ, z) = Σ_n D_n(α_n, k_n, θ, z_n), where
D_n(α_n, k_n, θ, z_n) = -log p(z_n | α_n, k_n, θ) - log π(α_n, k_n),

where D_n indicates a Gaussian probability distribution and π(·) indicates the mixture weighting coefficients, so that (up to a constant)

D_n(α_n, k_n, θ, z_n) = -log π(α_n, k_n) + (1/2) log det Σ(α_n, k_n) + (1/2) [z_n - μ(α_n, k_n)]^T Σ(α_n, k_n)^(-1) [z_n - μ(α_n, k_n)].

Therefore, the parameters of the model correspond to θ = {π(α, k), μ(α, k), Σ(α, k); α = 0, 1; k = 1, ..., K} (i.e., the weights π, means μ, and covariances Σ of the 2K Gaussian components for the background and foreground distributions). In a particular example, the smoothness term V is unchanged relative to the monochrome case, except that the contrast term is computed using Euclidean distance in the color space according to the following formula:

V(α, z) = γ Σ_{(m,n) ∈ C} [α_m ≠ α_n] exp(-β ||z_m - z_n||^2).

For further explanation, Table 2 illustrates example pseudocode corresponding to an example operation of processor 100.

Table 2

Image component marker 132 may be responsive to image segmentation generator 128. For example, image component marker 132 may analyze segmentation mask 130 to find one or more image artifacts, such as image artifact 134. Image artifact 134 may correspond to a portion of the image that is "inadvertently" separated from another portion of the image. For example, a portion of an image may be "misidentified" as being in the foreground or background of the image, and image artifact 134 may correspond to the misidentified portion. In a particular embodiment, image component marker 132 can be responsive to user input to identify image artifact 134.

In a particular embodiment, image component marker 132 is configured to compensate for the operation of image segmentation generator 128. To illustrate, segmentation mask 130 may have one or more "holes" (e.g., image artifacts, such as image artifact 134) due to the color-based operation of image segmentation generator 128. Furthermore, one or more objects or layers may be mislabeled due to color similarity. For example, different colors of a common object may be mistakenly labeled as foreground and background, and/or similar colors of different objects may be mistakenly labeled as foreground or background. Image component marker 132 may be configured to treat each foreground region as an object, and each object may be treated as a domain, such as a "simply connected domain." For further explanation, Table 3 illustrates example pseudocode corresponding to an example operation of image component marker 132.

Table 3
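In lieu of the pseudocode of Tables 2 and 3, the following is a compact sketch of the segmentation and artifact-cleanup steps. It uses OpenCV's built-in pixel-level GrabCut rather than the superpixel-based variant described above (an intentional simplification), and a connected-component pass that discards small mislabeled regions, in the spirit of image component marker 132. The file name, rectangle, and area threshold are assumptions.

import cv2
import numpy as np

image = cv2.imread("photo.png")
mask = np.zeros(image.shape[:2], np.uint8)
rect = (50, 50, image.shape[1] - 100, image.shape[0] - 100)  # rough foreground box

bgd_model = np.zeros((1, 65), np.float64)  # background GMM state
fgd_model = np.zeros((1, 65), np.float64)  # foreground GMM state
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels labeled definite/probable foreground become 1, the rest 0.
fg_mask = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0
).astype(np.uint8)

# Artifact cleanup: keep only foreground components above a size threshold,
# discarding small "holes"/islands that were inadvertently separated.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(fg_mask, connectivity=8)
min_area = 500  # tunable threshold (assumption)
clean = np.zeros_like(fg_mask)
for lbl in range(1, n_labels):  # label 0 is the background component
    if stats[lbl, cv2.CC_STAT_AREA] >= min_area:
        clean[labels == lbl] = 1

foreground = image * clean[..., None]  # isolated foreground layer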
Image modifier 136 may be responsive to image component marker 132. Image modifier 136 may adjust a first attribute of a first layer of image data 102 independently of a second attribute of a second layer of image data 102, for example, based on user input received via a user interface (UI) of a device that includes processor 100. To illustrate, the example of FIG. 1 illustrates that image modifier 136 can generate modified image data 138 corresponding to image data 102. The modified image data 138 can depict image layer 104a modified independently of image layer 106a.

The modified image data 138 may include an image layer 104b corresponding to image layer 104a, and may further include an image layer 106b corresponding to image layer 106a. Image layer 104b may include clusters 108b, 110b corresponding to clusters 108a, 110a, and image layer 106b may include clusters 112b, 114b corresponding to clusters 112a, 114a. The example of FIG. 1 illustrates that cluster 108b has an attribute 140 that has been modified (e.g., based on user input) relative to attribute 116. To illustrate, user input may indicate modification of a color attribute, a sharpness attribute, a blur attribute, and/or a context attribute of image data 102 to cause processor 100 to generate modified image data 138. Moreover, the example of FIG. 1 illustrates that attribute 116 has been modified independently of one or more other attributes (e.g., independently of attributes 118b, 120b, and 122b, which may remain unchanged relative to attributes 118a, 120a, and 122a, or which may be adjusted depending on particular user input) to produce attribute 140.

The technique of Figure 1 illustrates independently adjusting multiple layers of an image to achieve one or more visual effects. The example of FIG. 1 thus enables increased user control of image editing operations of a device that includes processor 100. As a specific illustrative example, a user of the device may modify attributes of the foreground of the image independently of attributes of the background of the image (or vice versa), for example, by "blurring" the background but not the foreground. As a specific example, the background can be blurred to approximate the "super focus" camera effect.

Furthermore, Figure 1 depicts an example of a "superpixel-based grabcut" technique for extracting image layers. Certain conventional image processing techniques attempt to segment the image "globally" (or on a "per pixel" basis). The example of FIG. 1 identifies clusters of image data 102 and segments the image based on the clusters, which may improve the performance of image processing operations as compared to global techniques. Furthermore, image refinement operations (e.g., one or more algorithm iterations to "correct" one or more boundaries of an image layer or object) may be accelerated using the superpixel-based grabcut technique as compared to a global technique that analyzes image data on a per-pixel basis. In addition, boundary recall and compactness have been found to be two useful features of the clustering technique (e.g., SLIC). Boundary recall can be associated with enhanced boundary detection, and compactness can be used in conjunction with image segmentation operations such as grabcut. Accordingly, devices utilizing the superpixel-based grabcut technique can have improved performance.

Referring to Figure 2, an example of an image is depicted and is generally designated 200. Image 200 includes background 202 and foreground 204. In a particular example, background 202 corresponds to image layer 104a and foreground 204 corresponds to image layer 106a. Image 200 may correspond to image data 102 (e.g., image data 102 may represent image 200).

FIG. 2 further illustrates the clustered image 210. The clustered image 210 may be generated by cluster recognizer 124. The clustered image 210 includes a plurality of clusters of pixels of image 200, such as a representative cluster 212.
Cluster 212 may be identified by cluster identification 126.

Figure 2 further illustrates the resulting image 220. The resulting image 220 illustrates adjustment of a first attribute of a first layer of image 200 independently of a second attribute of a second layer of image 200. For example, as illustrated in FIG. 2, the background 202 of image 200 has been removed to produce the resulting image 220. For purposes of illustration, background 202 may be removed based on clustered image 210, e.g., based on cluster similarity indicated by clustered image 210. In a particular embodiment, predetermined content may be substituted for background 202. As a specific illustrative example, a forest scene corresponding to background 202 may be replaced with a beach scene (or another scene).

The example of Figure 2 illustrates independent modification of the layers of an image. A user of the device may thus experience greater control of image editing operations than with a device that applies an image editing effect to the entire image.

FIG. 3 depicts an example of an image 300, together with illustrative depictions of a segmentation mask 310 and a modified image 320 that correspond to image 300. In FIG. 3, the modified image 320 is generated by using the segmentation mask 310 to modify image 300. For example, as illustrated in FIG. 3, segmentation mask 310 identifies a plurality of foreground objects. Image segmentation generator 128 of FIG. 1 may segment image 300 by segmenting the plurality of foreground objects relative to the background of image 300. In this way, independent modification of image layer attributes is enabled.

By segmenting multiple layers of image 300, the multiple layers of image 300 can be independently adjusted. In the example of FIG. 3, the modified image 320 contains a blurred background. Moreover, modified image 320 can include one or more foreground objects that have been modified relative to image 300, such as by changing the color attributes of the one or more foreground objects. To illustrate, segmentation mask 310 identifies a plurality of foreground objects, each of which can be modified independently of the others (and of the background), for example, by modifying the shirt color of one foreground object independently of the shirt color of another foreground object.

The example of FIG. 3 illustrates that a segmentation mask (e.g., segmentation mask 310) can be used to enable independent adjustment of attributes of the layers of an image. For example, segmentation mask 310 may enable independent color adjustment of the foreground portion of image 300.

Referring to Figure 4, an image is depicted and is generally designated 400. Image 400 can be displayed at a user interface (UI). The UI may enable the user to adjust a first attribute of a first layer of image 400 independently of a second attribute of a second layer of image 400. For purposes of illustration, the example of FIG. 4 illustrates user input 402. User input 402 may correspond to a user's sliding movement at a display device, such as a sliding movement at a display device of a mobile device displaying image 400. In the example of FIG. 4, user input 402 may indicate an image layer of image 400, such as an image background. User input 402 may be required to indicate at least a threshold number of pixels of the UI in order to select an image layer of image 400.
To illustrate, if the user accidentally touches the UI to generate user input 402, user input 402 may indicate fewer than the threshold number of pixels, and user input 402 may not cause selection of a layer. Alternatively, if user input 402 indicates at least the threshold number of pixels, user input 402 may cause selection of a layer.

FIG. 4 further illustrates image 410. In image 410, a background portion has been removed, for example, in response to user input 402. For example, if user input 402 identifies the background portion of image 400, the background portion of image 400 may be removed using cluster recognition and/or segmentation techniques (e.g., one or more techniques described with reference to Figures 1-3).

FIG. 4 further illustrates an improved image 420 corresponding to image 410. For example, based on additional user input, an additional background portion of image 410 may be removed to produce image 420.

In FIG. 4, user input can be received to update image 420 to remove one or more additional background portions. For example, additional user input can be received to generate a segmentation mask 430. Segmentation mask 430 contains an image artifact 432. Image artifact 432 may correspond to image artifact 134 of FIG. 1. To illustrate, the operation of image segmentation generator 128 may generate segmentation mask 130 corresponding to segmentation mask 430. However, segmentation mask 130 may include one or more image artifacts, such as image artifact 432. In a particular embodiment, image component marker 132 operates to remove image artifact 432 to produce image 440. Image 440 may correspond to modified image data 138.

Figure 4 illustrates a technique for achieving greater user control of image editing operations. Figure 4 further illustrates the removal of image artifacts (e.g., image artifact 432) to further improve the quality of an image editing operation. The technique of Figure 4 can be utilized in conjunction with a user interface (UI), as further described with reference to FIG. 5. A sketch of converting such marking input into segmentation seeds appears below.
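In this sketch (an illustration, not the patent's implementation), the user's background and foreground strokes, like user input 402, are rasterized into a GrabCut seed mask, and a mask-initialized GrabCut refines the layer boundary. The stroke coordinates and radii are hypothetical UI data.

import cv2
import numpy as np

def apply_strokes(mask, strokes, label, radius=8):
    # Rasterize a list of (x, y) stroke points into the GrabCut seed mask.
    for x, y in strokes:
        cv2.circle(mask, (int(x), int(y)), radius, int(label), -1)

image = cv2.imread("photo.png")
# Start everything as "probable foreground" and let the strokes override it.
mask = np.full(image.shape[:2], cv2.GC_PR_FGD, np.uint8)

background_strokes = [(12, 20), (14, 24), (17, 29)]  # e.g., a background-marking slip
foreground_strokes = [(160, 200), (162, 205)]        # e.g., a foreground-marking slip
apply_strokes(mask, background_strokes, cv2.GC_BGD)  # definite background seeds
apply_strokes(mask, foreground_strokes, cv2.GC_FGD)  # definite foreground seeds

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(image, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)

fg_mask = np.where(
    (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0
).astype(np.uint8)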
FIG. 5 illustrates an example of a user interface (UI) 500. The UI 500 can be presented at a display, such as at the display of a mobile device. The display can correspond to a touch screen display configured to receive user input. In the example of FIG. 5, UI 500 presents a plurality of images to a user in conjunction with an image editing application (e.g., a mobile device application that graphically renders images and facilitates image editing operations on the images).

FIG. 5 further illustrates UI 510, corresponding to UI 500 after image 502 is selected. In response to user input 504 indicating image 502, the image editing application may magnify image 502 to produce UI 520 (e.g., by enlarging the image from a thumbnail to a full view). User input 504 may correspond to a slip movement or a click action at UI 510, as illustrative examples.

In a particular illustrative embodiment, user interface (UI) 530 depicts image 502 in conjunction with a plurality of buttons, such as buttons 532, 534, 536, and 538. Buttons 532, 534, 536, and 538 can be assigned one or more operations, such as adjustments to image attributes of image 502.

To illustrate, the user can select button 534. Button 534 can be selected by the user to facilitate indication of a background or foreground portion of the image depicted by UI 530. For further explanation, FIG. 5 illustrates UI 540, in which user input is received to indicate the background and/or foreground of image 502. The user input may correspond to user input 402 of FIG. 4. As a specific illustration, the user may select button 534 (e.g., to enter a background identification mode of operation) and then enter a user input (e.g., user input 402) to indicate the background portion of the image displayed at UI 540. The user input may correspond to a slip movement marking the background portion of the image. One or more of buttons 532, 536, and 538 can serve as foreground indicator buttons that can be used to mark the foreground portion of the image. One or more of buttons 532, 536, and 538 may correspond to a default operation (e.g., associated with a particular image editing application) and/or a user-defined operation (e.g., based on user preference input).

To further illustrate, if multiple objects within an image layer are identified, buttons 532, 536, and 538 can enable the user to select among the multiple objects. As a specific example, if multiple foreground objects are identified (e.g., in conjunction with image 300 of FIG. 3), button 536 can be used to indicate a first foreground object, and button 538 can be used to indicate a second foreground object. In this example, after button 536 is pressed, a user slip indicating the first foreground object may initiate an image editing operation targeted at the first foreground object. Similarly, after button 538 is pressed, a user slip indicating the second foreground object may initiate an image editing operation targeted at the second foreground object.

Figure 5 illustrates an enhanced user interface (UI) technique that enables a user to simply and effectively control image editing operations. For example, the user can use button 534 as described in the example of FIG. 5 to indicate the background portion (or foreground portion) of an image.
Referring to Figure 6, a particular illustrative embodiment of a method is depicted and is generally designated 600. Method 600 can be performed at a device, such as at a mobile device that includes a processor. In a particular illustrative embodiment, method 600 is performed by processor 100 of FIG. 1.

Method 600 includes receiving image data corresponding to an image, at 604. The image data may correspond to image data 102. The image may correspond to an image captured by a camera, and the image data may be loaded in conjunction with an image editing application to enable editing of the image. Method 600 can further include identifying a cluster associated with the image data, at 608. In a particular example, cluster recognizer 124 can identify a cluster of image data 102, such as cluster 108a.

Method 600 can further include segmenting the image data by identifying a first image layer of the image based on the cluster, at 612. To illustrate, cluster recognizer 124 can provide cluster identification 126 to image segmentation generator 128. Based on cluster identification 126, image segmentation generator 128 can segment the image data by identifying a foreground portion of the image. Image segmentation generator 128 may generate segmentation mask 130 to enable independent modification of the image layers of the image.

Method 600 can further include initiating one or more component marking operations using the first image layer, at 616. Method 600 can further include identifying a second image layer (e.g., a background) of the image, at 620. Method 600 can further include prompting a user of the device to adjust a first attribute of the first image layer independently of a second attribute of the second image layer, at 624. As a specific example, the user of the device may be prompted to adjust attribute 116 independently of one or more of attributes 118a, 120a, and 122a (e.g., to generate attribute 140).

Method 600 can further include receiving user input, at 628. In a particular embodiment, the user input is received at a display of the device, such as at a touch screen. Method 600 can further include generating a modified image based on the user input, at 640. The modified image may correspond to modified image data 138 of FIG. 1.

The method 600 of Figure 6 enables simplified and efficient user control of image editing operations. For example, method 600 can be utilized in conjunction with a portable device (e.g., a mobile device with a touch screen user interface) while still achieving advanced user control of image editing operations (e.g., independent adjustment of image layers).

Referring to Figure 7, a particular illustrative embodiment of a method is depicted and is generally designated 700. Method 700 can be performed at a device, such as at a mobile device that includes a processor. In a particular illustrative embodiment, method 700 is performed by processor 100 of FIG. 1.

Method 700 includes segmenting image data corresponding to an image into a first image layer and a second image layer, at 708. The first image layer and the second image layer may correspond to image layers 104a, 106a, respectively, as illustrative examples.

Method 700 can further include adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input, at 712. In a particular example, the first attribute corresponds to attribute 116 and the second attribute corresponds to one or more of attributes 120a, 122a.

Method 700 facilitates enhanced image editing operations. For example, image editing operations can separately target image layers (e.g., background and foreground) to enable different image editing effects on one image layer relative to another. A sketch of an end-to-end flow along the lines of methods 600 and 700 follows.
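The following illustrative sketch composes the earlier pieces into one flow; the function and its glue are hypothetical, while the individual steps match the earlier sketches. (Cluster identification per steps 604-608 can run eagerly, as sketched earlier; the pixel-level GrabCut used here does not consume its output.)

import cv2
import numpy as np

def edit_layers(image, background_strokes, foreground_strokes):
    # 612/616: build a foreground mask via stroke-seeded GrabCut (standing in
    # for the superpixel-based variant of FIG. 1).
    mask = np.full(image.shape[:2], cv2.GC_PR_FGD, np.uint8)
    for x, y in background_strokes:
        cv2.circle(mask, (int(x), int(y)), 8, int(cv2.GC_BGD), -1)
    for x, y in foreground_strokes:
        cv2.circle(mask, (int(x), int(y)), 8, int(cv2.GC_FGD), -1)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)

    # 624-640 / 712: adjust one layer independently of the other; here, blur
    # the background while leaving the foreground untouched.
    blurred = cv2.GaussianBlur(image, (21, 21), 0)
    return np.where(fg[..., None] == 1, image, blurred)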
Referring to Figure 8, a block diagram of a particular illustrative embodiment of a mobile device is depicted and is generally designated 800. Mobile device 800 can include one or more processing resources 810. The one or more processing resources 810 include processor 100. The one or more processing resources 810 can be coupled to a computer readable medium, such as to memory 832 (e.g., a non-transitory computer readable medium). Memory 832 can store instructions 858 that can be executed by the one or more processing resources 810 and data 852 that can be used by the one or more processing resources 810. Memory 832 can further store cluster identification instructions 892, image segmentation instructions 894, and/or image tagging instructions 896.

Mobile device 800 can include a camera having an image sensor, such as a charge coupled device (CCD) image sensor and/or a complementary metal oxide semiconductor (CMOS) image sensor. For example, FIG. 8 depicts that camera 856 can be coupled to camera controller 890. Camera controller 890 can be coupled to the one or more processing resources 810. The instructions 858 can include an image editing application that can be executed by the processing resources 810 to edit one or more images captured by camera 856, and the data 852 can include image data (e.g., image data 102) corresponding to the one or more images.

FIG. 8 also shows display controller 826 coupled to the one or more processing resources 810 and coupled to display 828. The display can be configured to present a user interface (UI) 872. In a particular embodiment, display 828 includes a touch screen, and UI 872 is responsive to user operations (e.g., slip operations) at the touch screen.

A coder/decoder (CODEC) 834 can also be coupled to the one or more processing resources 810. Speaker 836 and microphone 838 can be coupled to CODEC 834. FIG. 8 also indicates that wireless controller 840 can be coupled to the one or more processing resources 810. Wireless controller 840 can be further coupled to antenna 842 via a radio frequency (RF) interface 880.

In a particular embodiment, the one or more processing resources 810, memory 832, display controller 826, camera controller 890, CODEC 834, and wireless controller 840 are included in a system-in-package or system-on-chip device 822. Input device 830 and power supply 844 can be coupled to system-on-chip device 822. Moreover, in a particular embodiment, and as illustrated in FIG. 8, display 828, input device 830, camera 856, speaker 836, microphone 838, antenna 842, RF interface 880, and power supply 844 are external to system-on-chip device 822. However, each of display 828, input device 830, camera 856, speaker 836, microphone 838, antenna 842, RF interface 880, and power supply 844 can be coupled to a component of system-on-chip device 822, such as to an interface or to a controller.

In connection with the described embodiments, a non-transitory computer readable medium stores instructions. The non-transitory computer readable medium can correspond to memory 832, and the instructions can include any of cluster identification instructions 892, image segmentation instructions 894, image tagging instructions 896, and/or instructions 858. The instructions are executable by a processor (e.g., processor 100) to cause the processor to segment image data associated with an image into a first image layer and a second image layer. The image data may correspond to image data 102, and the first image layer and the second image layer may correspond to image layers 104a, 106a. The instructions are further executable by the processor to adjust a first attribute of the first image layer independently of a second attribute of the second image layer based on user input. The first attribute and the second attribute may correspond to attributes 116, 120a, as illustrative examples.

In another particular embodiment, a device (e.g., processor 100) includes means for segmenting image data associated with an image into a first image layer and a second image layer. The first image layer and the second image layer may correspond to image layers 104a, 106a. The means for segmenting the image data may correspond to image segmentation generator 128, and the image data may correspond to image data 102. The apparatus further includes means for adjusting a first attribute of the first image layer independently of a second attribute of the second image layer based on user input. The means for adjusting the first attribute may correspond to image modifier 136. The first attribute and the second attribute may correspond to attributes 116, 120a, as illustrative examples. A sketch of one such layer-independent adjustment follows.
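In this sketch (illustrative, not the patent's implementation), given a binary foreground mask such as the one produced by the earlier segmentation sketches, the foreground's color attribute is adjusted (a hue rotation) while the background is left untouched.

import cv2
import numpy as np

def shift_foreground_hue(image_bgr, fg_mask, hue_delta=30):
    # Rotate the hue of masked (foreground) pixels only.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # OpenCV stores 8-bit hue in [0, 180).
    h_shifted = ((h.astype(np.int32) + hue_delta) % 180).astype(np.uint8)
    h = np.where(fg_mask == 1, h_shifted, h)
    recolored = cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)
    # Background pixels are copied through unchanged.
    return np.where(fg_mask[..., None] == 1, recolored, image_bgr)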
Referring to Figure 9, a first operational state of the mobile device 902 is depicted and is generally designated 900. Mobile device 902 can include processor 100 (not shown in FIG. 9). Alternatively or additionally, the mobile device can include another processor.

Mobile device 902 includes display device 904 and memory 906. Display device 904 can display image 908 having an attribute 910 (e.g., a color attribute) and further having an attribute 912 (e.g., a blur attribute). The attributes 910, 912 may correspond to a common layer of image 908 or to separate layers of image 908. In a particular example, attribute 910 corresponds to attribute 116 of image layer 104a, and attribute 912 corresponds to attribute 120a of image layer 106a.

Memory 906 can store image data 914 corresponding to image 908 and can further store one or more user configuration parameters 916. User configuration parameters 916 can determine how user input received at mobile device 902 affects one or more of attributes 910, 912. For illustration, user input 918 can be received at mobile device 902. User input 918 can generally indicate a first direction, such as a vertical or horizontal direction relative to mobile device 902.

As used herein, a user input may "substantially" indicate a direction if the user input would be recognized by the device as indicating that direction, which may depend on the particular device configuration and/or application. To illustrate, if the device will recognize a slip input as indicating a vertical direction, the slip input need not be exactly vertical but may be substantially vertical. As a specific, non-limiting example, the apparatus can be configured such that if a slip operation has a sufficiently large vector component in a direction, the slip operation is recognized as indicating that direction. For example, user input at the device can be resolved by the device into multiple directional components (e.g., vectors). The device can compare the directional components to determine a ratio of the directional components. If the ratio exceeds a threshold, the device can determine that the user input indicates the direction. To further illustrate, if the user input is not straight (or not substantially straight), the device can approximate the user input by "fitting" (e.g., interpolating) points associated with the user input to a line according to a technique. The technique may include a minimum mean square error (MMSE) technique, as an illustrative example. A sketch of such direction resolution follows.
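The following sketch (one reasonable reading of the line-fitting approach above, not the patent's implementation) resolves recorded touch points into a dominant direction: the points are fit to a line in the least-squares sense via PCA, and the ratio of the fitted direction's components is compared against a threshold.

import numpy as np

def resolve_direction(points, ratio_threshold=2.0):
    # Returns "horizontal", "vertical", or None if no direction dominates.
    pts = np.asarray(points, dtype=float)
    if len(pts) < 2:
        return None
    # Principal direction of the centered point cloud: the first right
    # singular vector is the direction of the best-fit line.
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    dx, dy = abs(vt[0, 0]), abs(vt[0, 1])
    if dx >= ratio_threshold * dy:
        return "horizontal"
    if dy >= ratio_threshold * dx:
        return "vertical"
    return None  # ambiguous slip: neither component dominates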
User configuration parameters 916 may indicate that a user input indicating a first direction indicates a first image editing operation to be performed on image 908. For example, user configuration parameters 916 can indicate that a user input indicating the first direction indicates a color attribute change operation. In a particular embodiment, user input 918 includes a swipe operation (e.g., a vertical or horizontal swipe). It should be appreciated that in one or more alternative examples, user input 918 can include another operation, such as a hovering operation, a click operation, a stylus input operation, an infrared (IR) based operation, a pointing gesture (e.g., using a multi-camera arrangement configured to detect pointing gestures), or another operation, depending on the particular implementation.

FIG. 9 further indicates a second operational state 920 of the mobile device 902. In the second operational state 920, the attribute 910 has been modified based on the user input 918 to generate the attribute 922. Mobile device 902 can generate modified image data 926 in response to user input 918.

User input 928 can be received at mobile device 902. User input 928 can generally indicate the first direction. User configuration parameters 916 may indicate that a subsequent user input indicating the first direction initiates another image editing operation on image 908. For example, the third operational state 930 of the mobile device 902 indicates that the attribute 912 has been modified based on the user input 928 to generate the attribute 932. Attribute 932 may correspond to blurring of an image layer of image 908. Mobile device 902 can generate modified image data 936 indicating attribute 932. The modified image data 936 may correspond to modified image data 926 having a "blur" effect (e.g., after applying a Gaussian blurring technique to the modified image data 926).

In a particular embodiment, the user input indicating the first direction indicates a first image editing operation. To illustrate, a horizontal swipe can indicate a color change operation targeting a particular layer (e.g., the foreground) of the image. One or more subsequent horizontal swipes may "cycle" through different color changing operations (e.g., red to blue to green, etc.). A user input indicating a second direction may indicate a second image editing operation, such as an image editing operation on a different layer of the image. For example, a vertical swipe may select or initiate an image blurring operation, such as to the background of the image. One or more subsequent vertical swipes may select or initiate one or more additional image editing operations on the background, such as by replacing the background with predetermined content (e.g., a beach scene) and/or other content. Thus, in one embodiment, a swipe in the first direction (e.g., vertical) may cycle between different available image editing operations (e.g., visual effects), and a swipe in the second direction (e.g., horizontal) may cycle between different options (such as color, blur strength, etc.) for the selected image editing operation. In alternative embodiments, swipes in different directions or along different axes may correspond to different image editing operations (e.g., an up/down swipe corresponds to a color change, a left/right swipe corresponds to blurring, a diagonal swipe corresponds to a background scene change, etc.). The particular direction associated with a user input operation can be configured by the user. For example, the user configuration parameters 916 can be user configurable to indicate that a diagonal swipe indicates a color changing operation (e.g., instead of the horizontal direction) or an image blurring operation (e.g., instead of the vertical direction). The user configuration of the user configuration parameters 916 is further described with reference to FIG. 11.

The example of FIG. 9 illustrates simplified control of image editing operations. For example, because user inputs 918, 928 correspond to respective image editing operations, a user of mobile device 902 can perform multiple image editing operations using a convenient and fast input method (e.g., swipe movements), thereby reducing the complexity of the image editing operations.
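The mapping from swipe directions to cycles of image editing operations described above might be sketched as follows. The dictionary layout, operation names, and helper are illustrative assumptions rather than the disclosed implementation of user configuration parameters 916.

```python
# A minimal sketch of user configuration parameters (cf. parameters 916)
# mapping each direction to an ordered cycle of editing operations.
user_config = {
    "horizontal": ["color_red", "color_blue", "color_green"],  # foreground
    "vertical": ["blur_background", "replace_background"],     # background
}
cycle_position = {direction: 0 for direction in user_config}

def on_swipe(direction):
    """Select the next operation in the cycle configured for `direction`."""
    ops = user_config.get(direction)
    if ops is None:
        return None
    op = ops[cycle_position[direction] % len(ops)]
    cycle_position[direction] += 1
    return op  # a real device would dispatch to the editing routine here

# Repeated swipes in one direction cycle through that direction's
# operations, as described above.
assert on_swipe("horizontal") == "color_red"
assert on_swipe("horizontal") == "color_blue"
```

Reordering the lists in user_config would correspond to reconfiguring the order of image editing operations, as described below with reference to FIG. 11.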
Referring to FIG. 10, a particular illustrative embodiment of a method is depicted and is generally designated 1000. Method 1000 includes displaying a first image at the mobile device, at 1004. The mobile device can correspond to the mobile device 902, and the first image can correspond to the image 908.

The method 1000 further includes receiving a first user input at the mobile device, at 1008. The first user input indicates a direction relative to the mobile device. For example, the first user input can indicate a vertical direction or a horizontal direction. The first user input can correspond to user input 918.

The method 1000 can further include performing a first image editing operation on the first image based on the first user input to generate a second image, at 1012. The first image editing operation may generate image 924, for example, by modifying attribute 910 to generate attribute 922. As a specific example, the first image editing operation can include modifying the color attributes of image 908 to produce image 924.

Method 1000 can further include displaying the second image at the mobile device, at 1016. For example, image 924 can be displayed at display device 904 of mobile device 902.

The method 1000 can further include receiving a second user input at the mobile device, at 1020. The second user input indicates the direction. In one or more other configurations, the second user input can generally indicate another direction relative to the mobile device (e.g., a horizontal direction instead of the vertical direction indicated by the first user input, etc.). The second user input can correspond to user input 928.

The method 1000 can further include performing a second image editing operation on the second image to generate a third image, at 1024. For example, the second image editing operation can modify attribute 912 to produce attribute 932, for example, by blurring a layer of image 924. The third image may correspond to image 934.

Method 1000 can optionally include receiving a third user input indicating a direction relative to the mobile device. The third user input corresponds to a command to cancel the first image editing operation and the second image editing operation. To illustrate, if the user of the mobile device is dissatisfied with the first image editing operation and the second image editing operation, the user may repeat the user input (e.g., a swipe operation substantially in a particular direction) to "undo" the first image editing operation and the second image editing operation.

Method 1000 illustrates simplified control of image editing operations. For example, a user of a mobile device can perform multiple image editing operations using a particular input method (e.g., swipe movements), thereby reducing the complexity of the image editing operations. Furthermore, as described with reference to FIG. 11, the user can reconfigure the user configuration parameters to adjust the order of the image editing operations.
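Before turning to FIG. 11, a minimal sketch of the optional cancel behavior in method 1000 (the third user input "undoing" the first and second image editing operations) could maintain a history stack; all names here are illustrative assumptions, not the disclosed implementation.

```python
# Each editing operation pushes the prior image onto a history stack;
# a recognized undo gesture pops states back off.
history = []

def apply_edit(image, edit_fn):
    """Apply an editing operation, remembering the prior image."""
    history.append(image)
    return edit_fn(image)

def undo(current_image, steps=1):
    """Revert the given number of editing operations, if available."""
    image = current_image
    for _ in range(min(steps, len(history))):
        image = history.pop()
    return image

# Example: undo both the first and second image editing operations.
img1 = "first image"                      # stand-ins for image data
img2 = apply_edit(img1, lambda im: im + " + color change")
img3 = apply_edit(img2, lambda im: im + " + blur")
assert undo(img3, steps=2) == img1
```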
FIG. 11 depicts a particular illustrative embodiment of a mobile device 902. Mobile device 902 can include one or more processing resources 1110 (e.g., a processor (e.g., processor 100), another processor, or a combination thereof). The one or more processing resources 1110 can be coupled to a computer readable medium, such as to memory 906 (e.g., a non-transitory computer readable medium). Memory 906 can store instructions 1158 that are executable by the one or more processing resources 1110 and data 1152 that can be used by the one or more processing resources 1110. Memory 906 can also store image data 914 and user configuration parameters 916.

Mobile device 902 can include a camera having an image sensor, such as a charge coupled device (CCD) image sensor and/or a complementary metal oxide semiconductor (CMOS) image sensor. For example, FIG. 11 depicts a camera 1156 that can be coupled to a camera controller 1190. The camera controller 1190 can be coupled to the one or more processing resources 1110. Image data 914 may correspond to an image captured by camera 1156.

FIG. 11 also shows display controller 1126 coupled to the one or more processing resources 1110 and coupled to display device 904. Display device 904 can be configured to present a user interface (UI) 1172. In a particular embodiment, display device 904 includes a touch screen, and UI 1172 is responsive to user operations (e.g., swipe operations) at the touch screen.

A coder/decoder (CODEC) 1134 can also be coupled to the one or more processing resources 1110. Speaker 1136 and microphone 1138 can be coupled to CODEC 1134. FIG. 11 also indicates that the wireless controller 1140 can be coupled to the one or more processing resources 1110. Wireless controller 1140 can be further coupled to antenna 1142 via radio frequency (RF) interface 1180.

In a particular embodiment, the one or more processing resources 1110, memory 906, display controller 1126, camera controller 1190, CODEC 1134, and wireless controller 1140 are included in a system-in-package or system-on-chip device 1122. Input device 1130 and power supply 1144 can be coupled to system-on-chip device 1122. Moreover, in a particular embodiment, and as illustrated in FIG. 11, display device 904, input device 1130, camera 1156, speaker 1136, microphone 1138, antenna 1142, RF interface 1180, and power supply 1144 are external to system-on-chip device 1122. However, each of display device 904, input device 1130, camera 1156, speaker 1136, microphone 1138, antenna 1142, RF interface 1180, and power supply 1144 can be coupled to a component of system-on-chip device 1122, such as to an interface or to a controller.

In operation, user preference input 1192 can be received at mobile device 902. User preference input 1192 can adjust the user configuration parameters 916. User preference input 1192 may be received at display device 904 (e.g., at a touch screen of display device 904), at input device 1130 (e.g., at a keyboard of input device 1130), or a combination thereof. In the example of FIG. 11, user preference input 1192 may reconfigure the order of image editing operations performed at mobile device 902. User preference input 1192 may reconfigure user configuration parameters 916 to indicate that a color change operation will precede an image blur operation, as an illustrative example.

To further illustrate, user preference input 1192 can reconfigure user configuration parameters 916 from a first state to a second state. The first state may indicate that an initial user input (e.g., user input 918 of FIG. 9) initiates a color change operation and that a subsequent user input (e.g., user input 928 of FIG. 9) initiates an image blur operation. By reconfiguring the user configuration parameters 916 from the first state to the second state, an initial user input (e.g., user input 918 of FIG. 9) may initiate an image blur operation, and a subsequent user input (e.g., user input 928 of FIG. 9) may initiate a color change operation.

The technique of FIG. 11 enables simplified control of a user interface (UI) by a user of a mobile device.
For example, the UI may enable a user to set user configuration parameters that assign a particular image editing operation to a particular user input (e.g., a swipe in a particular direction), which may simplify user control of an image editing application executed by the mobile device.

Instructions 1158 may be executed by the one or more processing resources 1110 to perform one or more operations described herein. To further explain, in connection with the described embodiments, a computer readable medium (e.g., memory 906) stores instructions (e.g., instructions 1158) that are executable by a processor (e.g., the one or more processing resources 1110) to cause a mobile device (e.g., mobile device 902) to display a first image (e.g., image 908) at the mobile device and to receive a first user input (e.g., user input 918) at the mobile device. The first user input indicates a direction relative to the mobile device. The instructions are further executable by the processor to perform a first image editing operation on the first image based on the first user input to generate a second image (e.g., image 924), to display the second image at the mobile device, and to receive a second user input (e.g., user input 928) at the mobile device. The second user input indicates the direction relative to the mobile device. The instructions are further executable by the processor to perform a second image editing operation on the second image based on the second user input to generate a third image (e.g., image 934).

In conjunction with the described embodiments, an apparatus includes means (e.g., display device 904) for displaying a first image (e.g., image 908) at a mobile device (e.g., mobile device 902) and means (e.g., display device 904 and/or input device 1130) for receiving a first user input (e.g., user input 918) at the mobile device. The first user input indicates a direction relative to the mobile device. The apparatus further includes means (e.g., the one or more processing resources 1110) for performing a first image editing operation on the first image based on the first user input to generate a second image (e.g., image 924), means (e.g., display device 904) for displaying the second image, and means (e.g., display device 904 and/or input device 1130) for receiving a second user input (e.g., user input 928). The second user input indicates the direction relative to the mobile device. The apparatus further includes means (e.g., the one or more processing resources 1110) for performing a second image editing operation on the second image based on the second user input to generate a third image (e.g., image 934).

Referring to FIG. 12, a particular embodiment of a method is depicted and is generally designated 1200. Method 1200 can be performed by a processor, such as processor 100 and/or processing resources 810, 1110. Method 1200 can be performed at a device, such as a mobile device (e.g., one or more of mobile devices 800, 902).

The method 1200 includes receiving a first user input from a user interface, at 1204. The first user input can correspond to user input 504, and the user interface can correspond to any of UIs 500, 872, and 1172. The first user input selects an image for a display operation. As an example, the image may correspond to image 502. To further illustrate, the first user input can correspond to a touch screen operation that selects an image from an image gallery presented at the user interface.
The first user input may correspond to a request to enlarge an image at the user interface from a "thumbnail" view to a "full" view.

The method 1200 further includes performing the display operation and automatically initiating a cluster operation using image data corresponding to the image based on the first user input, at 1208. To illustrate, the cluster operation can be performed concurrently with "loading" the image at the mobile device. Loading an image may include enlarging the image (e.g., from a thumbnail view to a full view) or launching an image editing application, as illustrative examples. The cluster operation can include a SLIC (simple linear iterative clustering) operation. The cluster operation can be initiated to identify clusters within the image data while the display operation is performed to enlarge the image from the thumbnail view to the full view.

Method 1200 can further include receiving a second user input from the user interface, at 1216. The second user input can correspond to user input 918. The second user input identifies a first image layer of the image. The first image layer may correspond to the image layer 104a. The second user input can include a swipe at a touch screen device identifying the foreground of the image. The second user input may indicate an image editing operation (e.g., a color changing operation, an image blurring operation, etc.) that targets the foreground.

The method 1200 can further include automatically initiating an image segmentation operation associated with the first image layer, at 1220. For example, the image segmentation operation can be initiated automatically after the second user input is completed (e.g., when the swipe is completed). In one or more other examples, the image segmentation operation may be initiated automatically upon receipt of a user input identifying the background of the image.

Method 1200 can further include performing an image component marking operation, at 1222. The image component marking operation can be initiated after the image segmentation operation is completed.

Method 1200 can further include receiving a third user input from the user interface, at 1224. The third user input identifies a second image layer of the image. For example, the third user input can include a swipe at the touch screen device identifying the background of the image. The background may correspond to the second image layer. The third user input may correspond to user input 928. The third user input may indicate an image editing operation (e.g., a color changing operation, an image blurring operation, etc.) that targets the background.

Method 1200 can further include modifying the image based on the third user input to produce a modified image, at 1228. As a specific illustrative example, the modified image may include a foreground and a background modified based on the second user input and the third user input, respectively.

Method 1200 facilitates an enhanced image editing experience for the user. For example, by "hiding" the latency associated with image clustering operations during image loading, image editing operations can be made to appear faster to the user (because, for example, clustering of the image can be completed before the user provides the user input indicating the image layer on which an image editing operation is to be performed). Moreover, the image segmentation operation can be initiated prior to receiving user input regarding image processing operations for all image layers of the image. For example, the image segmentation operation may be initiated once user input regarding the first image layer (e.g., the foreground) is received but before user input is received with respect to the second image layer (e.g., the background). In a particular example, image segmentation and component marking operations are performed relative to the first image layer while the user performs a swipe to indicate an image editing operation associated with the second image layer, thereby enhancing the responsiveness and speed of the image editing operations and improving the user experience.
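A minimal sketch of hiding the clustering latency behind the display operation follows; the class, the hypothetical run_clustering placeholder (standing in for an actual SLIC implementation), and the threading approach are illustrative assumptions only.

```python
import threading

def run_clustering(image_data):
    # Placeholder for a SLIC-style clustering pass over the image data.
    return {"clusters": ...}

class EditSession:
    def __init__(self, image_data):
        self.image_data = image_data
        self._clusters = None
        # The cluster operation starts on a worker thread as soon as
        # the image is selected for display (i.e., while it "loads").
        self._worker = threading.Thread(target=self._cluster)
        self._worker.start()

    def _cluster(self):
        self._clusters = run_clustering(self.image_data)

    def segment_layer(self, layer_hint):
        # Segmentation waits on the cluster result only if clustering
        # has not already finished by the time the layer-identifying
        # user input arrives.
        self._worker.join()
        return ("segmentation of", layer_hint, "using", self._clusters)
```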
In connection with the described embodiments, a computer readable medium can store instructions executable by a processor to cause the processor to receive a first user input from a user interface. The computer readable medium can correspond to one or more of the memories 832, 906, and the processor can correspond to any of processor 100 and/or processing resources 810, 1110. The user interface may correspond to any of the UIs 500, 872, and 1172. The first user input selects an image for a display operation. The image may correspond to image 502, and the first user input may correspond to user input 504. The instructions are further executable by the processor to perform the display operation based on the first user input and to automatically initiate a cluster operation using image data corresponding to the image based on the first user input. The image data may correspond to image data 102.

In connection with the described embodiments, a device, such as any of the mobile devices 800, 902, includes means for receiving a first user input from a user interface. The means for receiving the first user input may correspond to display 828 and/or display device 904. The user interface may correspond to any of the UIs 500, 872, and 1172. The first user input selects an image for a display operation. The image may correspond to image 502 as an illustrative example. The apparatus further includes means (e.g., display 828 and/or display device 904) for performing the display operation based on the first user input, and means (e.g., processor 100 and/or processing resources 810, 1110) for automatically initiating a cluster operation using image data (e.g., image data 102) corresponding to the image based on the first user input.

Those skilled in the art will appreciate that the devices and functionality disclosed above can be designed and configured as computer files (e.g., RTL, GDSII, GERBER, etc.) stored on a computer readable medium. Some or all of these files may be provided to manufacturing operators that manufacture devices based on such files. The resulting products include semiconductor wafers that are separated into semiconductor dies and packaged into semiconductor chips. The semiconductor chips are then employed within devices, such as within mobile device 800 and/or mobile device 902.

The various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether this functionality is implemented as hardware or software depends on the particular application and design constraints imposed on the overall system.
The described functionality may be implemented in a different manner for each particular application, and such implementation decisions should not be construed as causing a departure from the scope of the invention.

The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in random access memory (RAM), flash memory, read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), registers, a hard disk, a removable disk, a compact disc read only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary non-transitory storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium can reside in an application specific integrated circuit (ASIC) and/or a field programmable gate array (FPGA) chip. The ASIC and/or FPGA chip can reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.

The previous description of the disclosed embodiments is provided to enable a person skilled in the art to make or use the disclosed embodiments. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the invention. Therefore, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features as defined by the appended claims.
The present invention relates to a well-drive process in which the process of well driving is carried out simultaneously with a densification cycle. The inventive method is particularly applicable to isolation trenches having widths at or below about 0.2 microns. The inventive method may be applied to other semiconductive structures of varying geometries.
What is claimed and desired to be secured by United States Letters Patent is:

1. A method of forming a semiconductor structure comprising: providing a semiconductive substrate of a first semiconductivity type having a nitride layer thereover; forming a recess extending through the nitride layer and terminating within the semiconductive substrate; filling the recess with a dielectric material that projects out of the recess and above the nitride layer, said dielectric material being compositionally different than the material of the nitride layer; planarizing the dielectric material to form a top surface over the recess adjacent to the nitride layer; implanting through both the planarized dielectric material and the nitride layer a dopant of a second semiconductivity type to form a region of the second semiconductivity type in the semiconductive substrate; and increasing the density of the dielectric material while expanding the size of the region of the second semiconductivity type in the semiconductive substrate.

2. The method as defined in claim 1, wherein said expanding expands the region of the second semiconductivity type below the recess.

3. The method as defined in claim 1, wherein said expanding expands the region of the second semiconductivity type on opposite sides of the recess.

4. The method as defined in claim 1, wherein said expanding expands the region of the second semiconductivity type on only one of two opposite sides of the recess.

5. A method as defined in claim 1, wherein: filling the recess with said dielectric material forms a seam in the dielectric material within the recess; and said expanding eliminates said seam.

6. A method of forming a semiconductor structure comprising: providing a semiconductive substrate of a first semiconductivity type; forming an oxide layer upon the semiconductive substrate; forming a nitride layer upon the oxide layer; forming a recess extending through the nitride layer, past the oxide layer, and terminating at a bottom surface within the semiconductive substrate; filling the recess with an oxide material that projects out of the recess and above the nitride layer; planarizing the oxide material in the recess to form a top surface thereon that is co-planar with an adjacent top surface of the nitride layer; implanting through the nitride layer and the oxide material a dopant of a second semiconductivity type to form a region of the second semiconductivity type in the semiconductive substrate; and increasing the density of the oxide material in the recess while expanding the size of the region of semiconductivity of the second semiconductivity type in the semiconductive substrate.

7. The method as defined in claim 6, wherein the size of the region of semiconductivity of the second semiconductivity type in the semiconductive substrate is expanded below the bottom surface of the recess.
8. A method of forming a semiconductor structure comprising: providing a semiconductive substrate of a first semiconductivity type; forming a nitride layer over the semiconductive substrate, said nitride layer having a planar top surface; forming a recess extending from the planar top surface of the nitride layer and terminating at a bottom surface within the semiconductive substrate; filling the recess with an oxide material that projects out of the recess and above the nitride layer; planarizing the oxide material to form a planar top surface immediately over the recess that is co-planar with the planar top surface of the nitride layer; bombarding the co-planar top surfaces of the oxide material and the nitride layer with a dopant to form a region of a second semiconductivity type in the semiconductive substrate, wherein the dopant that is implanted passes through: the bottom surface of the recess; and the nitride layer; and increasing the density of the oxide material in the recess while expanding the size of the region of semiconductivity of the second semiconductivity type in the semiconductive substrate.

9. A method as defined in claim 8, wherein forming the recess in the semiconductive substrate includes anisotropically etching into said semiconductive substrate to form at least one isolation trench.

10. A method as defined in claim 8, wherein filling the recess includes filling the recess with silicon dioxide through a TEOS decomposition process.

11. A method as defined in claim 8, wherein: filling the recess forms one or more seams in the oxide material within the recess; and increasing the density of the oxide material within the recess while expanding the size of the region of semiconductivity of the second semiconductivity type in the semiconductive substrate includes thermal densifying of the oxide material within the recess, whereby said one or more seams formed within the recess are eliminated.

12. A method as defined in claim 8, further comprising, prior to filling the recess with the oxide material, forming a dielectric film upon a surface within the recess.

13. A method of forming a semiconductor structure according to claim 12, wherein forming the dielectric film upon the surface within the recess includes thermal oxidation of said semiconductive substrate.
14. A method of forming a semiconductor structure comprising: providing a P-type silicon substrate; thermally oxidizing the P-type silicon substrate to form a layer of silicon dioxide upon the P-type silicon substrate; forming a layer of silicon nitride on the layer of silicon dioxide; forming a planar top surface on the layer of silicon nitride; anisotropically etching an isolation trench having sidewalls and a bottom surface and extending through the planar top surface of the layer of silicon nitride, past the layer of silicon dioxide, and terminating at a bottom surface within the P-type silicon substrate; oxidizing the surfaces of the P-type silicon substrate that define the isolation trench; filling the isolation trench with a dielectric material that projects out of the isolation trench and above the planar top surface of the layer of silicon nitride; forming a planar top surface on the dielectric material immediately over the isolation trench that is co-planar with the planar top surface of the layer of silicon nitride; bombarding the co-planar top surfaces of the dielectric material and the layer of silicon nitride with an N-type dopant to form an N-type region in the P-type silicon substrate, wherein the N-type dopant passes through each of: the dielectric material in the isolation trench; the bottom surface of the isolation trench; the layer of silicon nitride; and the layer of silicon dioxide; and heating the dielectric material and the P-type silicon substrate to increase the density of the dielectric material in the isolation trench while expanding the size of the N-type region in the P-type silicon substrate below the isolation trench.

15. A method as defined in claim 14, wherein the dielectric material filling the isolation trench is formed through a TEOS decomposition process.

16. A method as defined in claim 14, wherein: filling the isolation trench forms one or more seams in the dielectric material within the isolation trench; and said heating eliminates said one or more seams.

17. A method of forming a semiconductor structure comprising: providing a semiconductive substrate of a first semiconductivity type having a nitride layer thereover; forming a first recess and a second recess each extending through the nitride layer and terminating within the semiconductive substrate; filling the first and second recesses with a dielectric material that projects out of both said first and second recesses and above the nitride layer, said dielectric material being compositionally different than the material of the nitride layer; planarizing the dielectric material to form a top surface over both said first and second recesses adjacent to the nitride layer; implanting through both the planarized dielectric material and the nitride layer a dopant of a second semiconductivity type to form a region of the second semiconductivity type in the semiconductive substrate; and increasing the density of the dielectric material in both the first and second recesses while expanding the size of the region of the second semiconductivity type in the semiconductive substrate.

18. The method as defined in claim 17, wherein said expanding expands the region of the second semiconductivity type below both the first and second recesses.

19. The method as defined in claim 17, wherein said expanding expands the region of the second semiconductivity type to contact one side of the first recess and opposite sides of the second recess.
20. The method as defined in claim 17, wherein said expanding expands the region of the second semiconductivity type to contact both the first and second recesses.

21. A method as defined in claim 17, wherein: filling the first and second recesses with said dielectric material forms a seam in the dielectric material within each of said first and second recesses; and said expanding eliminates each said seam.
BACKGROUND OF THE INVENTION

1. The Field of the Invention

The present invention relates to the formation of semiconductor devices. More particularly, the present invention relates to the fabrication of locally doped regions within a semiconductive substrate. In particular, the present invention relates to a method of controlling well-drive diffusion by combining well drive with densification of an isolation film on a semiconductive substrate.

2. The Relevant Technology

In the microelectronics industry, a substrate refers to one or more semiconductor layers or structures which include active or operable portions of semiconductor devices. In the context of this document, the term "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including but not limited to bulk semiconductive material such as a semiconductive wafer, either alone or in assemblies comprising other materials thereon, and semiconductive material layers, either alone or in assemblies comprising other materials. The term substrate refers to any supporting structure including but not limited to the semiconductive substrates described above.

In the microelectronics industry, the process of miniaturization entails shrinking the size of individual semiconductor devices and crowding more semiconductor devices into a given unit area. With miniaturization, problems of proper isolation between components arise. When miniaturization demands the shrinking of individual devices, isolation structures must also be shrunk. Attempts to isolate components from each other are currently limited to photolithographic limits of about 0.2 microns for isolation structure widths.

To form an isolation trench by photolithography, for example, the photoresist mask through which the isolation trench is etched is generally patterned using a beam of light, such as ultraviolet (UV) light, to transfer a pattern through an imaging lens from a photolithographic template to a photoresist coating which has been applied to the structural layer being patterned. The pattern of the photolithographic template includes opaque and transparent regions with selected shapes that match corresponding openings and intact portions intended to be formed into the photoresist coating. The photolithographic template is conventionally designed by computer assisted drafting and is of a much larger size than the semiconductor substrate on which the photoresist coating is located. Light is shone through the photolithographic template and is focused on the photoresist coating in a manner that reduces the pattern of the photolithographic template to the size of the photoresist coating and that develops the portions of the photoresist coating that are unmasked and are intended to remain. The undeveloped portions are thereafter easily removed.

The resolution with which a pattern can be transferred to the photoresist coating from the photolithographic template is currently limited in commercial applications to widths of about 0.2 microns or greater. In turn, the dimensions of the openings and intact regions of the photoresist mask, and consequently the dimensions of the shaped structures that are formed with the use of the photoresist mask, are correspondingly limited. Photolithographic resolution limits are thus a barrier to further miniaturization of integrated circuits. Accordingly, a need exists for an improved method of forming isolation trenches that have a size that is reduced from what can be formed with conventional photolithography.
The photolithography limit and accompanying problems of alignment and contamination are hindrances upon the ever-increasing pressure in the industry to miniaturize. Other problems occur in isolation trench formation when trenches are deep and wide in comparison to the size of the individual device that the trench is isolating: dielectric material, such as thermal or deposited silicon oxide, that fills the trench tends to encroach upon the active area that the trench is designed to isolate. Another problem is that wide and deep trenches tend to put a detrimental amount of stress upon the silicon of the active area that leads to defects such as stress cracks, slip dislocations, and device failure.

Isolation trenches and active areas are often doped, either to enhance conductivity within an isolation area, such as an increased breakdown voltage at the bottom and/or in the walls of an isolation trench before filling the isolation trench with a dielectric material, or to increase the threshold voltage (VT). For the fabrication of a complementary metal oxide semiconductor (CMOS) device, ion implantation to form a preferred breakdown voltage and a preferred VT has been implemented by patterning a mask to first protect, for example, the N-well side of the CMOS device and then to ion implant the N-well portion of the device. Following ion implantation of the selected site, the photoresist material must be removed and the CMOS device must be patterned with a second photoresist material that is substantially opposite from the previous photoresist material. After patterning of the second photoresist material, the complementary side of the semiconductor structure is ion implanted. This first mask/second mask technique was required to prevent contamination by the wrong type of dopant in each side of the CMOS device.

The first mask/second mask process involved several steps including the major steps of spinning on a photoresist material; curing; aligning a photomask template; exposing the photoresist material; removing developed portions of the photoresist material so as to form a pattern in the photoresist material; etching a desired topography through the patterned photoresist material, for example an isolation trench; ion implanting, for example into the isolation trench and upon an active area; removing the patterned photoresist material; and then performing essentially the same steps over again for the semiconductor structure in a scheme that is substantially opposite to the first photoresist material. Such an operation involves several possible chances for an erroneous fabrication step that will lower overall production yield.

For example, where an isolation trench was formed by an anisotropic etch, a portion of the first mask is mobilized to begin to line the recess formed by the anisotropic etch. In such a case, stripping of the first photoresist material may require a stronger stripping solution than would otherwise be needed. During photoresist material mobilization, the mobilized photoresist material may combine with other exposed portions of the semiconductor structure, such as metals, and thereby form a metal-polymer film within the recess being formed. Such a metal-polymer film resists stripping with conventional stripping solutions. A more effective stripping solution that removes a metal-polymer film, however, will likely also cause effacement of preferred topographies of the semiconductor structure that will compromise the integrity of the semiconductor structure.
It is preferable, at some point in fabrication of the isolation trench, to densify the fill material of the isolation trench. Densification is desirable because it helps to prevent separation of materials in contact with the fill material. It is sometimes preferable to perform densification of isolation trench fill material immediately following its deposition. Depending upon the specific application, however, densification may be carried out at other stages of the process. For example, densification of fill material by rapid thermal processing (RTP) may make either etchback or planarization of the semiconductor structure more difficult. As such, it has been preferable to densify later in the fabrication process, such as after a planarizing or etchback processing.

It is also preferable, at one point in the fabrication process, to thermally "drive in" an implanted well such as an N-well in a P-doped substrate. An example of a prior art well drive process includes formation of a P-doped region in a semiconductive substrate by a P-type material implantation, and patterning the semiconductive substrate for an N-well pattern such as with a photoresist material. An N-well is then formed, for example, with a KeV or MeV implantation of N-type materials. The N-well pattern mask is then removed and a thermal well-drive process is then carried out under such conditions as 200° C. to 1,000° C., 2 to 12 hours, and in a nitrogen atmosphere within a furnace.

With the ever-increasing pressure upon the industry to miniaturize, thermal processing such as diffusive thermal well driving must be under increasingly stringent controls. Such thermal processing must be factored with greater care into the overall processing thermal budget. A well drive done early in the process may achieve a preferred amount of thermal diffusion of dopants, but too much or too little subsequent thermal processing will cause unwanted results. For example, encroachment of N-doping into areas that are designed to be semiconductively neutral will cause the structure to be semiconductive, resulting in higher standby current device leakage. What is needed in the art is a method of well driving that reduces the number of processing operations and thereby enhances yield. What is also needed is a more consolidated form of semiconductor fabrication in which control over the thermal budget is enhanced.

SUMMARY OF THE INVENTION

The present invention relates to a well-drive process in which the process of well driving is carried out simultaneously with a densification cycle. The inventive method is particularly applicable to isolation trenches having widths at or below about 0.2 microns. However, the inventive method may be applied to other semiconductive structures of varying geometries.

According to a first embodiment of the present invention, a semiconductor structure may be formed with a semiconductive well within a semiconductive substrate. In a first preferred embodiment of the present invention, the semiconductor structure is blanket doped with a P-dopant material to form a substantially P-doped semiconductive substrate. After further processing, the semiconductor structure is prepared for formation of an N-well. The semiconductor structure is patterned with a mask using, for example, a photoresist material. Following formation of the mask, ion implantation is implemented so as to form the N-well. Implantation of the N-well may be carried out according to known techniques.
Following implantation of the N-well, the mask is removed and the semiconductive substrate is prepared for formation of further structures. A nitride layer is formed upon the upper surface of the semiconductive substrate, the nitride layer is formed into a patterned nitride layer, and a recess, such as an isolation trench, is formed therein. An optional process step may be carried out at this point in which further doping through the bottom of the recess may be carried out, either to enhance conductivity of the N-well or to enhance the isolation of the N-well caused by the recess through implantation of a P-dopant material.

Following optional additional implantation below the bottom of the recess, thermal oxidation of the exposed portion of the semiconductive substrate within the recess is carried out. Thermal oxidation forms a thermal oxide layer within the recess. The thermal oxide layer forms a first isolation film portion for this example of this embodiment. A second oxide material is formed by formation of an isolation film, for example, by formation of oxide such as through the tetraethylorthosilicate (TEOS) decomposition process, the formation of borophosphosilicate glass (BPSG), and the like.

The semiconductor structure is subjected to a planarization process. Planarization is carried out before thermal processing to cause densification of the isolation film and expansion of the N-well. Planarization is preferably carried out before thermal processing because a densified film is usually more difficult to planarize than a film before densification. The patterned nitride layer has formed therefrom a reduced height nitride layer, with a planarized surface of the semiconductor structure that includes the reduced height nitride layer and the isolation film that is densified within the recess and that has formed a filled recess. Densification and well-drive processing have been carried out simultaneously.

Structures formed by the inventive method may include such configurations as an N-well in a P-substrate, a P-well in an N-substrate, an N+ well in an N-substrate, or a P+ well in a P-substrate. It is to be understood herein that any of the aforementioned structures formed by the inventive method constitutes a semiconductive substrate of a first semiconductivity type that has been patterned and implanted with a material to form a region of semiconductivity of a second semiconductivity type.

A second preferred embodiment of the present invention includes a blanket implantation of a P-dopant material into the semiconductive substrate. Formation of a nitride layer is next carried out, thus bypassing temporarily the formation of the N-well until a later stage in this embodiment of the inventive method. The nitride layer is patterned to form a patterned nitride layer without the presence of the N-well at this stage of this embodiment. Formation of a recess is next carried out, followed by optional formation of a thermal oxide layer, formation of an isolation film, and planarization of the isolation film and optionally a portion of the patterned nitride layer.

The next stage of this preferred embodiment of the inventive method includes formation of a mask that is deposited and patterned upon the planarized surface of the semiconductor structure. Formation of an N-well is next carried out by an MeV doping of an N-dopant material through the N-well mask.
Following the implantation of N-dopant materials to form the N-well, the processes of densification of the isolation film and well-drive processing to expand the N-well into the expanded N-well are then carried out. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

In order that the manner in which the above-recited and other advantages of the invention are obtained may be appreciated, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:

FIG. 1 is an elevational cross-section view of a semiconductor structure illustrating various types of dopant wells within a semiconductive substrate, including an isolated well, a split well, and a multi-function well. The size of each well as illustrated is considered to be variable depending upon the degree of well-driving that has been carried out.

FIG. 2 is an elevational cross-section view of a semiconductor structure that is being subjected to the inventive process, wherein a P-doped semiconductive substrate has been blanket implanted with a P-dopant to form a blanket dopant profile therein.

FIG. 3 is an elevational cross-section view of the semiconductor structure depicted in FIG. 2 after further processing according to the inventive method, wherein a mask has been patterned upon the semiconductor structure and ion implantation of N-doping materials is being carried out through exposed regions in the mask to form an N-well.

FIG. 4 is an elevational cross-section view of the semiconductor structure depicted in FIG. 3 after further processing, wherein the mask has been removed, and a nitride layer has been formed upon the upper surface of the semiconductor structure in preparation for further patterning and complementary dopant implantation.

FIG. 5 is an elevational cross-section view of the semiconductor structure depicted in FIG. 4 after further processing, wherein the nitride layer has been patterned and etched to form at least one recess in the semiconductive substrate, and an optional second doping operation is carried out in order to extend insulative or conductive qualities below the recess through the N-well.

FIG. 6 is an elevational cross-section view of the semiconductor structure depicted in FIG. 5 after further processing, wherein thermal oxidation has been carried out to oxidize exposed portions of the semiconductive substrate, particularly within the recess, and wherein an oxide layer has been deposited upon the upper surface of the semiconductive substrate and within the recess to form a substantially continuous isolation film.

FIG. 7 is an elevational cross-section view of the semiconductor structure depicted in FIG. 6 after further processing, wherein the semiconductor structure has been planarized to remove substantially superficial portions of the isolation film and optionally at least a portion of the recess-patterned nitride layer, and in which the processes of densification and well-drive diffusion have been carried out according to the inventive method.
FIG. 8 is an elevational cross-sectional view of a semiconductor structure processed according to another preferred embodiment of the present invention, wherein implantation of N-well materials is carried out after formation of a filled recess by patterning through the nitride layer and implantation of N-dopant materials through an N-well mask.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Reference will now be made to the drawings wherein like structures will be provided with like reference designations. It is to be understood that the drawings are diagrammatic and schematic representations of embodiments of the present invention and are not drawn to scale.

The present invention relates to a well-drive process in which the process of well driving is carried out simultaneously with a densification cycle. The inventive method is particularly applicable to isolation trenches having widths at or below about 0.2 microns. However, the inventive method may be applied to other semiconductive structures of varying geometries.

According to a first embodiment of the present invention, a semiconductor structure 10, as illustrated in FIG. 1, may be formed with a semiconductive well within a semiconductive substrate 12. As can be seen in FIG. 1, various well types may be formed within semiconductive substrate 12. For example, an isolated N-well 18 may be formed between two isolation structures 16. Another example is a split N-well 20 that substantially straddles a single isolation structure 16. Yet another example is a multiple-function N-well 22 that is terminated at one side with isolation structure 16, is substantially split with another isolation structure 16, and is terminated at another side with a distinct dopant gradient that changes from substantially N-doped to P-doped.

It can be seen in FIG. 1 that there are two types of active areas that may be formed below upper surface 32 of semiconductive substrate 12. Within isolated N-well 18, an active area of a single semiconductivity type is formed, as indicated by the letters AA immediately above upper surface 32. In the center N-well of FIG. 1, it can be seen that split N-well 20 comprises an active area that contains a semiconductor junction of an N-doped region and a P-doped region to form a junction active area (JAA). The N-well at the right of FIG. 1 depicts a combination of an active area of a single semiconductivity type and a JAA. Yet a fourth possible configuration of an active area (not pictured) comprises an N-well wherein a plurality of isolation structures are formed, and wherein a semiconductor junction such as an N-P region interface comprises each lateral boundary of the N-well.

It can be appreciated that various other structures that use a combination of isolation structure 16 and N-wells 18, 20, 22 may be formed according to the inventive method. For example, the depth d of N-wells 18, 20, or 22 will depend upon implantation energy, implantation concentration, and well-drive intensity.
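For context, the dependence of well depth on well-drive intensity can be estimated from standard dopant-diffusion relations; these textbook expressions are offered as background only and are not equations or values taken from this disclosure:

```latex
% Illustrative background: drive-in diffusion estimates.
% D_0 (pre-exponential factor) and E_a (activation energy) are
% dopant- and substrate-specific constants; k is Boltzmann's constant.
D(T) = D_0 \, e^{-E_a / kT}
\qquad
x_j \approx 2\sqrt{D(T)\, t}
```

Under such an estimate, the drive depth grows roughly with the square root of the time at temperature, which is one reason the simultaneous densification and well-drive cycle must be factored carefully into the overall thermal budget.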
An example of a variation of split N-well 20 is an N-dopant depth, concentration, and well-drive intensity that would cause split N-well 20 to form substantially between the upper surface 32 of semiconductor structure 10 and the bottom 40 of isolation structure 16. This variation may be carried out upon isolated N-well 18 and multiple-function N-well 22.

In a first preferred embodiment of the present invention, semiconductor structure 10, as illustrated in FIG. 2, is blanket doped with a P-dopant material to form a substantially P-doped semiconductive substrate 12. Blanket dopant profile 24 illustrates quantitatively the concentration of the P-dopant material within semiconductive substrate 12. The P-dopant material will preferably be implanted in a concentration range from about 1×10^12 atoms/cm^3 to about 1×10^15 atoms/cm^3, such as by single or multiple implantations.

After further processing, semiconductor structure 10 as illustrated in FIG. 3 is prepared for formation of N-well 18, 20, or 22 (hereinafter referred to as N-well 62). Semiconductor structure 10 is patterned with a mask 26 using, for example, a photoresist material. Ion implantation with N-dopant material follows formation of mask 26 to form N-well 62. Implantation of N-well 62 may be carried out according to known techniques. For example, N-well 62 is illustrated in FIG. 3 as having been implanted by a series of at least three implantations to form well dopant concentration profiles 28 previous to a well-drive process. A dopant profile will be chosen by the semiconductor fabricator according to a specific application. The N-dopant material will preferably be implanted in a concentration range from about 1×10^12 atoms/cm^3 to about 1×10^15 atoms/cm^3.

Following implantation of N-well 62, mask 26 is removed and semiconductor structure 10 is prepared for formation of further structures. FIG. 4 illustrates the formation of a nitride layer 30 upon upper surface 32 of semiconductive substrate 12. Nitride layer 30 is further processed as illustrated in FIG. 5, wherein a patterned nitride layer 34 and a recess 36, such as an isolation trench, have been formed. The choice of positioning of recess 36 within N-well 62 as illustrated in FIG. 5 has dictated that the recess 36 on the right of FIG. 5 has caused N-well 62 to form split N-well 20.

An optional process step may be carried out at this point in which further doping through bottom 40 of recess 36 may be carried out, either to enhance conductivity of split N-well 20 or to enhance the isolation of split N-well 20 caused by recess 36 through implantation of a P-dopant material. It can be seen that recess bottom dopant profiles 38 are formed below bottom 40 of recess 36.

Following optional additional implantation below bottom 40 of recess 36, thermal oxidation of the exposed portion of semiconductive substrate 12 is carried out. Thermal oxidation forms a thermal oxide layer 42 within recess 36, as seen in FIG. 6. Thermal oxide layer 42 forms a first isolation film portion for this example of this embodiment. A second oxide material is formed by formation of an isolation film, for example, by formation of oxide such as through the tetraethylorthosilicate (TEOS) decomposition process, the formation of borophosphosilicate glass (BPSG), and the like. It can be seen within FIG. 6 that an isolation film 44 has formed and a fill seam 46 has formed within isolation film 44 that is characteristic of filling a recess such as recess 36.
Alternative processing paths are available to the process engineer. For example, in one embodiment, thermal processing may now be carried out in which densification of isolation film 44 and expansion of N-well 62 are carried out simultaneously. However, the preferred embodiment of this processing alternative is depicted in FIG. 7. Semiconductor structure 10 depicted in FIG. 6 is subjected to a planarization process to achieve semiconductor structure 10 depicted in FIG. 7. Planarization is carried out before thermal processing to cause densification of isolation film 44 and expansion of N-well 62. Planarization is preferably carried out before thermal processing because a densified film is usually more difficult to planarize than a film before densification.

It can be seen in FIG. 7 that planarization has substantially removed isolation film 44 above the level of patterned nitride layer 34 as depicted in FIG. 6. Patterned nitride layer 34, illustrated in FIG. 6, has formed a reduced height nitride layer 64 with a planarized surface 48 of semiconductor structure 10 that includes reduced height nitride layer 64 and isolation film 44 that is densified within recess 36 so as to form a filled recess 50. Densification and well-drive processing have preferably been carried out simultaneously. It can be seen within FIG. 7 that split N-well 20 has formed an expanded well 52. A phantom line 54 illustrates the previous boundary of split N-well 20 before the well-drive process as depicted in FIG. 6.

Structures formed by the inventive method may include such configurations as an N-well in a P-substrate, a P-well in an N-substrate, an N+ well in an N-substrate, or a P+ well in a P-substrate. It is to be understood herein that any of the aforementioned structures formed by the inventive method constitutes a semiconductive substrate of a first semiconductivity type that has been patterned and implanted with a material to form a region of semiconductivity of a second semiconductivity type.

As an example of the first embodiment of the present invention, an N-well is formed within a P-type semiconductive substrate. The N-well comprises an active area and at least one isolation trench that has been filled with a dielectric material. FIG. 2 illustrates the first step in the process of this example, wherein boron is implanted within semiconductive substrate 12 to a preferred depth and with a preferred blanket dopant profile 24. Either before or after formation of blanket dopant profile 24, oxide layer 14 is optionally formed upon upper surface 32 of semiconductive substrate 12.

A photoresist material is spun on optional oxide layer 14 or upper surface 32, seen in FIG. 3, and cured. The photoresist material is exposed and patterned to form mask 26. Formation of N-well 62 is carried out by multiple implantations of N-type materials. In this example, a first implantation of phosphorus is carried out to form the lower dopant concentration profile of well dopant concentration profiles 28. Following implantation of phosphorus, a dual implantation of arsenic is carried out to form the dopant concentration profiles making up the upper two profiles of well dopant concentration profiles 28. Implantation of phosphorus and arsenic is carried out under varying implantation energies, in either the KeV or MeV ranges. As referred to herein, a KeV range is an implantation energy in a range from about 25 KeV to about 600 KeV, and an MeV range is an implantation energy in a range from about 600 KeV to about 2,800 KeV.
FIG. 4 illustrates further processing of semiconductor structure 10 according to this example. Mask 26 has been removed and nitride layer 30 has been formed upon upper surface 32 of semiconductive substrate 12. Nitride layer 30 may be in a thickness range from about 500 Å to about 2,500 Å. A preferred thickness of nitride layer 30 is about 1,000 Å. Nitride layer 30 is further processed as illustrated in FIG. 5 by spinning on a photoresist material, then curing, aligning, exposing, and patterning the photoresist material. An anisotropic etch is then carried out in which etching substantially penetrates nitride layer 30 and optional oxide layer 14, and penetrates into semiconductive substrate 12. Recess 36 is thereby formed. The dimensions of recess 36 are dependent upon the specific application. In this example, a preferred width of recess 36 is in a range from about 0.1 microns to about 0.3 microns, more preferably in a range from about 0.15 microns to about 0.25 microns, and most preferably in a range from about 0.18 microns to about 0.2 microns. In order to achieve dimensions in the more preferred range, the process engineer may alternatively select to deposit an antireflective coating upon nitride layer 30. Nitride layer 30 itself may be antireflective, such as a refractory metal silicon nitride, e.g., a tungsten silicon nitride; a refractory metal nitride, e.g., a tungsten nitride; or a silicon nitride.

Recess 36 forms an isolation trench as illustrated in FIG. 5. The positioning of N-well 62 and recess 36 forms a JAA seen in the center of FIG. 5. Positioning of N-well 62 and recess 36 depends upon the specific application in which an active area or a JAA may be formed. An optional process step is carried out at this point in which further doping through bottom 40 of recess 36 is carried out. Doping is carried out in order to enhance the insulative quality of recess 36, as it will ultimately form an isolation trench. Because recess 36 is formed within N-well 62, a P-dopant material such as boron is implanted to form recess bottom dopant profiles 38. Implantation is carried out in two steps in order to achieve a preferred depth and concentration of boron that essentially neutralizes the semiconductive nature of N-well 62 immediately below bottom 40.

Following implantation below bottom 40 of recess 36, oxidation of exposed portions of semiconductive substrate 12 within recess 36 is carried out by thermal processing. FIG. 6 illustrates the result of thermal processing to form thermal oxide layer 42 within recess 36. Thermal oxide layer 42 is formed in order to prevent any undesirable contaminant that may be carried in the oxide materials that will ultimately form filled recess 50 as seen in FIG. 7. In FIG. 6, it can be seen that isolation film 44 has been formed upon semiconductor structure 10 to substantially fill recess 36. In this example, isolation film 44 is formed through the decomposition of TEOS. Film seam 46 is shown, thus illustrating the substantially undensified nature of isolation film 44 at this stage of the process. Semiconductor structure 10 is subjected to a planarization process to achieve semiconductor structure 10 depicted in FIG. 7. Planarization is carried out by chemical-mechanical polishing (CMP), although simple etchback may also be used.
CMP will preferably use a polishing solution that is substantially selective to nitride layer 30 over isolation film 44 by a factor of about 10:1, more preferably about 5:1, and most preferably about 2:1. The process engineer may choose a preferred selectivity depending upon the required thickness of patterned nitride layer 34 both before and after CMP. It can be seen in FIG. 7 that CMP has substantially removed isolation film 44 above the level of patterned nitride layer 34. Patterned nitride layer 34 has formed reduced height nitride layer 64 with planarized surface 48 of semiconductor structure 10. The quantity of reduced height nitride layer 64 that has been removed, compared to the original thickness of nitride layer 30, is preferably about 80 percent of the original thickness, more preferably about 50 percent, and most preferably about 35 percent. Densification of isolation film 44 and substantial elimination of film seam 46 as depicted in FIG. 7 are carried out in a manner that simultaneously accomplishes the well-drive process. Because arsenic diffuses less readily than phosphorus within N-well 62, the N-well profile 66 depicted in FIG. 7 may have a profile that is wider at the N-well bottom 68 than at upper surface 32. As illustrated in FIG. 7, both an active area and a JAA have been formed. Such a configuration will depend upon the application required by the process engineer.

A second preferred embodiment of the present invention is appropriately illustrated by reference to semiconductor structure 10 depicted in FIG. 2. A blanket implantation of a P-dopant material is carried out to form blanket dopant profile 24. Optionally, an oxide layer 14 may be formed to protect upper surface 32 of semiconductive substrate 12. Formation of nitride layer 30 is next carried out, thus temporarily bypassing the formation of N-well 62 until a later stage in this embodiment of the inventive method. Nitride layer 30 is patterned to form patterned nitride layer 34, as analogously illustrated in FIG. 5 without the presence of N-well 62. Formation of recess 36 is next carried out, followed by optional formation of thermal oxide layer 42, formation of isolation film 44, and planarization of isolation film 44 and optionally a portion of patterned nitride layer 34 to form optional reduced height nitride layer 64 and filled recess 50, as analogously depicted in FIG. 7 without the presence of N-well 62.

FIG. 8 appropriately depicts the next stage of this preferred embodiment of the inventive method. It can be seen that an N-well mask 56 has been deposited and is patterned upon planarized surface 48 of semiconductor structure 10. Formation of an N-well, for example isolated N-well 18, is next carried out by an MeV doping of an N-dopant material through N-well mask 56. A characteristic implantation boundary 58 can be seen in FIG. 8 as a phantom line, wherein formation of isolated N-well 18 was carried out by implantation through filled recess 50 to form a portion of isolated N-well 18, and through reduced height nitride layer 64, oxide layer 14, and into semiconductive substrate 12 to form another portion of isolated N-well 18. Following the implantation of N-dopant materials to form isolated N-well 18, the process of densification of isolation film 44 and well-drive processing to expand isolated N-well 18 into expanded N-well 52 are carried out.
Removal of N-well mask 56 may be carried out simultaneously with thermal processing according to the inventive method, whereby N-well mask 56 is substantially driven off by the thermal energy used in the thermal processing step that simultaneously drives in N-well 62 and densifies isolation film 44 within filled recess 50.

As an example of the second embodiment of the present invention, a blanket implantation of a P-dopant material is carried out to form blanket dopant profile 24 seen in FIG. 2. The P-dopant material is boron. Oxide layer 14 is optionally formed by thermal processing, either before or after implantation of boron to form blanket dopant profile 24. Nitride layer 30 is formed by chemical vapor deposition (CVD) of silicon nitride. Silicon nitride may be represented as SixNy, where y is in a range from about 1 to about 6 and x is in a range from about 1 to about 3. A photoresist material is spun on, cured, aligned, and exposed to form a pattern that will expose upper surface 32 of semiconductive substrate 12 in preparation for formation of recess 36 in the form of a trench. An anisotropic etch is carried out to form recess 36, in which nitride layer 30 is formed into patterned nitride layer 34, oxide layer 14 is removed according to the photoresist material, and recess 36 is formed into semiconductive substrate 12 to a desired depth d as seen in FIG. 5. Removal of the photoresist material may be carried out by conventional methods. One alternative removal method is to thermally drive off the photoresist material and thereby simultaneously form thermal oxide layer 42 within exposed portions of semiconductive substrate 12 that form recess 36. Alternatively, the photoresist material may be chemically stripped and formation of thermal oxide layer 42 carried out subsequently. Following formation of thermal oxide layer 42, analogously illustrated in FIG. 6 without the presence of N-well 62 in the form of split N-well 20, isolation film 44 is formed by the decomposition of TEOS. Planarization of isolation film 44 and at least a portion of patterned nitride layer 34 is carried out by CMP. Planarized upper surface 48, illustrated in FIG. 8, is used as the upper surface for a photoresist material that is patterned to form N-well 62 in the form of isolated N-well 18. A photoresist material is spun on, cured, aligned, exposed, and patterned. Implantation of N-doping materials is carried out first by MeV doping of phosphorus to a depth at or near the depth of bottom 40. A plurality of additional N-doping material implantations is then carried out using arsenic at a depth between upper surface 32 and bottom 40. Depending upon the thickness of reduced height nitride layer 64, keV implantation, or preferably MeV implantation, is carried out for the plurality of arsenic implantations. The plurality of arsenic implantations is preferably one to three, and more preferably two. Following the implantation of N-dopant materials to form isolated N-well 18, the process of densification of isolation film 44 and well-drive processing to expand isolated N-well 18 into expanded N-well 52 are carried out.

The inventive method has been demonstrated in two preferred embodiments and with examples during the formation of an isolation trench or a plurality thereof. The present invention also includes the method of combining the formation of an isolation structure such as a local oxidation of silicon (LOCOS) structure with the well-drive process.
It can be appreciated that various other embodiments of the present invention may be contemplated by the process engineer, wherein a combination of well driving and a second thermal processing step is made. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims, in whole or in part, rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
In one embodiment, an apparatus includes: a plurality of execution lanes to perform parallel execution of instructions; and a unified symbolic store address buffer coupled to the plurality of execution lanes, the unified symbolic store address buffer comprising a plurality of entries each to store a symbolic store address for a store instruction to be executed by at least some of the plurality of execution lanes. Other embodiments are described and claimed.
1.An apparatus comprising:
a plurality of execution lanes to perform parallel execution of instructions; and
a unified symbolic store address buffer coupled to the plurality of execution lanes, the unified symbolic store address buffer comprising a plurality of entries each to store a symbolic store address for a store instruction to be executed by at least some of the plurality of execution lanes.
2.The apparatus of claim 1, further comprising a scheduler to generate the symbolic store address based on at least some address fields of the store instruction, the symbolic store address comprising a plurality of fields including a displacement field, a base register field, and an index register field.
3.The apparatus of claim 2, wherein the plurality of fields further includes a scale factor field and an operand size field.
4.The apparatus of claim 2, wherein the scheduler is, for a load instruction following the store instruction in program order, to generate a symbolic load address for the load instruction based on at least some address fields of the load instruction and access the unified symbolic store address buffer based on the symbolic load address, to determine whether the load instruction conflicts with an in-flight store instruction.
5.The apparatus of claim 4, wherein in response to a determination that the load instruction conflicts with the in-flight store instruction, the scheduler is to suppress the load instruction until the in-flight store instruction completes.
6.The apparatus of claim 4, wherein in response to a determination that the load instruction does not conflict with the in-flight store instruction, the scheduler is to speculatively dispatch the load instruction to the plurality of execution lanes.
7.The apparatus of claim 6, wherein in response to the speculative dispatch of the load instruction, at least some of the plurality of execution lanes are to compute a lane load address for the load instruction, execute the load instruction and store the lane load address into a memory order queue of the execution lane.
8.The apparatus of claim 7, wherein at retirement of the store instruction, each of the plurality of execution lanes is to compute a lane store address for the store instruction and determine based at least in part on contents of the memory order queue whether one or more load instructions conflict with the store instruction.
9.The apparatus of claim 8, wherein in response to a determination of the conflict in a first execution lane, the first execution lane is to flush the one or more load instructions from the first execution lane.
10.The apparatus of any one of claims 1-9, wherein the apparatus is to dynamically disable speculative execution of load instructions based at least in part on a performance metric of an application in execution.
11.A method comprising:
receiving, in a scheduler of a processor, a single program multiple data (SPMD) store instruction;
generating a symbolic address for the SPMD store instruction;
storing the symbolic address for the SPMD store instruction in an entry of a unified symbolic store address buffer;
dispatching the SPMD store instruction to a plurality of execution lanes of the processor; and
speculatively dispatching a load instruction following the SPMD store instruction in program order to the plurality of execution lanes based at least in part on access to the unified symbolic store address buffer with a symbolic address for the load instruction.
12.The method of claim 11, further comprising preventing the load instruction from being speculatively dispatched when the symbolic address for the load instruction matches an entry in the unified symbolic store address buffer.
13.The method of claim 12, further comprising generating the symbolic address for the SPMD store instruction based on an address of the SPMD store instruction, the symbolic address for the SPMD store instruction comprising a plurality of fields including a displacement field, a base register field, an index register field, a scale factor field and an operand size field.
14.A computer-readable storage medium including computer-readable instructions, when executed, to implement a method as claimed in any one of claims 11 to 13.
15.An apparatus comprising means to perform a method as claimed in any one of claims 11 to 13.
Technical Field

Embodiments relate to processor architectures for handling store operations.

Background

Data parallel single program multiple data (SPMD) processors coordinate many execution lanes as a group to amortize control logic and state for density and energy efficiency. In non-blocking (on stores) processor microarchitectures, stores are broken into two operations: (1) a store address calculation operation (STA) that logically enforces program order with respect to other loads and stores (for self-consistency); and (2) a senior store data operation (STD) that occurs at instruction retirement to store the data into memory. However, this approach requires generation of a store address per lane at STA dispatch. This store address is then stored in a per lane store address buffer and checked by subsequent loads with per lane content addressable memory logic for memory ordering conflicts, which operates until the STD operation dispatches and completes many cycles later. As such, there is considerable chip real estate and power consumption expense for such processors.

Brief Description of the Drawings

FIG. 1A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention.
FIG. 1B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention.
FIGs. 2A-B illustrate a block diagram of a more specific exemplary in-order core architecture in accordance with an embodiment of the present invention.
FIG. 3 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention.
FIG. 4 is a block diagram of a system in accordance with one embodiment of the present invention.
FIG. 5 is a block diagram of a first more specific exemplary system in accordance with an embodiment of the present invention.
FIG. 6 is a block diagram of a second more specific exemplary system in accordance with an embodiment of the present invention.
FIG. 7 is a block diagram of a SoC in accordance with an embodiment of the present invention.
FIG. 8 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.
FIG. 9 is a block diagram illustrating one example of a data parallel cluster (DPC) in accordance with an embodiment of the present invention.
FIGs. 10A-C are block diagrams of the data parallel cluster integrated in a computer system in a variety of ways in accordance with an embodiment of the present invention.
FIG. 11 illustrates one example of a microthread state in accordance with an embodiment of the present invention.
FIG. 12 is a block diagram of multiple data parallel clusters collocated into a larger unit of scaling in accordance with an embodiment of the present invention.
FIG. 13 is a block diagram of a portion of a processor in accordance with an embodiment.
FIG. 14 is a flow diagram of a method in accordance with one embodiment of the present invention.
FIG. 15 is a flow diagram of a method in accordance with another embodiment of the present invention.
FIG. 16 is a flow diagram of a method in accordance with yet another embodiment of the present invention.
Detailed Description

In various embodiments, a processor having a single program multiple data architecture may be configured to generate symbolic addresses for store operations and to leverage a single unified symbolic store address buffer to store information regarding those store operations, reducing area and power consumption costs. In addition, using embodiments herein, techniques are provided to enable more rapid dispatch of load instructions following such store instructions in program order, referred to herein as younger instructions. In this way, embodiments enable speculative dispatch and execution of load instructions in a manner that improves latency and reduces power consumption.

In a particular implementation, a processor architecture is provided that includes various front end circuitry configured to operate on individual instructions and a plurality of execution lanes including execution units, each of which is configured to perform operations for these instructions on a per lane basis. Note that herein, the terms "operations" and "instructions" are used interchangeably. Furthermore, while particular techniques for handling store operations using symbolic address generation are described in the context of store instructions (and dependent load instructions), understand that in at least certain architectures, user-level store and load instructions may be decoded into one or more micro-instructions (uops) that are machine-level instructions actually executed in execution units. For ease of generality, the terms "operations," "instructions," and "uops" are used interchangeably.

With a SPMD processor, execution of the same program is enabled across multiple execution lanes, in which the same instruction is dispatched across the lanes in a single program multiple data model. In an implementation, multiple instruction queues may be provided, where memory instructions are stored in a first instruction queue and arithmetic-based instructions (referred to herein as ALU instructions) are stored in a second instruction queue. Memory instructions are initially dispatched from these instruction queues to parallel execution pipelines in-order.

At store address dispatch (STA) of a store instruction, a single symbolic address is generated and placed in a unified symbolic store address buffer, avoiding per lane store address buffer (SAB) storage and avoiding per lane SAB content addressable memory (CAM) logic. Future load instructions (namely load instructions following this store instruction in program order) symbolically access this symbolic store address buffer based on a generated symbolic load address for the load instruction (instead of a multiplicity of SABs across lanes) to speculatively (but with high confidence) detect self-consistency (intra-lane) memory ordering violations simultaneously for all lanes. In this regard, these future load instructions need not perform per execution lane checking of store addresses at this point of dispatch, reducing complexity, chip area and power consumption.

At store data dispatch (STD) at retirement of a store instruction, a per lane store address is computed. Using this per lane generated store address, access may be made to a per lane memory ordering queue (MOQ) that is populated by younger loads, to detect any mis-speculated younger loads.
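To make the per lane MOQ concrete before turning to the recovery actions described next, the following C++ sketch models it as a small queue of non-symbolic load addresses that a senior store scans at STD dispatch. This is a minimal sketch under stated assumptions: the type and member names are hypothetical, not structures named in this disclosure, and a conflict is assumed to be a simple byte-range overlap.

#include <cstdint>
#include <deque>

// Hypothetical per lane memory ordering queue (MOQ) entry: the
// non-symbolic address and size of a speculatively executed younger load.
struct MoqEntry {
    uint64_t addr;  // computed lane load address
    uint32_t size;  // access width in bytes
};

class LaneMoq {
public:
    // Invoked when a younger load speculatively executes on this lane.
    void record_load(uint64_t addr, uint32_t size) {
        entries_.push_back({addr, size});
    }

    // Invoked at STD dispatch of a senior store: returns true if any
    // younger load overlapped the store's byte range, meaning the load
    // was mis-speculated and the lane pipeline must be flushed.
    bool store_conflicts(uint64_t store_addr, uint32_t store_size) const {
        for (const MoqEntry& e : entries_) {
            bool disjoint = (e.addr + e.size <= store_addr) ||
                            (store_addr + store_size <= e.addr);
            if (!disjoint) return true;
        }
        return false;
    }

    void clear() { entries_.clear(); }  // e.g., on a lane flush

private:
    std::deque<MoqEntry> entries_;
};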
When such mis-speculated younger loads are identified, various operations are performed (e.g., including certain flush operations for a pipeline of a given execution lane). Thereafter, the store data of the store instruction is committed to the memory system. Note that the actual store of data for this STD operation may occur via an eager dispatch mechanism, where store data can be held in a temporary buffer such as a store data buffer. Alternatively, the store of data may occur at retirement to avoid the expense of this extra buffer storage.

Using embodiments, a cost-effective non-blocking store operation may be realized that increases memory-level parallelism (MLP) by decreasing the exposed cache latency of loads, reduces the area of lanes (no SAB or SAB CAM logic per lane), and reduces power consumption. Still further, embodiments shorten the scheduling logic critical path (e.g., no aggregation of SAB CAM comparisons across lanes). And, by eliminating per lane SAB state, additional lane thread context storage may be provided to hide other latencies such as cache misses. In embodiments, the area and energy cost of stores are amortized to achieve nearly constant area and constant energy regardless of the number of lanes being co-scheduled, up until the point that a senior store is ready to retire and is dispatched to the cache subsystem.

In an embodiment, a SPMD processor architecture includes a plurality of execution lanes, each of which executes the same program. In this arrangement, a front end scheduler co-dispatches the same instruction across the lanes in a single program multiple data model. Memory instructions are initially dispatched in-order.

In an embodiment, when a store instruction is to be dispatched to the multiple execution lanes, a symbolic store address is generated in scheduling logic and stored, at store address dispatch (STA), in the unified symbolic store address buffer. Note that in an implementation, STA dispatch does not require the source address register operand values to be ready, as they are only required at STD dispatch when the individual lane addresses are computed.

Referring now to Table 1, illustrated is an example of one possible symbolic store address formation, which when generated may be stored in a unified symbolic store address buffer entry. As shown in Table 1, only 47 bits are used for the symbolic address. Note that generation of a symbolic address may be realized by the concatenation of multiple fields of information, at least some of which may be obtained from the address fields portion of a given load or store instruction. More specifically, in one embodiment in accordance with Table 1, a symbolic address may be generated according to the following symbolic representation: Symbolic Address = Base Register + Index Register ∗ Scale Factor + Displacement, where the operators are concatenation operators. Stated another way, the symbolic address generation results in a bit vector formed of those constituent fields. This resulting value thus corresponds to a beginning address of data to be accessed, where the data to be accessed has a width according to the Operand Size (where the operand size may be one, two, four or eight bytes wide).

Table 1
Field Name        Size (bits)
DISPLACEMENT      32 + 1 valid
BASE REGISTER     4 + 1 valid
INDEX REGISTER    4 + 1 valid
SCALE FACTOR      2
OPERAND SIZE      2
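One way to picture a Table 1 entry is as a packed 47-bit vector held in a 64-bit word. The C++ sketch below concatenates the five fields in the order listed in Table 1; the exact bit positions, struct layout, and function names are illustrative assumptions rather than a layout mandated by this disclosure.

#include <cstdint>

// Hypothetical packed layout of one unified symbolic store address
// buffer entry, following Table 1 (47 bits total, held in a uint64_t):
//   [46:15] displacement        [14]  displacement valid
//   [13:10] base register       [9]   base register valid
//   [8:5]   index register      [4]   index register valid
//   [3:2]   scale factor        [1:0] operand size
struct SymbolicFields {
    uint32_t displacement;  // 32-bit displacement
    bool     disp_valid;
    uint8_t  base_reg;      // 4-bit register identifier
    bool     base_valid;
    uint8_t  index_reg;     // 4-bit register identifier
    bool     index_valid;
    uint8_t  scale_log2;    // 2 bits: scale factor of 1/2/4/8
    uint8_t  size_log2;     // 2 bits: operand size of 1/2/4/8 bytes
};

uint64_t pack_symbolic(const SymbolicFields& f) {
    uint64_t v = 0;
    v |= static_cast<uint64_t>(f.displacement)     << 15;
    v |= static_cast<uint64_t>(f.disp_valid)       << 14;
    v |= static_cast<uint64_t>(f.base_reg & 0xF)   << 10;
    v |= static_cast<uint64_t>(f.base_valid)       << 9;
    v |= static_cast<uint64_t>(f.index_reg & 0xF)  << 5;
    v |= static_cast<uint64_t>(f.index_valid)      << 4;
    v |= static_cast<uint64_t>(f.scale_log2 & 0x3) << 2;
    v |= static_cast<uint64_t>(f.size_log2 & 0x3);
    return v;  // 47 significant bits, matching Table 1's total
}

Under the baseline scheme discussed below, two memory operations "match" exactly when their packed values are bit-identical, which is the conservative equality comparison this encoding enables.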
This minimal storage of 47 bits contrasts with the requirement, absent an embodiment, to store a non-symbolic entry in a per lane buffer. Assuming a 64-bit virtual address, that per lane arrangement would require 32 lanes × a 64-bit virtual address = 2,048 bits of SAB storage.

Note that not all addressing modes may be supported by the symbolic store address entry. In this case, store instructions using addressing modes not represented by the chosen symbolic scheme are considered blocking stores that stall younger loads issuing from that thread until both the store's source address operands and source data operands are ready, the instruction is ready to retire, and the store is senior.

After allocation of a store instruction to the execution lanes and inclusion of a symbolic store address into the unified symbolic store address buffer, younger load instructions may be speculatively dispatched, reducing latency. On dispatch of a younger load instruction, a symbolic load address may be generated and used to access the symbolic store address buffer to speculatively check for older in-flight conflicting stores. If a conflict is detected, the load instruction is suppressed until the conflicting store instruction has completed. Otherwise, if no conflict is detected, the load instruction is speculatively dispatched to the execution lanes. In the execution lanes, each lane operates to compute a per lane load address and perform the load operation from the memory system. In addition, the load address is written into a per lane memory ordering queue (MOQ). Note that this load address is a non-symbolic calculated address, rather than a symbolic address.

At store data dispatch (STD) at retirement, the store instruction then computes its address at each lane, accessing each lane's MOQ (e.g., via a CAM operation) populated by younger loads to detect any mis-speculated younger loads, and then commits the data to the memory system.

In an embodiment, the symbolic comparison between addresses may be performed as a CAM operation by logical concatenation of the bits that form the symbolic address buffer entry. This comparison will only identify a subset of true dependences through memory: those between operations of common size using the same addressing scheme with common base/index registers. It is possible that other true dependences will not be identified and will cause a mis-speculation pipeline clear at senior store dispatch. Note, however, that these mis-speculation events are uncommon in data-parallel kernels (in part due to limited speculation). Further, stalls due to false symbolic aliases are infrequent, since short instruction count loops (where false aliases could be an issue) are unrolled to utilize register address space as accumulators, etc.

Note that embodiments may control operation to not enable speculative load operations in certain cases (e.g., dynamically). For example, speculative load instruction dispatch may be prevented for stack pointer-based memory operations. In this conservative flow, loads that occur in program order after a stack pointer modification instruction (which would complicate or negate symbolic disambiguation) are not speculatively dispatched.
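The scheduler-side flow just described can be sketched in C++ as follows. This is a minimal sketch of the baseline equality-compare scheme; the class, the method names, and the single oldest-first retirement policy are illustrative assumptions, not a definitive implementation of this disclosure.

#include <cstdint>
#include <vector>

// Hypothetical scheduler-side view of the unified symbolic store
// address buffer: one packed 47-bit entry per in-flight store.
class UnifiedSymbolicSab {
public:
    // STA dispatch: record the store's symbolic address once,
    // on behalf of all lanes (no per lane storage).
    void sta_allocate(uint64_t packed_symbolic) {
        in_flight_.push_back(packed_symbolic);
    }

    // STD dispatch at retirement: the oldest store leaves the buffer.
    void std_retire() {
        if (!in_flight_.empty()) in_flight_.erase(in_flight_.begin());
    }

    // Load dispatch: one equality "CAM" over the unified buffer stands
    // in for per lane address comparisons. A hit means a possibly
    // conflicting older store is still in flight.
    bool load_may_conflict(uint64_t packed_load_symbolic) const {
        for (uint64_t s : in_flight_)
            if (s == packed_load_symbolic) return true;
        return false;
    }

private:
    std::vector<uint64_t> in_flight_;  // program order, oldest first
};

// Dispatch policy: suppress the load until the conflicting store
// completes; otherwise speculatively dispatch it to all lanes.
enum class LoadAction { Suppress, SpeculativeDispatch };

LoadAction on_load_dispatch(const UnifiedSymbolicSab& sab,
                            uint64_t packed_load_symbolic) {
    return sab.load_may_conflict(packed_load_symbolic)
               ? LoadAction::Suppress
               : LoadAction::SpeculativeDispatch;
}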
In some embodiments, a more comprehensive symbolic address CAM comparison technique (resulting in less mis-speculation) may track the symbolic affine relationship between architectural registers, interpret the operand size field to compare operations of different size and/or alignment, interpret the scale factor field, etc. As this CAM is instantiated only once instead of once per lane (e.g., 32 lanes in an example architecture), there exists significant area/power budget for this complex comparison, while retaining most of the savings of a baseline symbolic store address buffer.

Embodiments may further mitigate mis-speculation in antagonistic codes. For example, in data-parallel kernels where the symbolic comparison of younger loads to older stores mis-speculates at a high rate and thus hurts performance, the load speculation mechanism may be automatically and temporarily disabled, e.g., until kernel exit or a region exit event, or based on the temporal density of mis-speculations. To this end, embodiments may leverage performance monitoring information, e.g., from one or more performance monitoring units of a processor that track a variety of performance monitoring information, including information regarding a number of mis-speculation events, flushes, or so forth. A selective mitigation scheme also may be employed to disable load speculation only for offending instruction addresses, by use of mechanisms such as a Bloom filter over store or load instruction IPs and/or by marking, in a front-end decoded uop stream buffer, those load instructions in the current loop that should not speculate; a sketch of such a filter follows.
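A minimal C++ sketch of such an IP-indexed Bloom filter appears below, assuming two multiplicative hash functions and a 1,024-bit array; both are arbitrary illustrative choices rather than parameters given in this disclosure.

#include <bitset>
#include <cstddef>
#include <cstdint>

// Hypothetical Bloom filter over load instruction pointers (IPs), used
// to disable speculation only for loads that previously mis-speculated.
class OffendingLoadFilter {
public:
    void mark_misspeculated(uint64_t load_ip) {
        bits_.set(h1(load_ip));
        bits_.set(h2(load_ip));
    }

    // True means "do not speculatively dispatch this load". False
    // positives are possible but cost only performance, not correctness.
    bool should_block(uint64_t load_ip) const {
        return bits_.test(h1(load_ip)) && bits_.test(h2(load_ip));
    }

    void reset() { bits_.reset(); }  // e.g., at kernel exit

private:
    static constexpr std::size_t kBits = 1024;  // filter size (illustrative)
    static std::size_t h1(uint64_t ip) {
        return (ip * 0x9E3779B97F4A7C15ull) >> 54;  // top 10 bits -> [0, 1024)
    }
    static std::size_t h2(uint64_t ip) {
        return ((ip ^ (ip >> 33)) * 0xFF51AFD7ED558CCDull) >> 54;
    }
    std::bitset<kBits> bits_;
};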
FIG. 1A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG. 1B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGs. 1A-B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG. 1A, a processor pipeline 100 includes a fetch stage 102, a length decode stage 104, a decode stage 106, an allocation stage 108, a renaming stage 110, a scheduling (also known as a dispatch or issue) stage 112, a register read/memory read stage 114, an execute stage 116, a write back/memory write stage 118, an exception handling stage 122, and a commit stage 124.

FIG. 1B shows processor core 190 including a front end unit 130 coupled to an execution engine unit 150, and both are coupled to a memory unit 170. The core 190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 130 includes a branch prediction unit 132 coupled to an instruction cache unit 134, which is coupled to an instruction translation lookaside buffer (TLB) 136, which is coupled to an instruction fetch unit 138, which is coupled to a decode unit 140. The decode unit 140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 140 or otherwise within the front end unit 130). The decode unit 140 is coupled to a rename/allocator unit 152 in the execution engine unit 150.

The execution engine unit 150 includes the rename/allocator unit 152 coupled to a retirement unit 154 and a set of one or more scheduler unit(s) 156. The scheduler unit(s) 156 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 156 is coupled to the physical register file(s) unit(s) 158. Each of the physical register file(s) units 158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 158 comprises a vector registers unit and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 158 is overlapped by the retirement unit 154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 154 and the physical register file(s) unit(s) 158 are coupled to the execution cluster(s) 160. The execution cluster(s) 160 includes a set of one or more execution units 162 and a set of one or more memory access units 164. The execution units 162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s) 156, physical register file(s) unit(s) 158, and execution cluster(s) 160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 164 is coupled to the memory unit 170, which includes a data TLB unit 172 coupled to a data cache unit 174 coupled to a level 2 (L2) cache unit 176. In one exemplary embodiment, the memory access units 164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 172 in the memory unit 170. The instruction cache unit 134 is further coupled to the level 2 (L2) cache unit 176 in the memory unit 170. The L2 cache unit 176 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 100 as follows: 1) the instruction fetch unit 138 performs the fetch and length decoding stages 102 and 104; 2) the decode unit 140 performs the decode stage 106; 3) the rename/allocator unit 152 performs the allocation stage 108 and renaming stage 110; 4) the scheduler unit(s) 156 performs the schedule stage 112; 5) the physical register file(s) unit(s) 158 and the memory unit 170 perform the register read/memory read stage 114, and the execution cluster 160 performs the execute stage 116; 6) the memory unit 170 and the physical register file(s) unit(s) 158 perform the write back/memory write stage 118; 7) various units may be involved in the exception handling stage 122; and 8) the retirement unit 154 and the physical register file(s) unit(s) 158 perform the commit stage 124.

The core 190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein.
In one embodiment, the core 190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 134/174 and a shared L2 cache unit 176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

FIGs. 2A-B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG. 2A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 202 and with its local subset of the Level 2 (L2) cache 204, according to embodiments of the invention. In one embodiment, an instruction decoder 200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 206 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design) a scalar unit 208 and a vector unit 210 use separate register sets (respectively, scalar registers 212 and vector registers 214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 206, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 204 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 204. Data read by a processor core is stored in its L2 cache subset 204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data.
The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring datapath is 1024-bits wide per direction in some embodiments.

FIG. 2B is an expanded view of part of the processor core in FIG. 2A according to embodiments of the invention. FIG. 2B includes an L1 data cache 206A, part of the L1 cache 206, as well as more detail regarding the vector unit 210 and the vector registers 214. Specifically, the vector unit 210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 228), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 220, numeric conversion with numeric convert units 222A-B, and replication with replication unit 224 on the memory input.

FIG. 3 is a block diagram of a processor 300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in FIG. 3 illustrate a processor 300 with a single core 302A, a system agent 310, and a set of one or more bus controller units 316, while the optional addition of the dashed lined boxes illustrates an alternative processor 300 with multiple cores 302A-N, a set of one or more integrated memory controller unit(s) 314 in the system agent unit 310, and special purpose logic 308.

Thus, different implementations of the processor 300 may include: 1) a CPU with the special purpose logic 308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 302A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) workloads; and 3) a coprocessor with the cores 302A-N being a large number of general purpose in-order cores. Thus, the processor 300 may be a general purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 300 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache 304A-N within the cores, a set of one or more shared cache units 306, and external memory (not shown) coupled to the set of integrated memory controller units 314. The set of shared cache units 306 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 312 interconnects the special purpose logic 308, the set of shared cache units 306, and the system agent unit 310/integrated memory controller unit(s) 314, alternative embodiments may use any number of well-known techniques for interconnecting such units.
In one embodiment, coherency is maintained between one or more cache units 306 and cores 302A-N.

In some embodiments, one or more of the cores 302A-N are capable of multithreading. The system agent 310 includes those components coordinating and operating cores 302A-N. The system agent unit 310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 302A-N and the special purpose logic 308.

The cores 302A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

FIGs. 4-7 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.

Referring now to FIG. 4, shown is a block diagram of a system 400 in accordance with one embodiment of the present invention. The system 400 may include one or more processors 410, 415, which are coupled to a controller hub 420. In one embodiment, the controller hub 420 includes a graphics memory controller hub (GMCH) 490 and an Input/Output Hub (IOH) 450 (which may be on separate chips); the GMCH 490 includes memory and graphics controllers to which are coupled memory 440 and a coprocessor 445; and the IOH 450 couples input/output (I/O) devices 460 to the GMCH 490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 440 and the coprocessor 445 are coupled directly to the processor 410, and the controller hub 420 is in a single chip with the IOH 450.

The optional nature of additional processors 415 is denoted in FIG. 4 with broken lines. Each processor 410, 415 may include one or more of the processing cores described herein and may be some version of the processor 300.

The memory 440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 420 communicates with the processor(s) 410, 415 via a multidrop bus, such as a frontside bus (FSB), point-to-point interface, or similar connection 495.

In one embodiment, the coprocessor 445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 420 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 410, 415 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 410 executes instructions that control data processing operations of a general type.
Embedded within the instructions may be coprocessor instructions. The processor 410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 445. Accordingly, the processor 410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect to coprocessor 445. Coprocessor(s) 445 accept and execute the received coprocessor instructions.

Referring now to FIG. 5, shown is a block diagram of a first more specific exemplary system 500 in accordance with an embodiment of the present invention. As shown in FIG. 5, multiprocessor system 500 is a point-to-point interconnect system, and includes a first processor 570 and a second processor 580 coupled via a point-to-point interconnect 550. Each of processors 570 and 580 may be some version of the processor 300. In one embodiment of the invention, processors 570 and 580 are respectively processors 410 and 415, while coprocessor 538 is coprocessor 445. In another embodiment, processors 570 and 580 are respectively processor 410 and coprocessor 445.

Processors 570 and 580 are shown including integrated memory controller (IMC) units 572 and 582, respectively. Processor 570 also includes as part of its bus controller units point-to-point (P-P) interfaces 576 and 578; similarly, second processor 580 includes P-P interfaces 586 and 588. Processors 570, 580 may exchange information via a point-to-point (P-P) interface 550 using P-P interface circuits 578, 588. As shown in FIG. 5, IMCs 572 and 582 couple the processors to respective memories, namely a memory 532 and a memory 534, which may be portions of main memory locally attached to the respective processors.

Processors 570, 580 may each exchange information with a chipset 590 via individual P-P interfaces 552, 554 using point to point interface circuits 576, 594, 586, 598. Chipset 590 may optionally exchange information with the coprocessor 538 via a high performance interface 592. In one embodiment, the coprocessor 538 is a special-purpose processor, such as, for example, a high throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 590 may be coupled to a first bus 516 via an interface 596. In one embodiment, first bus 516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG. 5, various I/O devices 514 may be coupled to first bus 516, along with a bus bridge 518 which couples first bus 516 to a second bus 520. In one embodiment, one or more additional processor(s) 515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 516. In one embodiment, second bus 520 may be a low pin count (LPC) bus.
Various devices may be coupled to second bus 520 including, for example, a keyboard and/or mouse 522, communication devices 527 and a storage unit 528 such as a disk drive or other mass storage device which may include instructions/code and data 530, in one embodiment. Further, an audio I/O 524 may be coupled to the second bus 520. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 5, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 6, shown is a block diagram of a second more specific exemplary system 600 in accordance with an embodiment of the present invention. Like elements in FIGs. 5 and 6 bear like reference numerals, and certain aspects of FIG. 5 have been omitted from FIG. 6 in order to avoid obscuring other aspects of FIG. 6.

FIG. 6 illustrates that the processors 570, 580 may include integrated memory and I/O control logic ("CL") 672 and 682, respectively. Thus, the CL 672, 682 include integrated memory controller units and include I/O control logic. FIG. 6 illustrates that not only are the memories 532, 534 coupled to the CL 672, 682, but also that I/O devices 614 are coupled to the control logic 672, 682. Legacy I/O devices 615 are coupled to the chipset 590.

Referring now to FIG. 7, shown is a block diagram of a SoC 700 in accordance with an embodiment of the present invention. Similar elements in FIG. 3 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG. 7, an interconnect unit(s) 702 is coupled to: an application processor 710 which includes a set of one or more cores 302A-N, cache units 304A-N, and shared cache unit(s) 306; a system agent unit 310; a bus controller unit(s) 316; an integrated memory controller unit(s) 314; a set of one or more coprocessors 720 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 730; a direct memory access (DMA) unit 732; and a display unit 740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 720 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 530 illustrated in FIG. 5, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired.
In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks; any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks; semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), and phase change memory (PCM); magnetic or optical cards; or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG. 8 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. FIG. 8 shows that a program in a high level language 802 may be compiled using a first compiler 804 to generate a first binary code (e.g., x86) 806 that may be natively executed by a processor with at least one first instruction set core 816.
In some embodiments, the processor with at least one first instruction set core 816 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel® processor with at least one x86 instruction set core. The first compiler 804 represents a compiler that is operable to generate binary code of the first instruction set 806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one first instruction set core 816. Similarly, FIG. 8 shows that the program in the high level language 802 may be compiled using an alternative instruction set compiler 808 to generate alternative instruction set binary code 810 that may be natively executed by a processor without at least one first instruction set core 814 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 812 is used to convert the first binary code 806 into code that may be natively executed by the processor without a first instruction set core 814. This converted code is not likely to be the same as the alternative instruction set binary code 810, because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have a first instruction set processor or core to execute the first binary code 806.

Instruction set architecture (ISA) extensions for accelerating data parallel workloads require explicit vector word lengths encoded in the machine representation. One embodiment of the invention extends an existing ISA (e.g., an x86 ISA) with a scalar microthreaded instruction processing architecture. In particular, a data parallel single program multiple data (SPMD) microarchitecture may be used to provide for scalable execution datapath sizes beyond the limitations of existing instructions, achieving greater instruction execution throughput with reduced energy consumption.

Current CPU architectures have used multiple generations of sub-word single instruction multiple data (SIMD) extensions for accelerating data parallel operations (e.g., including SSE2, SSE4, AVX, and AVX-512 in the x86 architecture). Each successive generation extends the state and instruction set of the CPU, creating legacy performance upside issues and requiring recompilation of old codes.

Graphics processing units (GPUs) have implemented SPMD architectures using hardware divergence stacks to handle divergent control flow cases.
The hardware divergence stack is manipulated via explicit instructions and/or control codes as statically implemented by the finalizer agent for existing GPUs.

One embodiment of the invention includes a SPMD data parallel execution engine that uses a scalar microthread abstraction, similar to programming an array of scalar processors with no architected divergence instructions or control codes. As discussed below, these embodiments are particularly suitable for implementation in an existing ISA which includes a predefined Application Binary Interface (ABI).

FIG. 9 illustrates one example of a data parallel cluster (DPC) 900 which may be integrated within a microarchitecture of a processor and/or may be used as an acceleration engine to execute a particular set of instructions/uops 914. In one embodiment, front end circuitry 907 comprises a gang scheduler 901 to schedule ganged execution of scalar microthreads within a plurality of scalar lanes such as lane 910. The number of scalar lanes in the data parallel cluster 900 can be varied without impacting software. In the illustrated implementation, 16 lanes are shown; however, any number of lanes may be used, depending on the implementation. In one embodiment, 32 lanes may be used.

In one embodiment, the gang scheduler 901 schedules the same instruction on multiple active lanes. A microarchitectural mask 913 (e.g., read from a mask register) disables those lanes that are not required to be active. In one embodiment, the gang scheduler 901 reads the mask values to determine which lanes are to be active for which instructions/uops.

In one embodiment, an instruction decode queue (IDQ) 905 within the front end 907 stores microoperations (uops) of decoded macroinstructions which are added to the IDQ in program order (e.g., in a FIFO implementation). As mentioned, the IDQ 905 may be partitioned for multiple gangs of operation.

Various arrangements for coupling the DPC 900 to a host processor are described below. In an implementation in which instructions are decoded by a host processor, the DPC 900 does not include a decoder to generate the uops prior to execution on the lanes. Alternatively, in an implementation in which macroinstructions are forwarded from a host processor or read directly from memory by the DPC, the front end of the DPC (e.g., the gang scheduler 901) includes a decoder to generate sequences of uops which are then stored in the IDQ prior to execution.

Each lane in the data parallel cluster 900 is coupled to the IDQ 905 from which it receives uops to be executed in parallel. In one embodiment, each lane includes an integer register file (IRF) 920 and a floating-point register file (FRF) 930 for storing integer and floating-point operands, respectively. Each lane also includes a tensor arithmetic logic unit (ALU) 940 to perform adaptive lane-wise tensor processing (as described in greater detail below), a per-microthread scalar ALU 950, and a per-microthread, independent address generation unit 960. In one embodiment, the independent AGU 960 provides high throughput address generation for codes with gather/scatter memory access patterns. Other independent functional units may also be allocated to each lane, as in the jump execution unit example following the sketch below.
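For illustration only, the following is a minimal, hypothetical Python sketch of the ganged execution model of FIG. 9: a scheduler issues one uop to every lane, and a per-gang mask (cf. microarchitectural mask 913) suppresses inactive lanes. All names (Lane, GangScheduler, the simple uop format) are invented for this sketch and are not taken from the embodiments above.

    # Hypothetical model of ganged execution across scalar lanes (names invented).
    class Lane:
        def __init__(self):
            self.irf = [0] * 16  # simplified integer register file (cf. IRF 920)

        def execute(self, uop):
            # Each lane runs the same uop against its own register state.
            op, dst, src = uop
            if op == "addi":
                self.irf[dst] += src

    class GangScheduler:
        def __init__(self, num_lanes=16):
            self.lanes = [Lane() for _ in range(num_lanes)]

        def issue(self, uop, mask):
            # Issue the same uop to all lanes; a cleared mask bit disables a lane.
            for i, lane in enumerate(self.lanes):
                if (mask >> i) & 1:
                    lane.execute(uop)

    sched = GangScheduler()
    sched.issue(("addi", 0, 5), mask=0xFFFF)   # all 16 lanes active
    sched.issue(("addi", 0, 1), mask=0x00FF)   # only lanes 0-7 active
    assert sched.lanes[0].irf[0] == 6 and sched.lanes[15].irf[0] == 5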
For example, in one embodiment, each lane is equipped with an independent jump execution unit (JEU) which allows the lanes to diverge and interact with the microarchitectural mask to provide the illusion of independent threads.

The illustrated architecture also includes a shared data cache 980 to store local copies of data for each of the lanes. In one embodiment, if the data parallel cluster 900 is integrated in a chip or system with a host processor, it participates in the cache coherency protocol implemented by the host processor. A page miss handler 984 performs page walk operations to translate virtual addresses to physical (system memory) addresses, and a data translation lookaside buffer (DTLB) 985 caches the virtual-to-physical translations.

As illustrated in FIGS. 10A-C, the data parallel cluster 900 may be integrated in a computer system in a variety of ways. In FIG. 10A, the DPC 900 is integral to a core 1001a; in FIG. 10B, the DPC 900 is on the same chip and shared by a plurality of cores; and in FIG. 10C, the DPC 900 is on a different chip (but potentially in the same package) from the cores 1001a-b.

Turning first to FIG. 10A, the illustrated architectures include a core region 1001 and a shared, or "uncore," region 1010. The shared region 1010 includes data structures and circuitry shared by all or a subset of the cores 1001a-b. In the illustrated embodiment, the plurality of cores 1001a-b are simultaneous multithreaded cores capable of concurrently executing multiple instruction streams or threads. Although only two cores 1001a-b are illustrated in FIG. 10A for simplicity, it will be appreciated that the core region 1001 may include any number of cores, each of which may include the same architecture as shown for core 1001a. Another embodiment includes heterogeneous cores which may have different instruction set architectures and/or different power and performance characteristics (e.g., low power cores combined with high power/performance cores).

The various components illustrated in FIG. 10A may be implemented in the same manner as corresponding components in FIGS. 1-7. In addition, core 1001a may include the components of core 190 shown in FIG. 1B, and may include any of the other processor/core components described herein (e.g., FIGS. 2A-B, FIG. 3, etc.).

Each of the cores 1001a-b includes instruction pipeline components for performing simultaneous execution of instruction streams, including instruction fetch circuitry 1018 which fetches instructions from system memory 1060 or the instruction cache 1010, and a decoder 1009 to decode the instructions. Execution circuitry 1008 executes the decoded instructions to perform the underlying operations, as specified by the instruction operands, opcodes, and any immediate values.

In the illustrated embodiment, the decoder 1009 includes DPC instruction decode circuitry 1099 to decode certain instructions into uops for execution by the DPC 900 (integrated within the execution circuitry 1008 in this embodiment). Although illustrated as separate blocks in FIG. 10A, the DPC decode circuitry 1099 and DPC 900 may be distributed as functional circuits spread throughout the decoder 1009 and execution circuitry 1008.

In an alternate embodiment, illustrated in FIG. 10B, the DPC 900 is tightly coupled to the processor cores 1001a-b over a cache coherent interconnect (e.g., in which a data cache participates in the same set of cache coherent memory transactions as the cores).
The DPC 900 is configured as a peer of the cores, participating in the same set of cache coherent memory transactions as the cores. In this embodiment, the decoders 1009 decode the instructions which are to be executed by the DPC 900, and the resulting microoperations are passed for execution to the DPC 900 over the interconnect 1006. In another embodiment, the DPC 900 includes its own fetch and decode circuitry to fetch and decode instructions, respectively, from a particular region of system memory 1060. In either implementation, after executing the instructions, the DPC 900 may store the results to the region in system memory 1060 to be accessed by the cores 1001a-b.

FIG. 10C illustrates another embodiment in which the DPC is on a different chip from the cores 1001a-b but coupled to the cores over a cache coherent interface 1096. In one embodiment, the cache coherent interface 1096 uses packet-based transactions to ensure that the data cache 980 of the DPC 900 is coherent with the cache hierarchy of the cores 1001a-b.

Also illustrated in FIGS. 10A-C are general purpose registers (GPRs) 1018d, a set of vector/tile registers 1018b, a set of mask registers 1018a (which may include tile mask registers as described below), and a set of control registers 1018c. In one embodiment, multiple vector data elements are packed into each vector register, which may have a 512 bit width for storing two 256 bit values, four 128 bit values, eight 64 bit values, sixteen 32 bit values, etc. Groups of vector registers may be combined to form the tile registers described herein. Alternatively, a separate set of 2-D tile registers may be used. However, the underlying principles of the invention are not limited to any particular size/type of vector/tile data. In one embodiment, the mask registers 1018a include eight 64-bit operand mask registers used for performing bit masking operations on the values stored in the vector registers 1018b (e.g., implemented as mask registers k0-k7 described above). However, the underlying principles of the invention are not limited to any particular mask register size/type. A set of one or more mask registers 1018a may implement the tile mask registers described herein.

The control registers 1018c store various types of control bits or "flags" which are used by executing instructions to determine the current state of the processor core 1001a. By way of example, and not limitation, in an x86 architecture, the control registers include the EFLAGS register.
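As a quick illustration of the element packing and bit masking just described (an illustrative calculation only, assuming the 512-bit register width and 64-bit mask registers mentioned above), the number of packed elements is simply the register width divided by the element width:

    # Packed-element counts for a 512-bit vector register (illustrative only).
    REG_BITS = 512
    for elem_bits in (256, 128, 64, 32):
        print(elem_bits, "-bit elements:", REG_BITS // elem_bits)  # 2, 4, 8, 16

    # A 64-bit operand mask (cf. mask registers k0-k7) with bit i set enables
    # element i; e.g., enable only the low 8 of 16 packed 32-bit elements:
    mask = (1 << 8) - 1
    active = [i for i in range(16) if (mask >> i) & 1]
    assert active == list(range(8))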
An interconnect 1006 such as an in-die interconnect (IDI) or memory fabric implementing an IDI/coherence protocol communicatively couples the cores 1001a-b (and potentially the DPC 900) to one another and to various components within the shared region 1010. For example, the interconnect 1006 couples core 1001a via interface 1007 to a level 2 (L2) cache 1013 and an integrated memory controller 1030. In addition, the interconnect 1006 may be used to couple the cores 1001a-b to the DPC 900.

The integrated memory controller 1030 provides access to a system memory 1060. One or more input/output (I/O) circuits (not shown) such as PCI express circuitry may also be included in the shared region 1010.

An instruction pointer register 1012 stores an instruction pointer address identifying the next instruction to be fetched, decoded, and executed. Instructions may be fetched or prefetched from system memory 1060 and/or one or more shared cache levels such as the L2 cache 1013, the shared L3 cache 1020, or the L1 instruction cache 1010.

In addition, an L1 data cache 1002 stores data loaded from system memory 1060 and/or retrieved from one of the other cache levels 1013, 1020 which cache both instructions and data. An instruction TLB (ITLB) 1011 stores virtual address to physical address translations for the instructions fetched by the fetch circuitry 1018, and a data TLB (DTLB) 1003 stores virtual-to-physical address translations for the data processed by the decode circuitry 1009 and execution circuitry 1008.

A branch prediction unit 1021 speculatively predicts instruction branch addresses, and branch target buffers (BTBs) 1022 store branch addresses and target addresses. In one embodiment, a branch history table (not shown) or other data structure is maintained and updated for each branch prediction/misprediction and is used by the branch prediction unit 1021 to make subsequent branch predictions.

Note that FIGS. 10A-C are not intended to provide a comprehensive view of all circuitry and interconnects employed within a processor. Rather, components which are not pertinent to the embodiments of the invention are not shown. Conversely, some components are shown merely for the purpose of providing an example architecture in which embodiments of the invention may be implemented.

Returning to FIG. 9, the processing cluster 900 is arranged into a plurality of lanes 910 that encapsulate execution resources (e.g., an IRF 920, an FRF 930, a tensor ALU 940, an ALU 950, and an AGU 960) for several microthreads. Multiple threads share a given lane's execution resources in order to tolerate pipeline and memory latency. The per-microthread state for one implementation is a subset of a modern processor state.

FIG. 11 illustrates one example of a microthread state 1100 which is a subset of a scalar x86 state. The microthread state 1100 includes state from general purpose registers 1101 (e.g., sixteen 64-bit registers), XMM registers 1102 (e.g., thirty-two 64-bit registers), an RFLAGS register 1104, an instruction pointer register 1105, segment selectors 1106, and the MXCSR register 1103. Using a subset of the scalar x86 state is convenient for programmers, is software compatible with existing x86 codes, and requires minimal changes to current compilers and software toolchains. The lanes of this embodiment execute scalar, user-level instructions. Of course, the underlying principles of the invention are not limited to this particular arrangement.
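Purely as an illustration of the per-microthread state enumerated for FIG. 11, the following hypothetical Python sketch groups that state into a single record; the field names and counts mirror the description above, but the class itself is invented here and is not part of the embodiments.

    # Invented container for the per-microthread state of FIG. 11.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MicrothreadState:
        gprs: List[int] = field(default_factory=lambda: [0] * 16)      # GPRs 1101
        xmm: List[int] = field(default_factory=lambda: [0] * 32)       # XMM regs 1102
        rflags: int = 0          # RFLAGS 1104
        rip: int = 0             # instruction pointer 1105
        segments: List[int] = field(default_factory=lambda: [0] * 6)   # selectors 1106
        mxcsr: int = 0x1F80      # MXCSR 1103 (x86 reset default)

    # Each scalar lane would hold several such records, one per microthread.
    microthreads = [MicrothreadState() for _ in range(4)]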
In one embodiment, illustrated in FIG. 12, multiple data parallel clusters 900A-D are collocated into a larger unit of scaling referred to as a "DPC tile" 1200. The various data parallel clusters 900A-D may be coupled to one another over a high speed interconnect or fabric. The DPC tile 1200 may be integrated within a processor or computer system using any of the microarchitectural implementations described above with respect to the single DPC 900 in FIGS. 10A-C (i.e., the DPC tile 1200 may be substituted for the DPC 900 in these figures).

The DPC tile 1200 includes a shared cache 1201 and relies on the existing fetch 1018 and decoder 1009 of one or more cores. A prefetcher 1202 prefetches data from system memory and/or the cache hierarchy in anticipation of uops executed on the data parallel clusters 900A-D. Although not illustrated, the shared cache 1201 may be coupled between the data parallel clusters 900A-D, and each DPC 900A-D may be coupled to the on-chip interconnection network (e.g., IDI).

Sharing the execution resources of a processor across a whole cluster amortizes the relatively complex decode process performed by decoder 1009. One embodiment of the invention can support hundreds of microthreads executing instructions using a tiny fraction of the fetch 1018 and decoder 1009 resources of a conventional processor design.

Referring now to FIG. 13, shown is a block diagram of a portion of a processor in accordance with an embodiment. More specifically, as shown in FIG. 13, the portion of a processor 1300 shown is a SPMD processor. As illustrated, a scheduler 1310 receives incoming instructions and stores information associated with the instructions in entries 1314. Scheduler 1310, upon dispatch of a store instruction, generates a symbolic store address for inclusion in a unified symbolic store address buffer 1320. As seen, each entry 1314 may include various information associated with a given instruction, including an instruction identifier, identifiers for various source and/or destination operands of the instruction, and metadata associated with the instruction, such as ready indicators to indicate whether the corresponding operands are available for execution.

Understand that this scheduler, in an embodiment, may be implemented with a reservation station or other scheduling logic that tracks instructions or uops and identifies operands for these instructions and their readiness. In some cases, scheduler 1310 further may check for conflicts between instructions, e.g., via a control circuit 1312. When a given instruction is ready for execution and no conflict is detected, the instruction may be dispatched from scheduler 1310 to a plurality of execution lanes 1330-0 through 1330-n.

As further illustrated in FIG. 13, unified symbolic store address buffer 1320 includes a plurality of entries 1322-0 through 1322-x. In an embodiment, each entry 1322 may store a symbolic store address as generated by scheduler 1310. In some cases, information present within a given entry 1314 of scheduler 1310 may be used to generate this symbolic address. In one embodiment, this symbolic address generation may obtain fields present in a reservation station entry and copy them into symbolic store address buffer 1320. Of course, in other embodiments additional information such as a base index and register affine relationships may be stored in entries 1322 of symbolic store address buffer 1320.

As illustrated at the high level of FIG. 13, each execution lane 1330 may include one or more memory execution units 1332 and one or more arithmetic logic units (ALUs) 1334. In addition, for load instructions handled within memory execution unit 1332, a corresponding load address may be generated and stored in a memory order queue 1336. As further shown, memory execution units 1332 and ALUs 1334 may use information stored in a register file 1338. Results of execution of instructions may be provided to a retirement circuit 1340, which may operate to retire an instruction when the instruction has been appropriately executed in each execution lane 1330. Understand that while shown at this high level in the embodiment of FIG. 13, many variations and alternatives are possible.

Referring now to FIG. 14, shown is a flow diagram of a method in accordance with one embodiment of the present invention.
More specifically, method 1400 of FIG. 14 is a method for generating a symbolic address for a store instruction and inserting the symbolic address in a unified symbolic store address buffer at store instruction dispatch. As such, method 1400 may be performed by scheduler circuitry such as may be implemented in hardware circuitry, firmware, software and/or combinations thereof. In a particular embodiment, scheduler circuitry of an SPMD processor may, in response to receipt of a store instruction, generate a symbolic address for the instruction, insert it in the unified symbolic store address buffer, and dispatch the store instruction to multiple execution lanes for execution.

As illustrated, method 1400 begins at block 1410, where an SPMD store instruction is received in the scheduler. In an embodiment, the scheduler may include a reservation station or other scheduler circuitry to track incoming instructions and schedule them for dispatch to the execution lanes. At block 1420, an entry is inserted in the scheduler for this SPMD store instruction. Thereafter, at block 1430, a symbolic address, namely a symbolic store address, is generated for this store instruction. More specifically, this symbolic store address may be generated when the SPMD store instruction is the next instruction to be dispatched. As described herein, this symbolic address may be based at least in part on a logical concatenation of multiple fields or constituent components based on instruction information. In some embodiments, information present in a reservation station entry for the store instruction may be used to generate the symbolic store address. Next, at block 1440 the symbolic address may be stored in an entry of a unified symbolic store address buffer. With this arrangement, the need for per lane store address buffers is avoided, and a concomitant reduction in address comparison circuitry to perform per lane address comparisons for succeeding load instructions is realized.

Still with reference to FIG. 14, at diamond 1450 it is determined whether the SPMD store instruction is the senior store instruction within the pipeline, such that it is ready to be dispatched. When it is determined that the SPMD store instruction is the senior store, control passes to block 1460, where the store instruction is dispatched to the execution lanes for execution. Understand that each execution lane may, based upon its internal state (e.g., register contents), generate different per lane store addresses to access different memory locations for storage of corresponding store data. Also understand that at this point in the execution, the store data itself need not be available, nor need the source address register operands for calculating the address be ready. Understand that while shown at this high level in the embodiment of FIG. 14, many variations and alternatives are possible.
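The following hypothetical Python sketch illustrates one way the symbolic store address of blocks 1430-1440 could be modeled as a concatenation of address fields, with the unified buffer holding one entry per in-flight store. The field set (displacement, base register, index register, scale factor, operand size) follows the examples given later in this description; the type and function names are invented for the sketch.

    # Invented model: a symbolic address is just the tuple of address fields,
    # with no per lane register values resolved yet.
    from collections import namedtuple

    SymbolicAddr = namedtuple(
        "SymbolicAddr", ["disp", "base_reg", "index_reg", "scale", "op_size"])

    unified_store_buffer = []  # cf. unified symbolic store address buffer 1320

    def dispatch_store(disp, base_reg, index_reg, scale, op_size):
        # Blocks 1430/1440: generate the symbolic address and enqueue it.
        sym = SymbolicAddr(disp, base_reg, index_reg, scale, op_size)
        unified_store_buffer.append(sym)
        return sym

    # Example: store to [rbx + rsi*4 + 0x10], 8-byte operand.
    dispatch_store(disp=0x10, base_reg="rbx", index_reg="rsi", scale=4, op_size=8)
    assert len(unified_store_buffer) == 1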
Referring now to FIG. 15, shown is a flow diagram of a method in accordance with another embodiment of the present invention. More specifically, method 1500 is a method for accessing a unified symbolic store address buffer on dispatch of a younger load instruction. As such, method 1500 may be performed by scheduler circuitry such as may be implemented in hardware circuitry, firmware, software and/or combinations thereof. In a particular embodiment, scheduler circuitry of an SPMD processor may, in response to receipt of a load instruction, generate a symbolic address for the instruction and access the unified symbolic store address buffer to identify potential conflicts between this load instruction and one or more in-flight store instructions. Thus, as described herein, such operation may detect conflicts between an older store instruction that has not yet retired and a younger load instruction dependent upon such older store instruction.

As illustrated, method 1500 begins at block 1510 by dispatching an SPMD load instruction to a scheduler. At block 1520 a symbolic load address for this load instruction is generated. Note that this symbolic address generation may occur according to the same symbolic mechanism used for generating symbolic store addresses for store instructions. Thereafter, at block 1530 the unified symbolic store address buffer is accessed using this symbolic load address. In this manner, an address comparison operation using only a single load address is performed for a given load instruction, rather than requiring a per lane address comparison in the absence of the symbolic address mechanisms described herein. As such, chip area and power consumption may be reduced dramatically.

Based upon the address comparison at block 1530, it is determined whether a conflict exists (diamond 1540). That is, if the symbolic load address matches one or more entries in the unified symbolic store address buffer, this indicates a conflict, in that the load instruction is dependent upon one or more earlier store instructions. In this situation, control passes to block 1550 where the load instruction may be stalled. More specifically, this load instruction may remain stalled in the scheduler until the store instruction of the conflicting entry retires. In other situations, other stall handling techniques may be performed such that the load instruction is stalled until other ordering requirements are met, such as ensuring that all earlier store operations have retired or so forth, depending upon the desired implementation and the aggressiveness or conservativeness desired with regard to potential mis-speculations.

Still with reference to FIG. 15, if instead it is determined that no conflict is detected between the load instruction and older store instructions, control passes to block 1560 where the load instruction may be dispatched to the multiple execution lanes for execution.

Note that FIG. 15 further shows operations performed on a per lane basis during execution of the load instruction. Specifically, at block 1570 in each execution lane a load address is computed based on the symbolic load address. That is, in each execution lane and based upon its own register state, a given load address can be computed. Next at block 1580, the load instruction can therefore be speculatively executed in each lane using the per lane load address (assuming the load instruction is not previously stalled at block 1550). Finally, at block 1590, each lane may write its load address into a corresponding memory order queue of the execution lane. With this memory order queue arrangement on a per lane basis, it can be determined during store instruction retirement (as described further below) whether a conflict exists between a store instruction set to retire and one or more speculatively executed load instructions following that store instruction in program order.
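Continuing the same hypothetical sketch (using the SymbolicAddr type and unified_store_buffer defined above; the memory order queue entry format is detailed just below), the load-dispatch check of diamond 1540 reduces to a single symbolic comparison against the unified buffer, and blocks 1570-1590 to a per lane address computation and bookkeeping step:

    # Diamond 1540: a symbolic match means the load may depend on an
    # in-flight store, so it is stalled rather than speculatively dispatched.
    def load_conflicts(sym_load, buffer):
        return any(entry == sym_load for entry in buffer)

    # Blocks 1570-1590 (per lane): resolve the symbolic address against the
    # lane's own register state and record it in the lane memory order queue.
    def execute_load_in_lane(sym, lane_regs, lane_moq, rob_id):
        addr = (lane_regs[sym.base_reg]
                + lane_regs.get(sym.index_reg, 0) * sym.scale
                + sym.disp)
        lane_moq.append((addr, rob_id))  # one entry per speculative load
        return addr

    lane_regs = {"rbx": 0x1000, "rsi": 2}
    lane_moq = []
    sym = SymbolicAddr(disp=0x10, base_reg="rbx", index_reg="rsi", scale=4, op_size=8)
    assert load_conflicts(sym, unified_store_buffer)          # matches the store above
    execute_load_in_lane(sym, lane_regs, lane_moq, rob_id=7)  # if it were dispatched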
In an embodiment, this per lane memory order queue may include a plurality of entries, each to store, for a given load instruction, the load address computed in the execution lane and an identifier, e.g., of a reorder buffer entry corresponding to the given load instruction. Understand that while shown at this high level in the embodiment of FIG. 15, many variations and alternatives are possible.

Referring now to FIG. 16, shown is a flow diagram of a method in accordance with yet another embodiment of the present invention. More specifically, method 1600 is a method for retiring store instructions that take advantage of the eager dispatch of dependent younger load instructions as described herein. In an embodiment, method 1600 may be performed by retirement circuitry and/or within the execution lanes themselves, such as may be implemented in hardware circuitry, firmware, software and/or combinations thereof.

As illustrated, method 1600 begins by selecting a store instruction for retirement (block 1610). In an embodiment, a store instruction may be selected when it is the top entry in a reorder buffer or other retirement structure. Understand that a store instruction is ready for retirement when address and data operands are ready across all active lanes and, optionally, data has moved to a store data buffer. Next at block 1620 an address for this store instruction is computed for each execution lane. Note that it is at this point, namely at store data dispatch, that this address computation occurs, which advantageously enables efficient operation, since source address register operand values do not need to be present until this point, rather than requiring such values to be available at dispatch of the store instruction.

Still with reference to FIG. 16, next at block 1630 the per execution lane memory order queue can be accessed using this computed address. The address comparison with this computed store address in the memory order queue access can be used to identify any speculatively executed load instructions that are dependent upon this ready to retire store instruction. Based on this memory order queue access, it is determined at diamond 1640 whether there is a conflict with a younger load instruction. Note that it is possible for certain execution lanes to have no conflict while other lanes have a conflict. If there is no conflict (namely, where there is a miss between the store address and the load addresses present in the memory order queue), control passes to block 1660 where the store data may be committed to memory and thus the store instruction retires. It is at this point that the symbolic store address for this store instruction is dequeued from the unified symbolic store address buffer. Thus at this point, after the store instruction has been executed by all execution lanes (and thus not occurring at store instruction dispatch), the entry in the unified symbolic store address buffer is dequeued once the store instruction validly retires (block 1670).

Still with reference to FIG. 16, if instead it is determined that there is a conflict with a younger load instruction, control passes from diamond 1640 to block 1650 where a mis-prediction of the younger load instruction is thus identified, and at least part of the pipeline of the execution lane may be cleared. To this end, various mechanisms to handle the misprediction or mis-speculation may occur, as in the flush policies discussed following the sketch below.
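Continuing the same hypothetical sketch one step further, the retirement-time check of blocks 1620-1670 computes the real per lane store address, scans each lane's memory order queue for matching younger loads, and dequeues the symbolic entry only on valid retirement:

    # Blocks 1620-1670 (invented model): per lane retirement-time check.
    def retire_store(sym, lanes, buffer):
        # lanes: list of (lane_regs, lane_moq) pairs for the active lanes.
        mispredicted = []
        for lane_regs, lane_moq in lanes:
            store_addr = (lane_regs[sym.base_reg]
                          + lane_regs.get(sym.index_reg, 0) * sym.scale
                          + sym.disp)
            # Diamond 1640: any younger load that read this address was
            # speculatively executed too early and must be flushed.
            hits = [rob_id for addr, rob_id in lane_moq if addr == store_addr]
            mispredicted.extend(hits)
        buffer.remove(sym)    # block 1670: dequeue on valid retirement
        return mispredicted   # empty list means a clean retirement (block 1660)

    flushed = retire_store(sym, [(lane_regs, lane_moq)], unified_store_buffer)
    assert flushed == [7]     # the load recorded above conflicts in this lane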
For example, in a conservative approach, all younger load instructions (namely, those load instructions younger than the ready to retire store instruction) may be flushed from the pipeline. In other cases, only those load instructions from the identified misprediction and younger may be flushed. In any event, appropriate flush operations to flush some or all of the execution lane pipeline may occur. Thereafter, control passes to block 1660, discussed above, where the store data for the store instruction may commit to memory. Understand that while shown at this high level in the embodiment of FIG. 16, many variations and alternatives are possible.

The following examples pertain to further embodiments.

In one example, an apparatus comprises: a plurality of execution lanes to perform parallel execution of instructions; and a unified symbolic store address buffer coupled to the plurality of execution lanes, the unified symbolic store address buffer comprising a plurality of entries each to store a symbolic store address for a store instruction to be executed by at least some of the plurality of execution lanes.

In an example, the apparatus further includes a scheduler to generate the symbolic store address based on at least some address fields of the store instruction, the symbolic store address comprising a plurality of fields including a displacement field, a base register field, and an index register field.

In an example, the plurality of fields further includes a scale factor field and an operand size field.

In an example, the scheduler is, for a load instruction following the store instruction in program order, to generate a symbolic load address for the load instruction based on at least some address fields of the load instruction and access the unified symbolic store address buffer based on the symbolic load address, to determine whether the load instruction conflicts with an in-flight store instruction.

In an example, in response to a determination that the load instruction conflicts with the in-flight store instruction, the scheduler is to suppress the load instruction until the in-flight store instruction completes.

In an example, in response to a determination that the load instruction does not conflict with the in-flight store instruction, the scheduler is to speculatively dispatch the load instruction to the plurality of execution lanes.

In an example, in response to the speculative dispatch of the load instruction, at least some of the plurality of execution lanes are to compute a lane load address for the load instruction, execute the load instruction, and store the lane load address into a memory order queue of the execution lane.

In an example, at retirement of the store instruction, each of the plurality of execution lanes is to compute a lane store address for the store instruction and determine, based at least in part on contents of the memory order queue, whether one or more load instructions conflict with the store instruction.

In an example, in response to a determination of the conflict in a first execution lane, the first execution lane is to flush the one or more load instructions from the first execution lane.

In an example, the apparatus is to dynamically disable speculative execution of load instructions based at least in part on a performance metric of an application in execution.

In an example, the performance metric comprises a mis-speculation rate.

In another example, a method comprises: receiving, in a scheduler of a processor, an SPMD store instruction; generating a symbolic address for the SPMD store instruction;
storing the symbolic address for the SPMD store instruction in an entry of a unified symbolic store address buffer; dispatching the SPMD store instruction to a plurality of execution lanes of the processor; and speculatively dispatching a load instruction following the SPMD store instruction in program order to the plurality of execution lanes based at least in part on access to the unified symbolic store address buffer with a symbolic address for the load instruction.

In an example, the method further comprises preventing the load instruction from being speculatively dispatched when the symbolic address for the load instruction matches an entry in the unified symbolic store address buffer.

In an example, the method further comprises generating the symbolic address for the SPMD store instruction based on an address of the SPMD store instruction, the symbolic address for the SPMD store instruction comprising a plurality of fields including a displacement field, a base register field, an index register field, a scale factor field and an operand size field.

In an example, the method further comprises, at retirement of the SPMD store instruction: computing, in each of the plurality of execution lanes, a lane store address for the SPMD store instruction; and accessing a memory order queue of the corresponding execution lane using the lane store address to determine whether a conflict exists between the SPMD store instruction and one or more speculatively executed load instructions following the SPMD store instruction in program order.

In an example, the method further comprises preventing speculative dispatch of load instructions when a mis-speculation rate exceeds a threshold.

In an example, the method further comprises dequeuing the entry of the unified symbolic store address buffer including the symbolic address for the SPMD store instruction when the SPMD store instruction is retired.

In another example, a computer readable medium including instructions is to perform the method of any of the above examples.

In a further example, a computer readable medium including data is to be used by at least one machine to fabricate at least one integrated circuit to perform the method of any one of the above examples.

In a still further example, an apparatus comprises means for performing the method of any one of the above examples.

In another example, a system includes a processor and a system memory coupled to the processor. The processor may include: a host processor comprising a plurality of cores, where a first core is to execute a first thread; and a data parallel cluster coupled to the host processor.
The data parallel cluster in turn may include: a plurality of execution lanes to perform parallel execution of instructions of a second thread related to the first thread; a scheduler to generate, at store address dispatch of a store instruction to be executed by the plurality of execution lanes and prior to computation of a lane store address for the store instruction by each of the plurality of execution lanes, a symbolic store address for the store instruction based on an address of the store instruction; and a unified symbolic store address buffer coupled to the plurality of execution lanes to store the symbolic store address.

In an example, the scheduler is, for a load instruction following the store instruction in program order, to generate a symbolic load address for the load instruction based on an address of the load instruction and access the unified symbolic store address buffer based on the symbolic load address to determine whether the load instruction conflicts with an in-flight store instruction.

In an example, in response to a determination that the load instruction does not conflict with the in-flight store instruction, the plurality of execution lanes are to compute a lane load address for the load instruction, speculatively execute the load instruction and store the lane load address in a memory order queue of the execution lane, and, at retirement of the store instruction, compute the lane store address for the store instruction and determine, based at least in part on contents of the memory order queue, whether one or more load instructions conflict with the store instruction.

Understand that various combinations of the above examples are possible.

Note that the terms "circuit" and "circuitry" are used interchangeably herein. As used herein, these terms and the term "logic" are used to refer to, alone or in any combination, analog circuitry, digital circuitry, hard wired circuitry, programmable circuitry, processor circuitry, microcontroller circuitry, hardware logic circuitry, state machine circuitry and/or any other type of physical hardware component. Embodiments may be used in many different types of systems. For example, in one embodiment a communication device can be arranged to perform the various methods and techniques described herein. Of course, the scope of the present invention is not limited to a communication device, and instead other embodiments can be directed to other types of apparatus for processing instructions, or one or more machine readable media including instructions that, in response to being executed on a computing device, cause the device to carry out one or more of the methods and techniques described herein.

Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. Embodiments also may be implemented in data and may be stored on a non-transitory storage medium, which, if used by at least one machine, causes the at least one machine to fabricate at least one integrated circuit to perform one or more operations. Still further embodiments may be implemented in a computer readable storage medium including information that, when manufactured into a SoC or other processor, is to configure the SoC or other processor to perform one or more operations.
The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
PROBLEM TO BE SOLVED: To provide etching protection against unintended removal of a thin multilayer sidewall spacer for an MOS transistor using the multilayer sidewall spacer.

SOLUTION: A first dielectric material is deposited (step 101) on a semiconductor surface of a substrate which has thereon a gate stack including a gate electrode on a gate dielectric. The first dielectric material is etched (step 102) to form a sidewall spacer on a sidewall of the gate stack. To provide a surface-converted sidewall spacer, at least one element is added so as to chemically convert a top surface of the first dielectric material into a second dielectric material (step 103). The second dielectric material is chemically bonded to the first dielectric material over a transition region.

SELECTED DRAWING: Figure 1
1. A method of fabricating an integrated circuit, comprising: depositing a first dielectric material on a semiconductor surface of a substrate having thereon a gate stack comprising a gate electrode on a gate dielectric; etching said first dielectric material to form a sidewall spacer comprising said first dielectric material on a sidewall of said gate stack; and chemically converting a top surface of said first dielectric material into a second dielectric material by adding at least one element thereto to provide a surface-converted sidewall spacer, wherein said second dielectric material is chemically bonded to said first dielectric material over a transition region.

2. The method of claim 1, wherein said etching comprises reactive ion etching (RIE), and said chemically converting comprises flowing a gas under conditions to cause a chemical reaction with said first dielectric material.

3. The method of claim 2, wherein the conditions comprise flowing a hydrocarbon gas at a temperature of 300° C. to 800° C., wherein the element comprises carbon and the first dielectric material does not comprise carbon.

4. The method of claim 3, wherein the first dielectric material comprises silicon nitride and the second dielectric material comprises silicon carbide (SiC), silicon carbonitride (SiCN), or silicon oxycarbonitride (SiOCN).

5. The method of claim 1, further comprising: ion implanting to form lightly doped sources and drains on the semiconductor surface beside the gate stack; forming a second spacer on the surface-converted sidewall spacer; forming a source and a drain on the semiconductor surface beside the gate stack after forming the second spacer; and, after forming the source and drain, selectively removing the second spacer.

6. The method of claim 5, wherein the selective removal comprises a hot phosphoric acid (HPA) etch.

7. The method of claim 6, wherein a temperature for the HPA etch is from 120° C. to 180° C.

8. The method of claim 1, wherein the first dielectric material comprises silicon nitride.

9. A method of fabricating an integrated circuit, comprising: depositing a first dielectric material on a semiconductor surface of a substrate having thereon a gate stack comprising a gate electrode on a gate dielectric; reactive ion etching (RIE) the first dielectric material to form a sidewall spacer comprising the first dielectric material on a sidewall of the gate stack; and chemically converting a top surface of the first dielectric material into a second dielectric material by adding at least one element thereto, wherein the chemically converting comprises flowing a hydrocarbon gas at a temperature between 300° C. and 800° C. under conditions to cause a chemical reaction with said first dielectric material, said element comprises carbon, said first dielectric material is free of carbon, and said second dielectric material is chemically bonded to said first dielectric material over a transition region.
10. The method of claim 9, further comprising: forming a second spacer on the surface-converted sidewall spacer; after forming the second spacer, ion implanting to form a source and a drain on the semiconductor surface beside the gate stack; and, after the ion implanting, selectively removing the second spacer using a hot phosphoric acid (HPA) etch at a temperature of 120° C. to 180° C.

11. An integrated circuit (IC) comprising: a substrate having a semiconductor surface; and at least one metal oxide semiconductor (MOS) transistor on the semiconductor surface, wherein the MOS transistor comprises a gate stack having a sidewall spacer on a sidewall of the gate stack, the gate stack comprising a gate electrode on a gate dielectric, wherein the sidewall spacer comprises a second dielectric material on a first dielectric material, the second dielectric material comprises carbon, and the second dielectric material is chemically bonded to the first dielectric material over a transition region.

12. The IC of claim 11, wherein the first dielectric material comprises silicon nitride and the second dielectric material comprises silicon carbide (SiC), silicon carbonitride (SiCN), or silicon oxycarbonitride (SiOCN).

13. The IC of claim 11, wherein an area of the second dielectric material is aligned with an area of the first dielectric material.

14. The IC of claim 11, wherein a total thickness of the sidewall spacer is ≦ 100 Å.
An integrated circuit having a chemically modified spacer surface

The disclosed embodiments relate to semiconductor processing and integrated circuit (IC) devices including metal oxide semiconductor (MOS) transistors, including a MOS transistor having a multilayer sidewall spacer.

When processing a semiconductor wafer, it is often advantageous to deposit or form a film which can serve as an etch stop layer while a film deposited or formed subsequently is removed. However, if the film does not have sufficient etch resistance during subsequent processing, such a film can be unintentionally removed.

One example of unintentional removal involves thin silicon nitride sidewall (or offset) spacers for MOS transistors. A thin silicon nitride sidewall spacer is generally used as an implant mask to provide a space between a lightly doped drain (LDD) implant to the semiconductor surface and the gate stack. A typical process flow has a first spacer layer which initially functions as an offset spacer and is then used as a bottom layer/etch stop while an additional film, such as a disposable second sidewall spacer comprising SiGe, is deposited on top and later removed. In one process flow, hot phosphoric acid (HPA) is used to remove the second sidewall spacer. However, even for silicon nitride spacers formed from bis-tertiarybutylamino-silane (BTBAS) and ammonia reagents (BTBAS-based silicon nitride is known to be the most wet-etch resistant silicon nitride film with respect to HPA), it is not always possible to stop the HPA etch when the disposable SiGe second sidewall spacer is removed. In particular, when the silicon nitride sidewall spacer is exposed to a reducing agent, such as a plasma containing H2 or N2, the etch stop characteristics may be lost, leading to unintentional removal of the silicon nitride offset sidewall spacer and a subsequent short circuit between the gate and the source and/or drain, such as due to silicide subsequently deposited on the gate, source and drain. Also, as the size of the semiconductor device is reduced, the distance between the top of the gate stack and the top surface of the source/drain regions is reduced, and the possibility of electrical short circuiting due to silicide formed on the sidewalls of the gate stack increases.

The disclosed embodiments describe a solution to the above mentioned unintended removal of thin sidewall spacers for metal oxide semiconductor (MOS) transistors using multilayer sidewall spacers. By chemically converting the top surface of the first sidewall spacer comprising a first material through the addition of at least one element to form a second dielectric material, the second material can substantially increase the etch resistance compared to the first spacer material. As a result, subsequent removal of the disposable second spacer on the first spacer will not remove the first spacer, as the second dielectric material can serve as an etch stop, or may provide at least some etching protection for the first dielectric material.

One disclosed embodiment includes a method of fabricating an integrated circuit that includes depositing a first dielectric material on a semiconductor surface of a substrate having thereon a gate stack comprising a gate electrode on a gate dielectric. The first dielectric material is etched to form sidewall spacers on sidewalls of the gate stack, such as using RIE.
The top surface of the first dielectric material is chemically converted into a second dielectric material by adding at least one element to provide a surface-converted sidewall spacer. The second dielectric material is chemically bonded to the first dielectric material (over a transition region).

Following forming the surface-converted sidewall spacers, ion implantation may continue to form a lightly doped drain (LDD) at the semiconductor surface beside the gate stack. Thereafter, a second spacer is formed on the surface-converted sidewall spacer. Thereafter, a source and a drain are formed beside the gate stack. Ion implantation can be used to form the source and drain on the semiconductor surface next to the gate stack after forming the second spacer. Alternatively, the second sidewall spacer can be used for a SiGe source/drain process (e.g., typically recessed in the PMOS region and replaced with SiGe). The second spacer can then be selectively removed after source/drain formation. The surface of the chemically converted layer remains intact after the selective etching, such that the first dielectric material is protected by the surface-converted layer.

FIG. 1 is a flowchart illustrating the steps in an exemplary method for fabricating an integrated circuit (IC) device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment.

FIG. 2A is a cross-sectional view illustrating process progression for an exemplary method of fabricating an IC device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment.

FIG. 2B is a cross-sectional view illustrating process progression for an exemplary method of fabricating an IC device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment.

FIG. 2C is a cross-sectional view illustrating process progression for an exemplary method of fabricating an IC device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment.

FIG. 2D is a cross-sectional view illustrating process progression for an exemplary method of fabricating an IC device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment.

FIG. 2E is a cross-sectional view illustrating process progression for an exemplary method of fabricating an IC device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment.

FIG. 2F is a cross-sectional view illustrating process progression for an exemplary method of fabricating an IC device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment.

FIG. 2G shows the resultant spacer structure after a known spacer process, showing the result of unintended removal of the nitride offset spacer.

FIG. 3 is a cross-sectional view of a portion of an IC device including a MOS transistor having a sidewall spacer including a second dielectric material over a first dielectric material, in accordance with an exemplary embodiment, wherein the second dielectric material is chemically bonded to the first dielectric material over a transition region.
FIG. 4 includes a highly simplified depiction of the chemical bonds provided over the thickness of the surface-converted sidewall spacer, in accordance with one illustrative example, and shows the composition as a function of thickness for an exemplary surface-converted sidewall spacer.

FIG. 1 is a flowchart illustrating the steps in an exemplary method 100 for manufacturing an IC device having a MOS transistor including a surface-converted sidewall spacer in accordance with one illustrative embodiment. Step 101 includes depositing a first dielectric material on the semiconductor surface of the substrate having thereon the gate stack including the gate electrode on the gate dielectric. Step 102 includes etching the first dielectric material to form sidewall spacers on sidewalls of the gate stack, such as using RIE.

Step 103 includes chemically converting the top surface of the first dielectric material into a second dielectric material by adding at least one element to provide a surface-converted sidewall spacer. The second dielectric material is chemically bonded to the first dielectric material over a transition region. The chemically converted top surface of the sidewall spacer becomes an etch stop through the addition of at least one element to form the second dielectric material, which substantially increases the wet etch resistance of the film, such as to hot phosphoric acid (HPA) etching, compared to the unconverted first dielectric material. In one embodiment, the added element is carbon. In another embodiment, both carbon and oxygen are added.

In one particular example, the first dielectric material includes BTBAS-derived silicon nitride, and carbon is added to the top surface of the silicon nitride to form a thin layer, typically 10 to 20 Angstroms thick, of the second dielectric material including a silicon carbide (SiC), silicon carbonitride (SiCN), and/or silicon oxycarbonitride (SiOCN) film. This is accomplished by exposing the BTBAS silicon nitride film previously used as a gate stack sidewall, before depositing the subsequent disposable spacer film, to acetylene or a similar hydrocarbon gas at a temperature of generally 300° C. to 800° C., a pressure of about 0.1 to 10 Torr, and a flow rate of 30 to 3000 sccm for 15 to 600 seconds or longer. In the tests performed, SiC, SiCN or SiOCN was formed, and all were found to be largely unaffected by HPA etching at temperatures below 215° C. Because HPA is generally used at temperatures between 120° C. and 180° C., the underlying silicon nitride sidewall spacers are protected by the second dielectric material.
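The conversion window just described (hydrocarbon flow at roughly 300-800° C., about 0.1-10 Torr, 30-3000 sccm, 15-600 s) can be captured in a small validation helper. This is purely an illustrative sketch using the ranges disclosed above; it is not part of the embodiments, and all names are invented.

    # Invented helper: checks a candidate recipe against the disclosed
    # surface-conversion process window (see the example above).
    WINDOW = {
        "temp_c":     (300, 800),   # conversion temperature, deg C
        "press_torr": (0.1, 10),    # chamber pressure, Torr
        "flow_sccm":  (30, 3000),   # hydrocarbon (e.g., acetylene) flow
        "time_s":     (15, 600),    # exposure time; longer is also disclosed
    }

    def recipe_in_window(recipe):
        return all(lo <= recipe[key] <= hi for key, (lo, hi) in WINDOW.items())

    example = {"temp_c": 550, "press_torr": 2.0, "flow_sccm": 500, "time_s": 120}
    assert recipe_in_window(example)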
In addition to the apparent process differences, the relationship of the second dielectric material to the first dielectric material for the disclosed surface-converted sidewall spacers, which are chemically bonded together, is unlike known arrangements produced by vapor deposition (e.g., chemical vapor deposition) of the second dielectric material on the first dielectric material, in which the second dielectric material is held to the first dielectric material by relatively weak van der Waals forces. Also, due to the disclosed chemical conversion process, the area of the second dielectric material is aligned with the area of the first dielectric material. In contrast, in known arrangements produced by vapor phase growth of the second dielectric material on the first dielectric material, the area of the second dielectric material may differ from the area of the first dielectric material due to the etch process required for spacer formation.

Step 104 includes ion implanting to form a lightly doped drain (LDD) at the semiconductor surface beside the gate stack. In a CMOS process, PMOS transistors and NMOS transistors generally each undergo a separate LDD implant. Step 105 includes forming a second spacer on the surface-converted sidewall spacer. Step 106 includes forming a source and a drain next to the gate stack. After forming the second spacer, ion implantation can be used to form the source and drain on the semiconductor surface beside the gate stack. In a typical CMOS process, the PMOS and NMOS transistors each receive a separate source/drain implant. However, alternatively, the second sidewall spacer can be used for a SiGe source/drain process (e.g., typically having a recess in the PMOS region that is replaced with SiGe). Step 107 includes selectively removing the second spacer after source/drain formation (step 106). The surface of the chemically converted layer remains intact after the selective etching, such that the first dielectric material is protected by the surface-converted layer.

FIGS. 2A-2F are cross-sectional views illustrating process progression for an exemplary method of fabricating an IC device having a surface-converted sidewall spacer in accordance with one illustrative embodiment. FIG. 2G shows the resultant spacer structure after a known spacer process, showing unintentional removal of the sidewall spacer. FIG. 2A shows the gate stack including the gate electrode 211 on the gate dielectric 212 before any sidewall spacers are formed on the substrate 305. Substrate 305 may include any substrate material, such as silicon, silicon germanium, and II-VI and III-V substrates, as well as SOI substrates. The gate electrode 211 may comprise polysilicon or various other gate electrode materials. The gate dielectric 212 may comprise a variety of gate dielectrics, including optional high-k dielectrics, defined herein as having, for example, k > 3.9, typically k > 7. In one particular embodiment, the high-k dielectric comprises silicon oxynitride.

FIG. 2B shows the gate stack after sidewall spacers (e.g., nitride offset spacers) 215, such as silicon nitride offset spacers, have been formed by an RIE process. FIG. 2C shows the result after an ion implantation process, such as an LDD ion implantation to form the LDD region 225, which used the implantation masking provided by the sidewall spacers 215. FIG. 2D shows the resulting structure after the disclosed chemical surface conversion step, including flowing a hydrocarbon gas, forming the illustrated surface-converted layer 216. FIG. 2E shows the gate stack 211/212 after a subsequent disposable second spacer 235 has been formed, for example by chemical deposition and a subsequent RIE. For a typical CMOS process, the PMOS and NMOS transistors each receive a separate source/drain implant.

The disposable second spacer 235 is then selectively removed after source/drain formation. FIG. 2F shows the gate stack 212/211 after the disposable second spacer 235 has been selectively removed, such as by a hot (e.g., 120-180° C.) HPA etch. Note that the surface-converted layer 216 remains intact after the etching, so that the sidewall spacer 215 is protected by the surface-converted layer 216.
In the absence of the disclosed surface-converted layer, the sidewall spacers 215, such as those comprising silicon nitride, would also be removed by the process used to remove the disposable second spacers 235. FIG. 2G shows the resulting spacer structure after a known spacer process, showing the result after unintentional complete removal of the sidewall spacers 215. FIG. 3 illustrates an IC device 300 (e.g., a semiconductor die) including MOS transistors having surface-converted sidewall spacers that include a second dielectric material on a first dielectric material, the second dielectric material being chemically bonded to the first dielectric material over a transition region, according to an example embodiment. The back end of line (BEOL) metallization is not shown for brevity. IC device 300 includes a substrate 305, such as a P-type silicon or P-type silicon germanium substrate, having a semiconductor surface 306. An optional trench isolation 308, such as shallow trench isolation (STI), is shown. An N-channel MOS (NMOS) transistor 310 is shown along with a P-channel MOS (PMOS) transistor 320 in an N-well 307. NMOS transistor 310 includes a gate stack comprising a gate electrode 311 on a gate dielectric 312, with sidewall spacers on the sidewalls of the gate stack. The sidewall spacer includes a second dielectric material 315a on a first dielectric material 315b, and the second dielectric material 315a is chemically bonded to the first dielectric material 315b over a transition region 315c. The second dielectric material 315a contains carbon and the first dielectric material does not contain carbon, where "carbon-free" as used herein refers to a carbon content of <3% by weight. NMOS transistor 310 includes lightly doped extensions 321a and 322a, with the source 321 region and the drain 322 region beside the sidewall spacers. A silicide layer 316 is shown on the gate electrode 311 and on the source 321 and drain 322. Similarly, the PMOS transistor 320 has a gate electrode 331 on a gate dielectric 332 (which may be the same material as the gate dielectric 312 under the gate electrode 311), with sidewall spacers on the sidewalls of the gate stack that include a second dielectric material 315a on the first dielectric material 315b, the second dielectric material 315a being chemically bonded to the first dielectric material 315b over the transition region 315c. The second dielectric material 315a contains carbon and the first dielectric material does not contain carbon. The PMOS transistor 320 includes lightly doped extensions 341a and 342a, with the source 341 region and the drain 342 region beside the sidewall spacers. A silicide layer 316 is shown on the gate electrode 331 and on the source 341 and the drain 342. At the base of the sidewall spacer 315a/315c/315b, the total thickness at its widest point is generally ≦100 Å, such as 40 to 70 Å. For example, in one particular embodiment, the second dielectric material 315a is about 5 to 10 Angstroms thick, the transition region 315c is 15 to 25 Angstroms thick, and the first dielectric material 315b is 20 to 30 Angstroms thick. FIG. 4 illustrates the composition as a function of thickness for an exemplary surface-converted sidewall spacer 400, according to one illustrative example, including a highly simplified depiction of the chemical bonding provided over the thickness of the surface-converted sidewall spacer 400.
The surface-converted sidewall spacer 400 includes a first dielectric material 315b on the sidewall of the gate stack material, a second dielectric material 315a chemically bonded to the first dielectric material 315b over the transition region 315c, and a chemically converted top (outer) surface. In the illustrated embodiment, the first dielectric material 315b comprises silicon nitride (generally Si3N4), the second dielectric material 315a comprises silicon carbide (SiC), and the transition region 315c comprises Si, N, and C, where the C content decreases and the N content increases as the distance to the gate stack decreases. The disclosed semiconductor die may include various elements therein and/or layers thereon. These may include active and passive elements including barrier layers, dielectric layers, device structures, source regions, drain regions, bit lines, bases, emitters, collectors, conductive lines, conductive vias, and the like. Moreover, semiconductor dies can be formed from various processes including bipolar, CMOS, BiCMOS, and MEMS. Those skilled in the art to which this disclosure relates will appreciate that many other embodiments and variations of embodiments are possible within the scope of the claimed invention, and that further additions, deletions, substitutions and alterations can be made to the described embodiments without departing from the scope of the present invention.
A temperature of a component within the portable computing device (PCD) may be monitored along with a parameter associated with the temperature. The parameter associated with temperature may be an operating frequency, transmission power, or a data flow rate. It is determined if the temperature has exceeded a threshold value. If the temperature has exceeded the threshold value, then the temperature is compared with a temperature set point and a first error value is then calculated based on the comparison. Next, a first optimum value of the parameter is determined based on the first error value. If the temperature is below or equal to the threshold value, then a present value of the parameter is compared with a desired threshold for the parameter and a second error value is calculated based on the comparison. A second optimum value of the parameter may be determined based on the second error value.
CLAIMS
What is claimed is:
1. A method for optimizing operation of a portable computing device, comprising: monitoring temperature of a component within the portable computing device; monitoring a parameter associated with the temperature; determining if the temperature has exceeded a threshold value; if the temperature has exceeded the threshold value, then comparing the temperature with a temperature set point and calculating a first error value based on the comparison; determining a first optimum value of the parameter based on the first error value; if the temperature is below or equal to the threshold value, then comparing a present value of the parameter with a desired threshold for the parameter and calculating a second error value based on the comparison; and determining a second optimum value of the parameter based on the second error value.
2. The method of claim 1, further comprising setting the component to at least one of the first and second optimum values.
3. The method of claim 1, wherein the parameter associated with temperature comprises at least one of operating frequency, transmission power, and a data flow rate.
4. The method of claim 1, wherein the second optimum value for the parameter is determined by an algorithm.
5. The method of claim 1, wherein the second optimum value is set during manufacture of the portable computing device.
6. The method of claim 1, wherein the component comprises at least one of a central processing unit, a core of a central processing unit, a graphical processing unit, a digital signal processor, a modem, and an RF transceiver.
7. The method of claim 1, further comprising monitoring and controlling a plurality of components of the portable computing device.
8. The method of claim 7, wherein the plurality of components comprise one or more of a central processing unit, a core of a central processing unit, a graphical processing unit, a digital signal processor, a modem, and an RF transceiver.
9. The method of claim 1, wherein the portable computing device comprises at least one of a mobile telephone, a personal digital assistant, a pager, a smartphone, a navigation device, and a hand-held computer with a wireless connection or link.
10. A computer system for optimizing operation of a portable computing device, the system comprising: a processor operable for: monitoring temperature of a component within the portable computing device; monitoring a parameter associated with the temperature; determining if the temperature has exceeded a threshold value; comparing the temperature with a temperature set point and calculating a first error value based on the comparison if the temperature has exceeded the threshold value; determining a first optimum value of the parameter based on the first error value; comparing a present value of the parameter with a desired threshold for the parameter and calculating a second error value based on the comparison if the temperature is below or equal to the threshold value; and determining a second optimum value of the parameter based on the second error value.
11. The system of claim 10, wherein the processor is further operable for setting the component to at least one of the first and second optimum values.
12. The system of claim 10, wherein the parameter associated with temperature comprises at least one of operating frequency, transmission power, and a data flow rate.
13. The system of claim 10, wherein the second optimum value for the parameter is determined by an algorithm.
14. The system of claim 10, wherein the second optimum value is set during manufacture of the portable computing device.
15. The system of claim 10, wherein the component comprises at least one of a central processing unit, a core of a central processing unit, a graphical processing unit, a digital signal processor, a modem, and an RF transceiver.
16. The system of claim 10, wherein the processor is further operable for monitoring and controlling a plurality of components of the portable computing device.
17. The system of claim 16, wherein the plurality of components comprise one or more of a central processing unit, a core of a central processing unit, a graphical processing unit, a digital signal processor, a modem, and an RF transceiver.
18. The system of claim 10, wherein the portable computing device comprises at least one of a mobile telephone, a personal digital assistant, a pager, a smartphone, a navigation device, and a hand-held computer with a wireless connection or link.
19. A computer system for optimizing operation of a portable computing device, the system comprising: means for monitoring temperature of a component within the portable computing device; means for monitoring a parameter associated with the temperature; means for determining if the temperature has exceeded a threshold value; means for comparing the temperature with a temperature set point and calculating a first error value based on the comparison if the temperature has exceeded the threshold value; means for determining a first optimum value of the parameter based on the first error value; means for comparing a present value of the parameter with a desired threshold for the parameter and calculating a second error value based on the comparison if the temperature is below or equal to the threshold value; and means for determining a second optimum value of the parameter based on the second error value.
20. The system of claim 19, further comprising: means for setting the component to at least one of the first and second optimum values.
21. The system of claim 19, wherein the parameter associated with temperature comprises at least one of operating frequency, transmission power, and a data flow rate.
22. The system of claim 19, wherein the second optimum value for the parameter is determined by an algorithm.
23. The system of claim 19, wherein the second optimum value is set during manufacture of the portable computing device.
24. A computer program product comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code adapted to be executed to implement a method for optimizing operation of a portable computing device, said method comprising: monitoring temperature of a component within the portable computing device; monitoring a parameter associated with the temperature; determining if the temperature has exceeded a threshold value; if the temperature has exceeded the threshold value, then comparing the temperature with a temperature set point and calculating a first error value based on the comparison; determining a first optimum value of the parameter based on the first error value; if the temperature is below or equal to the threshold value, then comparing a present value of the parameter with a desired threshold for the parameter and calculating a second error value based on the comparison; and determining a second optimum value of the parameter based on the second error value.
25. The computer program product of claim 24, wherein the program code implementing the method further comprises: setting the component to at least one of the first and second optimum values.
26. The computer program product of claim 24, wherein the parameter associated with temperature comprises at least one of operating frequency, transmission power, and a data flow rate.
27. The computer program product of claim 24, wherein the second optimum value for the parameter is determined by an algorithm.
28. The computer program product of claim 24, wherein the second optimum value is set during manufacture of the portable computing device.
29. The computer program product of claim 24, wherein the component comprises at least one of a central processing unit, a core of a central processing unit, a graphical processing unit, a digital signal processor, a modem, and an RF transceiver.
30. The computer program product of claim 24, wherein the portable computing device comprises at least one of a mobile telephone, a personal digital assistant, a pager, a smartphone, a navigation device, and a hand-held computer with a wireless connection or link.
METHOD AND SYSTEM FOR OPTIMIZING PERFORMANCE OF A PCD WHILE MITIGATING THERMAL GENERATION
STATEMENT REGARDING RELATED APPLICATIONS
[0001] This patent application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Serial No. 61/973,772 filed on April 1, 2014, entitled, "METHOD AND SYSTEM FOR OPTIMIZING PERFORMANCE OF A PCD WHILE MITIGATING THERMAL GENERATION," the entire contents of which is hereby incorporated by reference.
DESCRIPTION OF THE RELATED ART
[0002] Portable computing devices ("PCDs") are becoming necessities for people, and optimal performance is desired for these battery-operated devices. To achieve optimal performance, PCDs need to manage their internal temperature constantly. Because PCDs are battery-operated, most do not have any active cooling devices, like fans. Instead, PCDs use thermal mitigation algorithms, which help cool a PCD passively when it gets hotter than a prescribed temperature threshold.
[0003] The thermal mitigation algorithms rely on embedded, on-die thermal sensors (TSENS) to obtain the instantaneous temperature of the various components (e.g., central processing unit ["CPU"] cores, graphics processing unit ["GPU"] cores, modems, etc.) present within a PCD. When any of the components heats up beyond a prescribed temperature, the thermal algorithm(s) are usually designed to throttle those components to reduce their heat generation.
[0004] The thermal mitigation algorithm(s) usually must be capable of adapting the device parameters to the heating characteristics of the device to effectively cool the components down. At the same time, throttling the operating frequency of a CPU or GPU of a PCD may negatively impact the overall performance of the device. Similarly, throttling the data rate and/or transmit power for modems of a PCD may also negatively impact the performance of the device.
[0005] Another problem experienced by PCDs is caused by battery current limitations ("CLs"). CLs may occur when a particular component within a PCD draws a large current from the battery within a short time frame (on the order of microseconds), resulting in a voltage drop across critical components.
[0006] Unfortunately, certain critical components, such as the memory, CPU, etc., inside the PCD require a minimum voltage to sustain their operation. When a component suddenly draws more power than it commonly does, the resulting voltage drop may result in a device failure (which may cause data erasures from the memory, a device reboot, or, in the worst case, an overheated or permanently damaged device).
[0007] A CL situation may arise when, for instance, a CPU core of a PCD (such as a mobile phone) becomes more active when other cores are actively loaded, or a data call is initiated during a voice call, or the camera flash is activated while playing a game on the device. The CL situation may become worse when the battery charge is already low and/or when the temperature of the PCD rises.
[0008] Accordingly, what is needed in the art is a method and system for one or more algorithms that may mitigate thermal issues of a PCD while also minimizing the performance degradation experienced by the components due to throttling.
SUMMARY OF THE DISCLOSURE
[0009] A temperature of a component within the portable computing device (PCD) may be monitored along with a parameter associated with the temperature. The parameter associated with temperature may be an operating frequency, transmission power, or a data flow rate.
It is determined if the temperature has exceeded a threshold value. If the temperature has exceeded the threshold value, then the temperature is compared with a temperature set point and a first error value is then calculated based on the comparison. Next, a first optimum value of the parameter is determined based on the first error value. If the temperature is below or equal to the threshold value, then a present value of the parameter is compared with a desired threshold for the parameter and a second error value is calculated based on the comparison. A second optimum value of the parameter may be determined based on the second error value.
[0010] The component of the PCD may be set to at least one of the first and second optimum values. The component may include at least one of a central processing unit, a core of a central processing unit, a graphical processing unit, a digital signal processor, a modem, and an RF transceiver. The portable computing device may include at least one of a mobile telephone, a personal digital assistant, a pager, a smartphone, a navigation device, and a hand-held computer with a wireless connection or link.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] In the figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when a reference numeral is intended to encompass all parts having the same reference numeral in all figures.
[0012] FIG. 1 is a functional block diagram illustrating an embodiment of a portable computing device ("PCD");
[0013] FIG. 2A is a functional block diagram illustrating details of the dual proportional integral derivative ("PID") loop controller for a CPU of the PCD of FIG. 1;
[0014] FIG. 2B is a logical flowchart illustrating a method for optimizing performance of the PCD while mitigating thermal generation within the PCD;
[0015] FIG. 3 is a functional block diagram of a generic dual PID loop controller for any component within the PCD of FIG. 1; and
[0016] FIG. 4 is a functional block diagram of nested dual PID loop controllers that may be present within the PCD of FIG. 1.
DETAILED DESCRIPTION
[0017] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.
[0018] In this description, the term "application" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.
[0019] The term "content" may also include files having executable content, such as: object code, scripts, byte code, markup language files, and patches.
In addition, "content" referred to herein, may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.[0020] As used in this description, the terms "component," "database," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal).[0021] In this description, the terms "communication device," "wireless device," "wireless telephone," "wireless communication device," and "wireless handset" are used interchangeably. With the advent of third generation ("3G") and fourth generation ("4G") wireless technology, greater bandwidth availability has enabled more portable computing devices with a greater variety of wireless capabilities.[0022] In this description, the term "portable computing device" ("PCD") is used to describe any device operating on a limited capacity power supply, such as a battery. Although battery operated PCDs have been in use for decades, technological advances in rechargeable batteries coupled with the advent of third generation ("3G") wireless technology, have enabled numerous PCDs with multiple capabilities. Therefore, a PCD may be a cellular telephone, a satellite telephone, a pager, a PDA, a smartphone, a navigation device, a smartbook or reader, a media player, a combination of the aforementioned devices, and a laptop computer with a wireless connection, among others.[0023] Referring to FIG. 1, this figure is a functional block diagram of an exemplary, non-limiting aspect of a PCD 100 in the form of a wireless telephone for implementing methods and systems for optimizing performance of the PCD 100 and while mitigating thermal generation within the PCD 100. As shown, the PCD 100 includes an on-chip system 102 that includes a multi-core central processing unit ("CPU") 110 and an analog signal processor 126 that are coupled together. The CPU 110 may comprise a zeroth core 222, a first core 224, and an Nth core 230 as understood by one of ordinary skill in the art. Instead of a CPU 110, a digital signal processor ("DSP") may also be employed as understood by one of ordinary skill in the art.[0024] The CPU 110 may also be coupled to one or more internal, on-chip thermal sensors 157A-B as well as one or more external, off-chip thermal sensors 157C-D. 
The on-chip thermal sensors 157A-B may comprise one or more proportional to absolute temperature ("PTAT") temperature sensors that are based on a vertical PNP structure and are usually dedicated to complementary metal oxide semiconductor ("CMOS") very large-scale integration ("VLSI") circuits. The off-chip thermal sensors 157C-D may comprise one or more thermistors.
[0025] The thermal sensors 157 may produce a voltage drop (and/or a current) that is converted to digital signals with an analog-to-digital converter ("ADC") (not illustrated). However, other types of thermal sensors 157 may be employed without departing from the scope of this disclosure.
[0026] The PCD 100 of FIG. 1 may include and/or be coupled to a dual proportional integral derivative ("PID") loop controller 205. The dual PID loop controller 205 may comprise hardware, software, firmware, or a combination thereof. The dual PID loop controller 205 may be responsible for monitoring temperature of the PCD 100 and adjusting one or more parameters based on whether a temperature threshold or limit has been reached. Such adjustable parameters include, but are not limited to: an operating frequency of a component such as the CPU 110, processor 126, and/or GPU 189; transmission power of the RF transceiver 168, which may comprise a modem; data rate or flow rates of the processor 126; as well as other parameters of the PCD 100 which may mitigate thermal generation and that may also impact operating performance of the PCD 100.
[0027] The dual PID loop controller 205 comprises two controllers (see FIG. 2A) which calculate separate error values. One controller is provided with a temperature input while the other controller is provided with an input of an adjustable parameter. One interesting aspect of the dual PID loop controller 205 is that each PID controller has an output that may control/impact the same adjustable parameter, such as operating frequency.
[0028] Further details of the dual PID loop controller 205 are described below in connection with FIG. 2A. The exemplary embodiment of the dual PID loop controller 205 of FIGS. 1 and 2A shows the dual PID loop controller 205 controlling the operating frequency of the CPU 110. However, as noted above, the dual PID loop controller 205 may be coupled and/or logically connected to any component and/or a plurality of components within the PCD 100. Further, the dual PID loop controller 205 may also adjust parameters other than the operating frequency of a component, such as, but not limited to, transmission power, data flow rates, etc., as mentioned above.
[0029] In a particular aspect, one or more of the method steps for the dual PID loop controller 205 described herein may be implemented by executable instructions and parameters, stored in the memory 112, that may form software embodiments of the dual PID loop controller 205. These instructions that form the dual PID loop controller 205 may be executed by the CPU 110, the analog signal processor 126, or any other processor. Further, the processors 110, 126, the memory 112, the instructions stored therein, or a combination thereof may serve as a means for performing one or more of the method steps described herein.
[0030] The power manager integrated controller ("PMIC") 107 may be responsible for distributing power to the various hardware components present on the chip 102. The PMIC is coupled to a power supply 180. The power supply 180 may comprise a battery, and it may be coupled to the on-chip system 102.
In a particular aspect, the power supply may include a rechargeable direct current ("DC") battery or a DC power supply that is derived from an alternating current ("AC") to DC transformer that is connected to an AC power source.
[0031] As illustrated in FIG. 1, a display controller 128 and a touchscreen controller 130 are coupled to the multi-core processor 110. A touchscreen display 132 external to the on-chip system 102 is coupled to the display controller 128 and the touchscreen controller 130.
[0032] As further shown in FIG. 1, the PCD 100 includes a video decoder 134. The video decoder 134 is coupled to the multicore central processing unit ("CPU") 110. A video amplifier 136 is coupled to the video decoder 134 and the touchscreen display 132. A video port 138 is coupled to the video amplifier 136. As depicted in FIG. 1, a universal serial bus ("USB") controller 140 is coupled to the CPU 110. Also, a USB port 142 is coupled to the USB controller 140. A memory 112 and a subscriber identity module (SIM) card 146 may also be coupled to the CPU 110.
[0033] Further, as shown in FIG. 1, a digital camera or camera subsystem 148 may be coupled to the CPU 110. In an exemplary aspect, the digital camera/camera subsystem 148 is a charge-coupled device ("CCD") camera or a complementary metal-oxide semiconductor ("CMOS") camera.
[0034] As further illustrated in FIG. 1, a stereo audio CODEC 150 may be coupled to the analog signal processor 126. Moreover, an audio amplifier 152 may be coupled to the stereo audio CODEC 150. In an exemplary aspect, a first stereo speaker 154 and a second stereo speaker 156 are coupled to the audio amplifier 152. FIG. 1 shows that a microphone amplifier 158 may also be coupled to the stereo audio CODEC 150. Additionally, a microphone 160 may be coupled to the microphone amplifier 158.
[0035] In a particular aspect, a frequency modulation ("FM") radio tuner 162 may be coupled to the stereo audio CODEC 150. Also, an FM antenna 164 is coupled to the FM radio tuner 162. Further, stereo headphones 166 may be coupled to the stereo audio CODEC 150.
[0036] FIG. 1 further indicates that a radio frequency ("RF") transceiver 168 may be coupled to the analog signal processor 126. An RF switch 170 may be coupled to the RF transceiver 168 and an RF antenna 172. As shown in FIG. 1, a keypad 174 may be coupled to the analog signal processor 126. Also, a mono headset with a microphone 176 may be coupled to the analog signal processor 126. Further, a vibrator device 178 may be coupled to the analog signal processor 126.
[0037] As depicted in FIG. 1, the touchscreen display 132, the video port 138, the USB port 142, the camera 148, the first stereo speaker 154, the second stereo speaker 156, the microphone 160, the FM antenna 164, the stereo headphones 166, the RF switch 170, the RF antenna 172, the keypad 174, the mono headset 176, the vibrator 178, thermal sensors 157B, and the power supply 180 are external to the on-chip system 102.
[0038] Referring now to FIG. 2A, which is a functional block diagram illustrating details of the dual proportional integral derivative ("PID") loop controller 205 for the CPU 110 of the PCD of FIG. 1. As noted above, the dual PID loop controller 205 may be implemented in software, hardware, firmware, or a combination thereof.
[0039] The dual PID loop controller 205 may comprise a temperature threshold block 206, a first control loop 209, and a second control loop 212.
The first control loop 209 may control the adjustable parameter of a device when a first threshold condition is met, such as the operating frequency for a clock (not illustrated) of the CPU 110. Meanwhile, the second control loop 212 of the dual PID loop controller 205 may control the adjustable parameter of the device when a second threshold condition is met.
[0040] In the exemplary embodiment illustrated in FIG. 2A, the threshold condition/block 206 of the dual PID loop controller 205 is an operating temperature of a CPU 110 of the PCD 100. As noted previously, the dual PID loop controller 205 may control other devices besides a CPU 110. For example, the dual PID loop controller 205 may control the GPU 189, the RF transceiver 168, and/or the analog signal processor 126, or any other device of the PCD 100.
[0041] In the exemplary embodiment of FIG. 2A, if the temperature of the CPU 110 is greater than a predetermined threshold, then the "YES" branch of the threshold block 206 is followed to the first loop 209, in which the first loop 209 controls the adjustable parameter, which in this example is an operating frequency of the CPU 110.
[0042] Meanwhile, if the temperature of the CPU 110 is less than or equal to the predetermined threshold, then the "NO" branch of the threshold block 206 is followed to the second loop 212, in which the second loop 212 controls the adjustable parameter, which in this example is an operating frequency of the CPU 110.
[0043] The first loop 209 of the dual PID loop controller 205 may comprise a temperature input block 157, a desired temperature setpoint/target 218, and a first PID controller 221A. The temperature input block 157 may comprise any one or a plurality of the temperature data generated and tracked by the thermal sensors 157 described above in connection with FIG. 1. The desired temperature setpoint/target 218 may comprise a desired maximum temperature for the CPU 110. This desired temperature setpoint/target 218 may be a fixed/set value, or it may be dynamic, meaning that it can be adjusted by one or more thermal mitigation algorithms/strategies which may be running concurrently relative to the dual PID loop controller 205.
[0044] The data from block 157 and block 218 are compared to produce a temperature error value (Te1) which is provided as input to the first PID controller 221A. The first PID controller 221A uses the temperature error value (Te1) to calculate a frequency value indicating by how much the operating frequency of the CPU 110 should be adjusted to reach the desired temperature setpoint/target 218. This frequency value, which is the output of the first PID controller 221A, is fed to an adjust CPU frequency block 235 where the operating frequency of the CPU 110 may be adjusted based on this frequency value. Further details of the first PID controller 221A will be described below.
[0045] Meanwhile, as noted above, if the temperature of the CPU 110 is less than or equal to the predetermined threshold, then the "NO" branch of the threshold block 206 is followed to the second loop 212, in which the second loop 212 controls the adjustable parameter, which in this example is an operating frequency of the CPU 110.
[0046] The second loop 212 may comprise a frequency input block 224, a desired max operating frequency 227, and a second PID controller 221B. The frequency input block 224 may comprise outputs from any one or a plurality of clock frequency sensors, or from the clock itself (not illustrated), of the CPU 110.
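The branch structure of FIG. 2A lends itself to a compact software rendering. The following is a minimal Python sketch of a single sampling step of the dual PID loop controller 205, under the assumption that each PID controller is an object exposing an update(error) method that returns a frequency adjustment; the read_temperature, read_frequency, and set_frequency helpers are hypothetical stand-ins for blocks 157, 224, and 235, which the disclosure does not define as software interfaces:

    # One sampling step of the dual PID loop controller 205 (FIG. 2A).
    # pid_thermal and pid_perf stand in for PID controllers 221A and 221B;
    # each is assumed to expose update(error) -> frequency adjustment.
    def dual_pid_step(pid_thermal, pid_perf, temp_threshold, temp_setpoint,
                      max_frequency, read_temperature, read_frequency,
                      set_frequency):
        temperature = read_temperature()          # temperature input block 157
        frequency = read_frequency()              # frequency input block 224
        if temperature > temp_threshold:          # threshold block 206: "YES"
            te1 = temperature - temp_setpoint     # error vs. setpoint 218
            adjustment = pid_thermal.update(te1)  # first loop 209 (reliability)
        else:                                     # threshold block 206: "NO"
            fe1 = frequency - max_frequency       # error vs. max frequency 227
            adjustment = pid_perf.update(fe1)     # second loop 212 (performance)
        # Adjust block 235; the sign convention is absorbed into the gain
        # constants, so here a positive error (too hot, or above the
        # frequency target) lowers the operating frequency.
        new_frequency = frequency - adjustment
        set_frequency(new_frequency)
        return new_frequency

Note that, consistent with paragraph [0050], only one loop is active per step: the threshold test selects which error (Te1 or Fe1) drives the shared actuator.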
[0047] As noted above, the second loop 212 may control another adjustable parameter besides frequency, such as transmission power, data flow rates, etc., which may impact thermal generation of the PCD 100. For the exemplary embodiment, the second loop 212 is designed to manage and control the operating frequency of the CPU 110 when a predetermined threshold condition is met.
[0048] The desired max operating frequency 227 of the second loop 212 may comprise a desired maximum operating frequency for the CPU 110. This desired maximum operating frequency 227 may be a fixed/set value, or it may be dynamic, meaning that it can be adjusted by one or more thermal mitigation algorithms/strategies and/or performance-enhancing algorithms, such as a Dynamic Clock Voltage Scaling ("DCVS") algorithm, which may be running concurrently relative to the dual PID loop controller 205.
[0049] The data from block 224 and block 227 are compared and produce a frequency error value (Fe1) which is provided as input to a second PID controller 221B. The second PID controller 221B uses the frequency error value (Fe1) to calculate a frequency value indicating by how much the operating frequency of the CPU 110 should be adjusted to reach the desired maximum operating frequency 227. This frequency value, which is the output of the second PID controller 221B, is fed to an adjust CPU frequency block 235 where the operating frequency of the CPU 110 may be adjusted based on this frequency value. Further details of the second PID controller 221B will be described below.
[0050] The two loops 209, 212 forming the dual PID loop controller 205 work in tandem relative to each other. In the illustrated exemplary embodiment of FIG. 2A, the first loop 209 is responsible for maintaining device reliability by throttling the parameters when the temperature is greater than the desired value in threshold block 206. Meanwhile, the second loop 212 is responsible for maintaining performance by adjusting the same parameters when the temperature is less than the desired value in threshold block 206. Each loop 209, 212 has its own setpoint 218, 227 and input 157, 224, but the second loop 212 is active only when the first loop 209 is not active, and vice-versa.
[0051] Further, each loop 209, 212 has its own independent dynamics. This means that error accumulation in one loop will not affect the other. Each PID controller 221A, 221B operates according to the following equation:
(EQ1) q(n) = Kp*e(n) + Ki*ts*Σ(i=0 to n) e(i) + (Kd/ts)*[e(n) - e(n-1)]
[0052] Where q(n) is the output of the PID controller, proportional to the adjustment to be made at time n; Kp is a proportional error value constant; Ki is an integral error value constant; Kd is a derivative error value constant; e(n) is the error function defined by the difference between the parameter at time n and the desired setpoint; ts is the sampling duration; and i is the integration variable. The values of the constants ("Ks") are determined by experiments and simulations, so that the PID controller output is stable and the intended setpoint is reached as quickly as possible with limited overshoot.
[0053] Equation EQ1 may lead to integral windup, which may comprise a large overshoot in some instances. This may be avoided using the following velocity PID equation EQ2:
(EQ2) q(n) = q(n-1) + Δq(n)
where:
(EQ3) Δq(n) = Kp*[e(n) - e(n-1)] + Ki*ts*e(n) + (Kd/ts)*[e(n) - 2e(n-1) + e(n-2)]
[0054] q(n) is given by EQ1. Upon calculating the difference q(n) - q(n-1) using EQ1, we arrive at EQ3.
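For concreteness, EQ1 and the velocity form EQ2/EQ3 can be written out directly in code. The sketch below is one straightforward Python rendering, not taken from the disclosure itself; the gain constants Kp, Ki, Kd and the sampling duration ts are left as constructor arguments since, per paragraph [0052], their values are found by experiment and simulation:

    class PIDController:
        """Discrete PID controller: positional form (EQ1) and the
        windup-resistant velocity form (EQ2/EQ3). A sketch only."""

        def __init__(self, kp, ki, kd, ts):
            self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
            self.integral = 0.0   # ts times the running sum of e(i), i = 0..n
            self.e_prev = 0.0     # e(n-1)
            self.e_prev2 = 0.0    # e(n-2), used only by the velocity form
            self.q_prev = 0.0     # q(n-1), used only by the velocity form

        def update(self, e):
            # EQ1: q(n) = Kp*e(n) + Ki*ts*sum(e(i)) + (Kd/ts)*(e(n) - e(n-1))
            self.integral += self.ts * e
            q = (self.kp * e + self.ki * self.integral
                 + (self.kd / self.ts) * (e - self.e_prev))
            self.e_prev = e
            return q

        def update_velocity(self, e):
            # EQ2/EQ3: q(n) = q(n-1) + Kp*(e(n) - e(n-1)) + Ki*ts*e(n)
            #                 + (Kd/ts)*(e(n) - 2*e(n-1) + e(n-2))
            dq = (self.kp * (e - self.e_prev) + self.ki * self.ts * e
                  + (self.kd / self.ts) * (e - 2.0 * self.e_prev + self.e_prev2))
            q = self.q_prev + dq
            self.e_prev2, self.e_prev, self.q_prev = self.e_prev, e, q
            return q

Because the velocity form accumulates the output q(n) rather than the error sum, a saturated actuator does not keep inflating an integral term, which is precisely the windup behavior that paragraph [0053] cautions against.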
[0055] FIG. 2B is a logical flowchart illustrating a method 205 for optimizing performance of the PCD 100 while mitigating thermal generation within the PCD 100. FIG. 2B tracks the operations presented in FIG. 2A, but in a more traditional, linear flowchart format.
[0056] Block 305 is the first block of the method 205. In block 305, the present temperature of a component within the PCD 100 is detected with a temperature sensor 157. As noted above, the dual PID loop controller 205 may be assigned to a single component, such as a CPU 110 or GPU 189. In other exemplary embodiments, a dual PID loop controller 205 may manage/control a plurality of components. In the embodiments in which a single component, like a CPU 110, is being managed by the dual PID loop controller 205, the temperature monitored in block 305 may be the temperature of that single component.
[0057] Next, in block 310, a parameter associated with the temperature, such as frequency, may be monitored for the component of interest, such as a CPU 110. According to one exemplary embodiment, the parameter may comprise clock frequency. However, as noted above, other adjustable parameters associated with temperature may include the transmission power of the RF transceiver 168, which includes a modem; the data rate or flow rates of the processor 126; as well as other parameters of the PCD 100 which may mitigate thermal generation and that may also impact operating performance of the PCD 100.
[0058] Subsequently, in decision block 315, the dual PID loop controller 205 determines if the temperature of the component or component(s) of interest has exceeded a predetermined threshold value. This predetermined threshold value may be established at manufacture of the component. For example, the threshold temperature value of a CPU 110 may have a magnitude of about 90.0 degrees C. If the inquiry to decision block 315 is positive, then the "YES" branch is followed to block 320. If the inquiry to decision block 315 is negative, then the "NO" branch is followed to block 340.
[0059] In block 320, the PID controller 221A of loop 209 compares the present measured temperature (sensed in block 305) with the temperature setpoint 218 assigned to the component or components of interest. As noted previously in FIG. 2A, the temperature setpoint 218 may be a fixed value or it may change depending on one or more thermal mitigation algorithms which may be supported by the PCD 100.
[0060] Next, in block 325, the PID controller 221A of loop 209 in FIG. 2A may calculate an error value (see Te1 of FIG. 2A) based on the comparison between the temperature setpoint 218 and the present temperature provided by a sensor 157. In block 330, the PID controller 221A of loop 209 may then determine an ideal operating frequency for the CPU 110 (the component of interest) based on the error values and equations EQ1 through EQ3.
[0061] Once the ideal operating frequency is calculated in block 330, then in block 335, the PID controller 221A of loop 209 may set the component of interest, such as the CPU 110, to the desired operating frequency, which minimizes thermal generation by the component of interest, the CPU 110 in this example. The method 205 then returns.
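Combining the two sketches above, blocks 305 through 345 of method 205 reduce to a periodic driver loop such as the following illustration, which reuses the hypothetical dual_pid_step and PIDController sketches. All numeric values (gains, frequencies, simulated temperatures) are invented for illustration; only the 90.0 degrees C threshold echoes the example in paragraph [0058]. Swapping update for update_velocity in PIDController would give the windup-free variant of paragraph [0053]:

    import random

    state = {"frequency": 1.5e9}                 # Hz; stands in for the CPU clock

    def read_temperature():                      # block 305 (sensors 157)
        return 85.0 + random.uniform(-5.0, 10.0) # simulated die temperature, deg C

    def read_frequency():                        # block 310 (frequency input 224)
        return state["frequency"]

    def set_frequency(f):                        # blocks 335/235
        state["frequency"] = max(f, 0.0)

    pid_thermal = PIDController(kp=2e7, ki=1e6, kd=0.0, ts=0.01)  # loop 209
    pid_perf = PIDController(kp=0.5, ki=0.05, kd=0.0, ts=0.01)    # loop 212

    for _ in range(100):                         # one step per sampling period ts
        dual_pid_step(pid_thermal, pid_perf,
                      temp_threshold=90.0,       # decision block 315 example value
                      temp_setpoint=90.0,        # temperature setpoint 218
                      max_frequency=2.0e9,       # desired max frequency 227
                      read_temperature=read_temperature,
                      read_frequency=read_frequency,
                      set_frequency=set_frequency)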
[0062] If the inquiry to decision block 315 is negative, then the "NO" branch is followed to block 340, in which the PID controller 221B of the lower loop 212 compares the present value of the adjustable parameter 224, such as frequency (which may be the present clock frequency), with the maximum frequency 227 available for the component or components of interest. As noted above, the maximum frequency 227 may be set or it may be dynamic (changeable) depending on thermal mitigation algorithms which may be running in parallel with method 205.
[0063] Next, in block 345, the PID controller 221B of loop 212 may calculate error value(s) based on the comparison in block 340. Subsequently, the method continues to block 335, where the second PID controller 221B issues commands to the CPU 110 to adjust its operating frequency to the calculated ideal operating frequency. The method 205 then returns.
[0064] As noted previously, the dual PID loop controller 205 is not limited to the adjustable parameter of frequency. Other adjustable parameters include, but are not limited to: the transmission power of the RF transceiver 168, which includes a modem; the data rate or flow rates of the processor 126; as well as other parameters of the PCD 100 which may mitigate thermal generation and that may also impact operating performance of the PCD 100.
[0065] Referring now to FIG. 3, this figure is a functional block diagram of a generic dual PID loop controller 205' for any component within the PCD 100 of FIG. 1. In this exemplary embodiment, the dual PID loop controller 205' has a first loop 209' and a second loop 212' which are coupled together by a threshold condition 206'. As in the earlier example, the threshold condition 206' may comprise the temperature of a component or of a plurality of components 301 controlled by the dual PID loop controller 205'.
[0066] Both the first loop 209' and the second loop 212' may control as output an adjustable parameter 235', such as, but not limited to, an operating frequency. That adjustable parameter 235' is fed into a single component 301 or a plurality of components 301.
[0067] In the exemplary embodiment illustrated in FIG. 3, the first loop 209' of the dual PID loop controller 205' may comprise software, hardware, and/or firmware. Similarly, the second loop 212' of the dual PID loop controller 205' may comprise software, hardware, and/or firmware. Each loop 209, 212 may comprise a different structure, meaning that one loop may comprise software while the other comprises hardware, or vice-versa. In other embodiments, each loop 209, 212 may comprise the same structure, i.e., hardware-hardware, software-software, etc.
[0068] For some conditions, hardware embodiments of both loops 209, 212 may be the most practical design. For example, response times usually must be minimal to detect and respond to electrical current limitations ("CLs"). For these conditions, both loops 209, 212 may comprise hardware. Exemplary hardware includes, but is not limited to, First-In/First-Out ("FIFO") type devices.
[0069] Meanwhile, the component 301 may comprise a single component such as a CPU 110, a GPU 189, an analog signal processor 126, a digital signal processor, and other similar processing entities as understood by one of ordinary skill in the art. The component 301 may also comprise a plurality of devices in some exemplary embodiments instead of a single device/component.
[0070] FIG. 4 is a functional block diagram of nested dual PID loop controllers 205A, 205B, 205C that may be present within the PCD of FIG. 1. This diagram illustrates how multiple dual PID loop controllers 205A, 205B, 205C may be coupled to individual components 301A.
[0071] For example, a first component 301A may be controlled by two dual PID loop controllers 205A, 205B. Similarly, a second component 301B may be controlled by two dual PID loop controllers 205A, 205C. Other ways of nesting/grouping dual PID loop controllers are possible and are included within the scope of this disclosure.
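The nesting of FIG. 4 raises the question of how two controllers attached to the same component reconcile their outputs, which the disclosure leaves open. The Python sketch below assumes, purely for illustration, that each controller proposes a frequency for its component and that the most thermally conservative (minimum) proposal wins; both the arbitration policy and the callables are hypothetical:

    # Nested dual PID loop controllers per FIG. 4. Each entry maps a
    # component (e.g., 301A) to the controllers driving it (e.g., 205A, 205B);
    # each controller is modeled as a callable returning a proposed frequency.
    def nested_step(controllers_by_component, apply_frequency):
        for component, controllers in controllers_by_component.items():
            proposals = [run_step() for run_step in controllers]
            # Assumed policy (not from the disclosure): lowest proposal wins.
            apply_frequency(component, min(proposals))

    # Example wiring mirroring FIG. 4: 301A driven by 205A and 205B,
    # 301B driven by 205A and 205C (placeholder callables shown).
    plan = {
        "301A": [lambda: 1.8e9, lambda: 1.6e9],
        "301B": [lambda: 1.8e9, lambda: 2.0e9],
    }
    nested_step(plan, lambda component, f: print(component, f))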
[0072] The dual PID loop controller 205 may maximize performance and reliability: reliability may be maintained by keeping an operating temperature below a setpoint, while performance may be achieved by allowing for a higher operational value once the temperature is maintained below the desired value. The dual PID controller 205 provides a flexible design in which the algorithm may be extended to any component in a PCD 100 by simply varying the controlled and adjustable parameter, such as frequency.
[0073] The dual PID loop controller is adaptable: each of the PID loops 209, 212 may be tuned independently to achieve the desired level of aggressiveness in the control of the component 301. Dual PID loop controllers 205 offer stable operation in most operating conditions. The algorithm of the dual PID loop controllers 205 may achieve quicker convergence to desired temperatures and operational levels compared to the single-loop control of the conventional art.
[0074] Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel (substantially simultaneously) with other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "thereafter", "then", "next", etc. are not intended to limit the order of the steps. These words are simply used to guide the reader through the description of the exemplary method.
[0075] Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty, based for example on the flow charts and associated description in this specification.
[0076] Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer-implemented processes is explained in more detail in the above description and in conjunction with the figures, which may illustrate various process flows.
[0077] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof.
If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium.
[0078] In the context of this document, a computer-readable medium is an electronic, magnetic, optical, or other physical device or means that may contain or store a computer program and data for use by or in connection with a computer-related system or method. The various logic elements and data stores may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a "computer-readable medium" may include any means that may store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0079] The computer-readable medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic) having one or more wires, a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), an optical fiber (optical), and a portable compact disc read-only memory (CD-ROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
[0080] Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise any optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.
[0081] Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
[0082] Disk and disc, as used herein, includes compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0083] Therefore, although selected aspects have been illustrated and described in detail, it will be understood that various substitutions and alterations may be made therein without departing from the spirit and scope of the present invention, as defined by the following claims.
A high-performance, high I/O ball grid array substrate, designed for integrated circuit flip-chip assembly and having two patterned metal layers, comprising: an insulating layer having a first surface, a second surface and a plurality of vias filled with metal. Said first surface having one of said metal layers attached to provide electrical ground potential, and having a plurality of electrically insulated openings for outside electrical contacts. An outermost insulating film protecting the exposed surface of said ground layer, said film having a plurality of openings filled with metal suitable for solder ball attachment. Said second surface having the other of said metal layers attached, portions thereof being configured as a plurality of electrical signal lines, further portions as a plurality of first electrical power lines, and further portions as a plurality of second electrical power lines, selected signal and power lines being in contact with said vias. Said signal lines being distributed relative to said first power lines such that the inductive coupling between them reaches at least a minimum value, providing high mutual inductances and minimized effective self-inductance. Said signal lines further being electromagnetically coupled to said ground metal such that cross talk between signal lines is minimized. And an outermost insulating film protecting the exposed surfaces of said signal and power lines, said film having a plurality of openings filled with metal suitable for contacting selected signal and power lines and chip solder bumps.
1. A high-performance, high I/O ball grid array substrate, designed for integrated circuit flip-chip assembly and having two patterned metal layers, comprising: an insulating layer having a first surface, a second surface and a plurality of vias filled with metal; said first surface having one of said metal layers attached to provide electrical ground potential, and having a plurality of electrically insulated openings for outside electrical contacts; an outermost insulating film protecting the exposed surface of said ground layer, said film having a plurality of openings filled with metal suitable for solder ball attachment; said second surface having the other of said metal layers attached, portions thereof being configured as a plurality of electrical signal lines, further portions as a plurality of first electrical power lines, and further portions as a plurality of second electrical power lines, selected signal and power lines being in contact with said vias; said signal lines being distributed relative to said first power lines such that the inductive coupling between them reaches at least a minimum value, providing high mutual inductances and minimized effective self-inductance; said signal lines further being electromagnetically coupled to said ground metal such that cross-talk between signal lines is minimized; and an outermost insulating film protecting the exposed surfaces of said signal and power lines, said film having a plurality of openings filled with metal suitable for contacting selected signal and power lines and chip solder bumps.
2. The substrate according to claim 1 wherein the number of said I/Os ranges from about 100 to about 600.
3. The substrate according to claim 1 wherein the thickness of said substrate is in the range from about 150 to 300 μm.
4. The substrate according to claim 1 wherein said signal lines have a width between about 25 to 60 μm and are spaced from an adjacent line by insulating material of about 20 to 50 μm width.
5. The substrate according to claim 1 wherein said first power lines have a width from about 200 to 500 μm.
6. The substrate according to claim 1 wherein said signal lines are positioned in a proximity of about 20 to 50 μm to said first power lines, thus providing strong electromagnetic coupling, high mutual inductance and minimized effective self-inductance.
7. The substrate according to claim 1 wherein said signal lines are positioned to provide strong electromagnetic coupling to power and ground lines and thus minimal coupling, or cross-talk, between said signal lines.
8. The substrate according to claim 1 wherein said patterned metal layers are selected from a group consisting of copper, brass, aluminum, silver, or alloys thereof, and have a thickness in the range from about 7 to 15 μm.
9. The substrate according to claim 1 wherein said insulating layer is made of organic material and is selected from a group consisting of polyimide, polymer strengthened by glass fibers, FR-4, FR-5, and BT resin; said insulating layer having a thickness between about 70 and 150 μm.
10. The substrate according to claim 1 wherein said vias are filled with copper, tungsten, or any other electrically conductive material.
11. The substrate according to claim 1 wherein said second power lines are structured as distributed areas having wide geometries for minimizing self-inductance and merging into a central area supporting said chip.
12. The substrate according to claim 1 wherein said outermost insulating films are glass-filled epoxies, polyimides, acrylics or other photo-imageable materials suitable as solder masks, and have a thickness between about 50 and 100 μm.
13. The substrate according to claim 1 wherein said openings for solder bump and solder ball attachments are made of copper including a flash of gold or palladium, or other wettable and solderable metals.
14. A high-performance, high I/O ball grid array package comprising: a substrate having two patterned metal layers, comprising: an insulating layer having a first surface, a second surface and a plurality of vias filled with metal; said first surface having one of said metal layers attached to provide electrical ground potential, and having a plurality of electrically insulated openings for outside electrical contacts; an outermost insulating film protecting the exposed surface of said ground layer, said film having a plurality of openings filled with metal suitable for solder ball attachment; said second surface having the other of said metal layers attached, portions thereof being configured as a plurality of electrical signal lines, further portions as a plurality of first electrical power lines, and further portions as a plurality of second electrical power lines, selected signal and power lines being in contact with said vias; said signal lines being distributed relative to said first power lines such that the inductive coupling between them reaches at least a minimum value, providing high mutual inductances and minimized effective self-inductance; said signal lines further being electromagnetically coupled to said ground metal such that cross-talk between signal lines is minimized; and an outermost insulating film protecting the exposed surfaces of said signal and power lines, said film having a plurality of openings filled with metal suitable for contacting selected signal and ground lines and chip solder bumps; an integrated circuit chip having an active surface including solder bumps, said solder bumps adhered to said plurality of openings in said outermost insulating film protecting said signal and power lines; and solder balls attached to said plurality of openings in said outermost insulating film protecting said ground layer.
15. The package according to claim 14 further comprising a polymeric encapsulant filling any gaps between said chip and said substrate, left void after said chip solder bumps are adhered to said plurality of openings in said outermost insulating film protecting said signal and power lines.
16. The package according to claim 15 wherein said polymeric encapsulant is a polymeric precursor made of an epoxy base material filled with silica and anhydrides, requiring thermal energy for curing to form a polymeric encapsulant.
17. The package according to claim 14 further comprising an encapsulation material surrounding said chip.
18. The package according to claim 17 wherein said encapsulation material is a polymeric material selected from a group consisting of epoxy-based molding compounds suitable for adhesion to said chip, and fluoro-dielectric compounds supporting high-speed and high-frequency package performance.
19. The package according to claim 17 further comprising an optional heat spreader positioned on the outer surface of said encapsulation material.
20. The package according to claim 14 wherein said chip solder bumps comprise attach materials selected from a group consisting of tin, lead/tin alloys, indium, indium/tin alloys, solder paste, and conductive adhesive compounds.
21. The package according to claim 14 wherein said solder balls comprise attach materials selected from a group consisting of tin/lead, tin/indium, tin/silver, tin/bismuth, solder paste, and conductive adhesive compounds.
22. The package according to claim 14 wherein the thickness of said package is in the range from about 250 to 800 [mu]m, excluding the thickness of the heat slug.
23. A packaged integrated circuit, comprising: a substrate having first and second opposing surfaces, said substrate having: a plurality of signal lines, a plurality of first power lines coupleable to a first power source, and a plurality of second power lines coupleable to a second power source, all on said second surface, at least one of said plurality of signal lines disposed between a pair of said plurality of first power lines, and said signal lines between said pair of said plurality of first power lines and said pair of said plurality of first power lines disposed between a pair of said second power lines; an integrated circuit chip mounted on said substrate; and wherein said signal lines are of a first width, said first power lines are of a second width different from said first, and said second power lines are of a third width different from said first and second widths; wherein said third width is wider than said second width, and said second width is wider than said first width.
24. The integrated circuit of claim 23, further comprising a ground plane on said first surface of said substrate.
25. A packaged integrated circuit, comprising: a substrate having first and second opposing surfaces, said substrate having thereon: a plurality of groups of lines, said plurality of groups of lines including groups of lines of at least three different widths disposed on said second surface of said substrate, said groups of lines arranged such that one or more lines in a first group of lines of a first width are disposed between lines of a second group of lines of a second width, and lines in said second group of lines of said second width are disposed between lines of a third group of lines of a third width; and an integrated circuit chip mounted on said substrate and coupled to at least some of said lines; and wherein said lines of said first width are signal lines, said lines of said second width are power lines coupled to a first voltage potential, and said lines of said third width are power lines coupled to a second voltage potential; wherein said third width is wider than said second width, and said second width is wider than said first width.
26. The integrated circuit of claim 25, further comprising a ground plane on said first surface of said substrate.
27. A packaged integrated circuit, comprising: a substrate having first and second opposing surfaces, said substrate having: a plurality of signal lines, a plurality of first power lines coupled to a first power source, and a plurality of second power lines coupled to a second power source, all on said second surface, at least one of said plurality of signal lines disposed between a pair of said plurality of first power lines, and said signal lines between said pair of said plurality of first power lines and said pair of said plurality of first power lines disposed between a pair of said second power lines; and an integrated circuit chip mounted on said substrate; and wherein said signal lines are of a first width, said first power lines are of a second width different from said first, and said second power lines are of a third width different from said first and second widths; wherein said third width is wider than said second width, and said second width is wider than said first width.
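The dimensional limitations recited in the claims above (the micrometer ranges of claims 4-6 and the width ordering of claims 23, 25 and 27) lend themselves to a simple consistency check. The following is a minimal, illustrative sketch in Python; all function and constant names are hypothetical, and the numeric bounds are simply those quoted in the claims:

```python
# Illustrative design-rule check for the claimed line geometries.
# All names are hypothetical; ranges are taken from claims 4-6 (micrometers).

SIGNAL_WIDTH = (25, 60)         # claim 4: signal line width
SIGNAL_SPACING = (20, 50)       # claim 4: insulating gap between signal lines
FIRST_POWER_WIDTH = (200, 500)  # claim 5: first power line width
SIGNAL_TO_POWER = (20, 50)      # claim 6: signal-to-first-power proximity

def in_range(value_um: float, bounds: tuple[float, float]) -> bool:
    lo, hi = bounds
    return lo <= value_um <= hi

def check_widths(signal_um: float, first_power_um: float,
                 second_power_um: float) -> list[str]:
    """Return a list of rule violations (empty if the layout conforms)."""
    errors = []
    if not in_range(signal_um, SIGNAL_WIDTH):
        errors.append(f"signal width {signal_um} um outside {SIGNAL_WIDTH}")
    if not in_range(first_power_um, FIRST_POWER_WIDTH):
        errors.append(f"first power width {first_power_um} um outside {FIRST_POWER_WIDTH}")
    # Claims 23/25/27: third width > second width > first width.
    if not (second_power_um > first_power_um > signal_um):
        errors.append("width ordering violated: need second power > first power > signal")
    return errors

print(check_widths(40, 300, 800))   # [] -> conforms
print(check_widths(40, 300, 250))   # ordering violation reported
```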
This application claims priority under 35 USC [section] 119 based upon Provisional Patent Application No. 60/147,596, filed Aug. 6, 1999.

FIELD OF THE INVENTION

The present invention is related in general to the field of semiconductor devices and processes, and more specifically to the structure, materials and fabrication of high-performance plastic ball grid array packages designed for flip-chip assembly.

DESCRIPTION OF THE RELATED ART

Ball Grid Array (BGA) packages have emerged as an excellent packaging solution for integrated circuit (IC) chips with high input/output (I/O) count. BGA packages use sturdy solder balls for surface mount connection to the "outside world" (typically plastic circuit boards, PCB) rather than sensitive package leads, as in Quad Flat Packs (QFP), Small Outline Packages (SOP), or Tape Carrier Packages (TCP). Some BGA advantages include ease of assembly, use of surface mount processes, low failure rate in PCB attach, economic use of board area, and robustness under environmental stress. The latter used to be true only for ceramic BGA packages, but has been validated in the last few years even for plastic BGAs. From the standpoint of high quality and reliability in PCB attach, BGA packages lend themselves much more readily to a six-sigma failure rate fabrication strategy than conventional devices with leads to be soldered.

A BGA package generally includes an IC chip, a multi-layer substrate, and a heat spreader. The chip is generally mounted on the heat spreader using a thermally conductive adhesive, such as an epoxy. The heat spreader provides a low-resistance thermal path to dissipate thermal energy, and is thus essential for improved thermal performance during device operation, necessary for consistently good electrical performance. The heat spreader is generally constructed of copper and may include gold plating, representing an expensive part of the package. Further, the heat spreader provides structural and mechanical support by acting as a stiffener, adding rigidity to the BGA package, and may thus be referred to as a heat spreader/stiffener.

One of the substrate layers includes a signal "plane" that provides various signal lines, which can be coupled, on one end, to a corresponding chip bond pad using a wire bond (or to a contact pad using a flip-chip solder connection). On the other end, the signal lines are coupled with a solder "ball" to other circuitry, generally through a PCB. These solder balls form the array referred to in a BGA. Additionally, a ground plane will generally be included on one of the substrate layers to serve as an active ground plane to improve overall device performance by lowering the inductance, providing controlled impedance, and reducing cross-talk. These features become more important as the BGA pin count increases.

In contrast to the advantages of BGA packages, prevailing BGA package solutions have lagged in performance characteristics such as power dissipation and the ability to maintain signal integrity in the high-speed operation necessary for devices such as high-speed digital signal processors (DSP) and mixed-signal products (MSP). Electrical performance requirements are driving the need to use multi-layer copper-laminated resin substrates (previously ceramic). As clock frequencies and current levels increase in semiconductor devices, packaging designs are challenged to provide acceptable signal transmission and stable power and ground supplies.
Providing stable power is usually achieved by using multiple planes in the package, properly coupled to one another and to the signal traces. In many devices, independent power sources are needed for core operation and for the output buffer supply, but with a common ground source.

As for higher speeds, flip-chip assembly rather than wire bonding has been introduced. Compared to wire bonding within the same package outline, flip-chip assembly offers greatly reduced IR drop to the silicon core circuits; significant reduction of power and ground inductances; moderate improvement of signal inductance; moderate difference in peak noise; and moderate reduction in pulse width degradation.

In order to satisfy all these electrical and thermal performance requirements, packages having up to eight metal layers have been introduced. The need for high numbers of layers, however, is contrary to the strong market emphasis on total semiconductor device package cost reduction. This emphasis is driving an ongoing search for simplifications in structure and materials, of course with the constraint that electrical, thermal and mechanical performance should be affected only minimally.

The complexity and cost of BGA packages are also influenced by the number of interconnections or vias that must be fabricated in the substrate layers to provide a path to connect each of the solder balls to either the ground plane, the power planes, or desired signal lines of the signal plane. Each via requires the formation of an electrically conductive layer on the internal walls of the via to ensure a complete electrical path. Generally, the metallization of the internal walls of each via increases the overall complexity. Consequently, multiple vias and multiple substrate layers result not only in higher BGA fabrication costs, but also in lower yields.

Analyzing the total package cost shows that the cost of the substrate dominates (usually more than 50%), followed by the heat slug (usually at least 30%). In order to reduce the substrate cost, however, the number of layers should be reduced. This approach, in turn, seems to greatly endanger the electrical and thermal package performance.

An urgent need has therefore arisen to break this vicious cycle and conceive a concept for a low-cost, yet high-performance BGA package structure. Preferably, this structure should be based on a fundamental design concept flexible enough to be applied to different semiconductor product families and a wide spectrum of design and assembly variations. It should not only meet high electrical and thermal performance requirements, but should also achieve improvements towards the goals of enhanced process yields and device reliability. Preferably, these innovations should be accomplished using the installed equipment base so that no investment in new manufacturing machines is needed.

SUMMARY OF THE INVENTION

According to the present invention, a high-performance, high input/output ball grid array substrate is provided, which is designed for integrated circuit flip-chip assembly and has two patterned metal layers and an intermediate insulating layer.

The insulating layer has a plurality of vias filled with metal, and one of the metal layers attached to each surface. Positioned between the two metal layers, the insulating layer has a thickness and material characteristics suitable for strong electromagnetic coupling between the signal lines and the first metal layer.
In this manner, a predetermined impedance to ground is provided, and cross-talk between signal lines is minimized.

The first metal layer provides the electrical ground potential and has a plurality of electrically insulated openings for outside electrical contacts.

The second metal layer has three portions: the first portion is configured as a plurality of signal lines; the second portion is configured as a plurality of first electrical power lines operable at a first potential; and the third portion is configured as a plurality of second electrical power lines operable at a second potential. The first power lines are configured so wide that their combined inductances approximate the inductance of a metal sheet having the size of the total substrate. The second power lines are configured to serve as distributed areas having wide geometries for minimizing self-inductance and merging into a central area supporting the IC chip.

It is an aspect of the invention that the signal lines are distributed relative to the first power lines such that the inductive coupling between them reaches at least a minimum value, providing high mutual inductances and close to zero effective self-inductance. Further, the signal lines are electromagnetically coupled to the ground metal such that cross-talk between signal lines is minimized.

Another aspect of the invention is to provide an outermost insulating film protecting the exposed surface of the ground layer. This insulating film has a plurality of openings filled with metal suitable for solder ball attachment.

Another aspect of the invention is to provide another outermost insulating film protecting the exposed surfaces of the signal and power lines. This insulating film has a plurality of openings filled with metal suitable for contacting selected signal and ground lines and chip solder bumps.

Another aspect of the invention is to provide modeling guidelines for designing the substrate structures and materials such that they are flexible enough to be applied to different high-performance semiconductor device families and a wide spectrum of high-speed, high-power design and assembly variations.

Another aspect of the invention is to utilize existing semiconductor fabrication processes and to reach the substrate and device goals without the cost of equipment changes and new capital investment, by using the installed fabrication equipment.

Another aspect of the invention is to reduce the thickness of the BGA substrate substantially so that the BGA device can readily be employed in a variety of new products requiring thin semiconductor components.

Another aspect of the invention is to improve the inherent thermal dissipation to a degree that the use of a heat slug is no longer mandatory to achieve the required thermal characteristics.

These aspects have been achieved by the computer-implemented method for modeling a high-performance, high I/O ball grid array substrate, and by a method for fabricating this substrate for integrated circuit flip-chip assembly, suitable for mass production.

The technical advances represented by the invention, as well as the aspects thereof, will become apparent from the following description of the preferred embodiments of the invention, when considered in conjunction with the accompanying drawings and the novel features set forth in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic and simplified cross section of the Ball Grid Array device having a substrate according to the invention.
FIG. 2 is a simplified perspective view of the first and the second metal layers.

FIG. 3 is a simplified perspective view of the first metal layer as viewed from the bottom.

FIG. 4 is a simplified top view of a portion of the second metal layer, showing the structure of the signal lines.

FIG. 5 is a simplified top view of a portion of the second metal layer, showing the structure of the first power lines.

FIG. 6 is a simplified top view of a portion of the second metal layer, showing the structure of the second power lines.

FIG. 7 is a simplified top view of the second metal layer showing the combined structures of signal lines, first power lines, and second power lines.

FIG. 8 is a flowchart illustrating an exemplary computer-implemented method for electrically modeling the structure of the metal and power lines according to the teachings of the present invention.

FIG. 9 is a flowchart illustrating an exemplary method for forming a ball grid array package substrate according to the teachings of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 1 is a simplified and schematic cross-sectional view of a portion of the high-performance, high input/output (I/O) Ball Grid Array (BGA) package of the invention, generally designated 100. Using solder bumps 102 in flip-chip technology, the active surface 101a of the integrated circuit chip 101 is attached to openings in the outermost insulating film 111 of substrate 110, facing the active chip surface 101a. Chip 101 is commonly made of silicon and has a thickness typically in the range of about 200 to 375 [mu]m. The number of I/O's typically is in the range from about 100 to 600; approximately one half of these I/O's serve signal lines, while the other half is dedicated to power and ground potentials.

The solder bumps 102 connecting the chip I/O's to the substrate 110 are usually small in diameter, typically about 100 to 120 [mu]m with a range of ±10 [mu]m, and comprise attach materials selected from a group consisting of tin, lead/tin alloys, indium, indium/tin alloys, solder paste, and conductive adhesive compounds. Following the flip-chip attachment, any gaps between chip 101 and substrate 110, and also between the solder bumps 102, are filled with a polymeric encapsulant 103. This encapsulant typically is a polymeric precursor made of an epoxy base material filled with silica and anhydrides, requiring thermal energy for curing to form a polymeric encapsulant.

The encapsulation material 104, surrounding the chip 101 after flip-chip attachment, serves to protect the mounted chip. Commonly, it is a polymeric material selected from a group consisting of epoxy-based molding compounds suitable for adhesion to the chip, and fluoro-dielectric compounds supporting high-speed and high-frequency package performance. For molding compounds, standard transfer molding processes are the preferred method of encapsulation in mass fabrication. Over the passive surface 101b of the chip, the molded material 104a may have a thickness typically in the range from 300 to 500 [mu]m, and between the substrate and the heat slug from about 500 to 800 [mu]m.

The heat spreader 105, positioned on the outer surface of the encapsulation material 104, is optional. Its thickness is typically in the range from about 150 to 300 [mu]m. It significantly enhances heat spreading and heat dissipation, and thus the overall thermal performance of the device, but it is usually made of copper and is thus a substantial cost contributor.
However, based on the outstanding thermal characteristics of the BGA substrate of the present invention, the desired thermal device performance can be achieved even without an additional heat spreader.

Solder balls 106 are attached to the plurality of openings in the outermost insulating film 112 of substrate 110. As defined herein, the term solder "balls" does not imply that the solder contacts are necessarily spherical. They may have various forms, such as semispherical, half-dome, truncated cone, or a general bump shape. The exact shape is a function of the deposition technique (such as evaporation, plating, or prefabricated units), the reflow technique (such as infrared or radiant heat), and the material composition. The solder balls usually have a diameter in the range from about 0.1 to 0.4 mm. Several methods are available to achieve consistency of geometrical shape by controlling the amount of material and the uniformity of the reflow temperature. The solder balls 106 comprise attach materials selected from a group consisting of tin/lead, tin/indium, tin/silver, tin/bismuth, solder paste, and conductive adhesive compounds.

The two outermost insulating films 111 and 112 of the substrate serve as protection for the substrate metal patterns and as solder masks. The films preferably are glass-filled epoxies, polyimides, acrylics or other photo-imageable materials suitable as solder masks, in the thickness range from about 50 to 100 [mu]m. The openings for solder bump and solder ball attachments are made of copper including a flash of gold, palladium or platinum, or other wettable and solderable metals.

As FIG. 1 schematically shows, the substrate 110 consists of an insulating layer 113 having a first surface 113a, a second surface 113b, and a plurality of vias 114 filled with metal. The preferred metal is copper, but tungsten or any other electrically conductive material is suitable. The insulating layer 113 preferably has a thickness in the range from about 70 to 150 [mu]m and is made of organic material selected from a group consisting of polyimide, polymer strengthened by glass fibers, FR-4, FR-5, and BT resin. The dielectric constant is preferably between 4 and 5.

Attached to the first substrate surface 113a is a metal layer 115, configured to provide electrical ground potential. Attached to the second surface 113b is a metal layer 116, configured to provide a plurality of electrical signal lines, further a plurality of first electrical power lines, and further a plurality of second electrical power lines. The total thickness of the substrate 110 is preferably in the range from about 150 to 300 [mu]m.

The two metal layers 115 and 116 have a thickness preferably in the range of about 7 to 15 [mu]m, and are made, for example, of copper, brass, aluminum, silver, or alloys thereof. Metal layer 115, herein called the "first metal layer", is designed to provide the electrical ground potential. It has a plurality of openings, each having an electrically insulated ring and metal in the core for outside electrical contacts. This core metal is solderable and connects to the solder balls 106.

Metal layer 116, herein called the "second metal layer", is designed so that a portion is configured as a plurality of electrical signal lines, a further portion as a plurality of first electrical power lines, and a final portion as a plurality of second electrical power lines. These portions are illustrated in more detail in FIGS. 4 to 7.

The relation and position of the two metal layers are shown in perspective view in FIGS. 2 and 3.
Layer 210 is the first metal layer, providing the electrical ground potential. The plurality of openings is designated 211. When layer 210 is viewed perspectively from the underside, as illustrated in FIG. 3, a plurality of solder balls 311 is attached to the plurality of openings. Solder balls 311 establish the connections of the BGA to the outside world.

Referring now to FIG. 2, layer 220 is the second metal layer, providing the plurality of signal lines 221, first power lines 222 and second power lines 223. In the center of the second metal layer 220 is the flip-chip attach area 224, with the larger portion of the metal belonging to the second power lines. More detail is displayed in FIGS. 4 to 7.

FIG. 4 shows one quadrant, generally designated 400, of the signal line portion of the second metal layer 116. The total signal line portion has three additional quadrants of similar configuration. An individual signal line 401 has a width between 25 and 60 [mu]m. One signal line is spaced from the adjacent signal line by insulating material of a width from about 20 to 50 [mu]m. As FIG. 4 shows, the signal lines terminate at inner endpoints 402 close to the periphery of the chip-to-be-attached, preferably in two staggered rows 402a and 402b of endpoints. The outer endpoints 403 fan out widely in order to serve a distributed array of solder ball connections.

FIG. 5 shows one quadrant, generally designated 500, of the portion of the first power lines of the second metal layer 116. The total portion of the first power lines has three additional quadrants of similar configuration. An individual power line 501 has a width from about 200 to 500 [mu]m. It is an important aspect of the present invention that the first power lines are configured so wide that their combined inductances approximate the inductance of a metal sheet which would have the size of the total substrate. As FIG. 5 shows, the first power lines terminate at inner endpoints 502 close to the periphery of the chip-to-be-attached. The outer endpoints 503 fan out widely in order to serve a distributed array of solder ball connections. By way of example, the first power lines may be at an applied potential of 3.0 V.

It is further an important aspect of the present invention that the signal lines of FIG. 4 are positioned in a proximity of about 20 to 50 [mu]m to the first power lines of FIG. 5, thus providing strong electromagnetic coupling, high mutual inductance and minimized effective self-inductance.

It is further an important aspect of the present invention that the signal lines are positioned to provide strong electromagnetic coupling to power and ground lines and thus minimal coupling, or cross-talk, between the signal lines.

It is further an important aspect of the present invention that the signal lines are distributed relative to the first power lines such that the inductive coupling between them reaches at least a minimum value, providing high mutual inductances and minimized effective self-inductance.

FIG. 6 shows all four quadrants, generally designated 600, of the portion of the second power lines of the second metal layer. These second power lines are structured as distributed areas 601 having wide geometries for minimizing self-inductance; these areas may, for instance, utilize the four corners of the package. The second power lines merge into a central area 602 supporting a large number of chip solder bumps. By way of example, the second power lines may be at a 1.8 V applied potential.
In FIG. 7, the three portions of the second metal layer, detailed in FIGS. 4, 5 and 6, are combined and displayed for one quadrant in order to illustrate the complex interrelated positioning of the signal lines, first power lines, and second power lines according to the invention. The three remaining quadrants of these metal structures, not shown in FIG. 7, are analogous to the one quadrant shown relative to the combination of signal and first power lines.

FIG. 8 is a flowchart illustrating an exemplary computer-implemented method 800 for modeling a high-performance, high I/O ball grid array substrate for IC flip-chip assembly according to the teachings of the present invention. Method 800 begins at step 802 by collecting the inputs of a first and a second metal layer, all of substantially equal areas. The first metal layer provides electrical ground potential.

The majority of the modeling concerns the three portions of the second metal layer. The method proceeds next to step 804, where the I/O count of the signal lines of the second metal layer is determined. With the I/O input at step 806, the widths and layout of the signal lines are selected. Based on this selection, the resulting impedance levels of the signal lines are modeled at step 808. Further, the signal lines are electromagnetically coupled to the ground potential applied to the first metal layer; using this coupling, the cross-talk between the signal lines is modeled with the goal of minimizing the cross-talk.

At step 810, details of the first power lines (operable at a first electrical potential, for instance 3.0 V) of the second metal layer are added to the modeling. The plurality of the signal lines is routed in conjunction with the plurality of the first power lines with the goal of providing at least a minimum inductive coupling between signal and power lines. This goal strives to obtain high mutual inductance and to minimize effective self-inductance. If the result of this modeling step is not satisfactory, the widths of the signal lines are modified in step 809. They are fed back as improved inputs to step 808 in order to repeat the impedance modeling, and then to step 810 in order to repeat the signal and power line routing and distribution.

After completing the relative positioning of signal and first power lines, achieving high mutual inductances and minimized effective self-inductance, the widths of the first power lines are maximized in step 812. The goal is to configure the first power lines so wide that their combined inductances approximate the inductance of a metal sheet having the area of the total substrate.

At steps 814 and 816, the coupling between signal lines and first power lines is further modeled, especially by simulating electrical noise. If the relative line distribution does not exhibit enough insensitivity to, or suppression of, noise, the first power lines are rerouted relative to the signal lines to reduce noise (step 815). The rerouted line distribution is fed back to step 812 as a revised input for maximizing the widths of the first power lines.

At step 818, the plurality of second power lines (operable at a second electrical potential, for instance 1.8 V) of the second metal layer is added to the modeling. The second power lines are modeled to serve as distributed areas having wide geometries so that self-inductance is minimized. The second power lines merge into a central area, which serves to support the IC chip.
The maximized widths of the second power lines are used as inputs for step 820, the modeling and simulation of the total package.

Additional inputs for step 820 are the structure, thickness, and material characteristics of the insulating layer positioned between the first and second metal layers. The goal of the modeling is to provide strong electromagnetic coupling between the signal lines and the first metal layer in order to reach a predetermined impedance to ground (for instance, 50 ohms) and to minimize cross-talk between signal lines.

If these goals are not achieved satisfactorily, the layout of the signal lines and first and second power lines is modified in step 822, and the new layout is fed back as improved input to the modeling of the total package in step 820. The final output of the electrical modeling is displayed in step 824, which ends method 800 (a brief code sketch of this iterative flow is given after the description below).

FIG. 9 is a flowchart illustrating an exemplary method 900 for fabricating a high-performance, high I/O ball grid array substrate for IC flip-chip assembly, having two patterned metal layers and one intermediate insulating layer, according to the teachings of the present invention. Method 900 begins at step 902 and proceeds next to step 904, where an insulating layer is provided that has a first surface and a second surface. Suitable materials include polyimides, epoxy glass (FR-4, FR-5, or BT), or other flexible electrically non-conductive materials, with a thickness usually in the range of 70 to 150 [mu]m.

At step 906, the insulating layer of the substrate is patterned to form a plurality of via holes using mechanical drilling or a laser beam technique. At step 908, the via holes are filled with metal, such as copper, or other electrically conductive material, creating a plurality of electrically conductive vias through the insulating layer of the substrate.

At step 910, one of the two metal layers (preferably copper, thickness between 7 and 15 [mu]m) is attached to the first surface of the insulating layer (typically using a roll-on process). This metal layer is intended to provide electrical ground potential in the BGA. The patterning of this metal layer, using standard photo-lithographic techniques, to form a plurality of electrically insulated openings intended for outside electrical contacts, such as solder balls, is performed at step 912.

At step 914, the other of the two metal layers (preferably copper, thickness between 7 and 15 [mu]m) is attached to the second surface of the insulating layer (typically using a roll-on process). This metal layer is intended to provide three functions in three patterned portions. The patterning of this metal layer in step 916, using standard photo-lithographic techniques, creates the plurality of signal lines; the plurality of first power lines, providing a specific electrical potential; and the plurality of second power lines, providing another specific electrical potential. Selected signal and power lines are in electrical contact with the vias in the insulating layer.

At step 918, insulating protective films are formed over the exposed surface of the ground layer and over the exposed surfaces of the signal and power lines. At step 920, pluralities of openings are formed in both insulating films; these openings are then filled with solderable metal (for instance, copper with a gold flash), creating attachment sites for outside solder balls used in board attach, and for chip solder bumps used in flip-chip assembly.
The fabrication of the BGA substrate is thus completed.

In order to finish the fabrication of the BGA package, method 900 continues at step 922 by attaching an IC chip to the substrate. The chip has an active surface including solder bumps. These bumps are adhered to the plurality of metal-filled openings in the outermost insulating film protecting the signal and power lines. The solder reflow typically involves heating to the temperature of the eutectic tin/lead mixture.

The process flow continues at step 924 or, if needed, at step 923. At step 923, any gaps between the substrate and the chip, left void after the chip solder bumps have been adhered to the plurality of openings in the outermost insulating film protecting the signal and power lines, are filled. As filling material, a polymeric encapsulant made of an epoxy-based precursor filled with silica and anhydrides, requiring elevated temperatures for curing, is commonly used.

At step 924, the chip (more precisely, the passive surface of the chip and its four edge sides) is surrounded with a polymeric encapsulation compound; preferably, a transfer molding process is used.

Due to the short thermal paths for heat dissipation, the thermal characteristics of the BGA of the invention are excellent. If further improvement is required, a heat slug can be attached in step 925; it is preferably positioned on the outer surface of the cured encapsulation material.

At step 926, solder balls are attached to the plurality of metal-filled openings in the outermost insulating film protecting the ground layer. This process provides external electrical and mechanical connections to the BGA package. Generally, the solder balls will be arrayed in a rectangular pattern around the periphery of the BGA package; a multitude of balls may also be positioned in the center of the package. Method 900 ends at step 928.

While this invention has been described in reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. As an example, the material of the semiconductor chip may comprise silicon, silicon germanium, gallium arsenide, or any other semiconductor material used in manufacturing. As another example, the BGA may have an encapsulation made by overmolding or another technique, or may have no encapsulation of the flip-soldered chip at all. As another example, instead of encapsulation using molding compounds, a thermally conductive lid may be attached over the flip-soldered chip for physical protection and thermal enhancement. As another example, the two metal layers may be attached to the surfaces of the insulating layer concurrently and then patterned individually, rather than being attached and patterned sequentially. It is therefore intended that the appended claims encompass any such modifications or embodiments.
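The iterative feedback structure of the FIG. 8 modeling method (steps 808-824, referenced above) can be illustrated in code. The following is a minimal, runnable sketch: the model_impedance and model_coupling functions are trivial stand-in formulas invented for illustration, not the electromagnetic models the description contemplates, and all names and numeric targets are hypothetical.

```python
# Minimal sketch of the FIG. 8 feedback loop (steps 808-824). The "models"
# are hypothetical stand-ins, intended only to show the iterate-and-feed-back
# structure, not real electromagnetic modeling.

def model_impedance(signal_width_um: float) -> float:
    # Stand-in: impedance falls as the signal line widens (not a real formula).
    return 50.0 * (40.0 / signal_width_um)

def model_coupling(signal_width_um: float, gap_um: float) -> float:
    # Stand-in: mutual coupling improves as lines widen and the gap shrinks.
    return signal_width_um / (signal_width_um + gap_um)

def design_loop(target_z: float = 50.0, tol: float = 2.0, min_coupling: float = 0.5):
    signal_width, gap = 25.0, 50.0                      # initial selections (step 806)
    for iteration in range(100):
        z = model_impedance(signal_width)               # step 808: impedance model
        k = model_coupling(signal_width, gap)           # steps 810/814: coupling model
        if abs(z - target_z) <= tol and k >= min_coupling:
            return {"iterations": iteration, "width_um": signal_width,
                    "gap_um": gap, "impedance": z, "coupling": k}  # step 824
        if abs(z - target_z) > tol:                     # step 809: modify widths
            signal_width += 1.0 if z > target_z else -1.0
        if k < min_coupling and gap > 20.0:             # step 815: reroute closer
            gap -= 1.0
    raise RuntimeError("did not converge (step 822 would modify the layout)")

print(design_loop())
```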
PROBLEM TO BE SOLVED: To provide adequate control and/or synchronization capabilities to applications.SOLUTION: Provided are techniques for a processor temperature control interface. In one embodiment, a processor includes a bidirectional interface and output logic to assert a first signal indicating an internal high temperature on the bidirectional interface. Throttling logic throttles operations of the processor either if the internal high temperature is indicated or if an external signal is received on the bidirectional interface.
1. A processor comprising: a bidirectional interface; output logic to assert, on the bidirectional interface, a first signal indicating an internal high temperature to logic external to the processor, and to provide the first signal to suppression logic; suppression logic coupled to the bidirectional interface, wherein the suppression logic suppresses operation of the processor in any case where the first signal indicates the internal high temperature or an external signal from the logic external to the processor is received on the bidirectional interface; and one or more signal lines coupling the bidirectional interface to the logic external to the processor, the one or more signal lines being controlled to be bidirectional in a mode in which the suppression logic responds to both the external signal and the first signal, and being controlled to be unidirectional in another mode in which the suppression logic suppresses operation of the processor in response to the first signal only.
2. The processor of claim 1, wherein the logic external to the processor manages power consumption of a plurality of processors.
3. The processor of claim 1, wherein the bidirectional interface is a single interface contact.
4. The processor of claim 1, further comprising: a first path for the first signal; a second path for the external signal; and selection logic to select either the first path, in which the external signal is ignored, in a unidirectional mode, or the second path, in which the external signal is determined, in a bidirectional mode.
5. The processor of claim 4, wherein the bidirectional interface comprises a first interface node and a second interface node, the second interface node being an input, and wherein the selection logic further selects a third path in the bidirectional mode.
6. The processor of claim 5, wherein the third path comprises an internal signal path for the first signal having a first delay and an external signal path for the external signal having a second delay, the first delay corresponding to the second delay plus an external delay.
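The mode and path selection recited in claims 1 and 4-6 above (and elaborated for multiplexer 330 in the description that follows) amounts to a three-way behavioral switch. A minimal sketch, with hypothetical names and active-high levels for readability (the actual PROCHOT#/FORCEPH# pins are active-low):

```python
# Behavioral sketch of the claimed path selection: output-only ("w"),
# bidirectional single-pin ("b"), and dual-pin ("f") modes. Names are
# hypothetical; the real PROCHOT#/FORCEPH# signals are active-low pins.

from enum import Enum

class Mode(Enum):
    OUTPUT_ONLY = "w"     # unidirectional: external signal ignored
    BIDIRECTIONAL = "b"   # single pin carries both directions
    DUAL_PIN = "f"        # PROCHOT# out, FORCEPH# in

def should_suppress(mode: Mode, internal_hot: bool,
                    prochot_in: bool, forceph_in: bool) -> bool:
    """Decide whether the suppression logic throttles, per the selected path."""
    if mode is Mode.OUTPUT_ONLY:
        return internal_hot                       # first path: external ignored
    if mode is Mode.BIDIRECTIONAL:
        return internal_hot or prochot_in         # second path: same pin, both ways
    return internal_hot or forceph_in             # third path: dedicated input

def prochot_out(internal_hot: bool) -> bool:
    # The overheat indication is always driven out, regardless of mode.
    return internal_hot

assert should_suppress(Mode.OUTPUT_ONLY, False, True, True) is False
assert should_suppress(Mode.BIDIRECTIONAL, False, True, False) is True
assert should_suppress(Mode.DUAL_PIN, False, False, True) is True
```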
Processor Temperature Control Interface

The present disclosure is in the field of electronic components. In particular, it pertains to a temperature control interface for an electronic component such as a processor.

Controlling the temperature of electronic components is a continuing challenge as components continue to shrink yet consume more power. Microprocessors use sophisticated techniques to manage power and to suppress their own operation when the temperature reaches a specific reference.

For example, one prior-art processor includes a stop clock pin that allows the system to stop the processor's clock for a variety of reasons. One known use of the pin is to provide a periodic waveform on the stop clock pin, whereby the processor's clock is periodically stopped and restarted (see, e.g., US Pat. No. 5,560,001). Such clock suppression effectively reduces the operating rate of the processor, thereby generally reducing power consumption and temperature.

Furthermore, prior-art processors may themselves have temperature sensors to perform internally initiated suppression. When internally initiated suppression is used for temperature reasons, an external signal may be asserted to alert the system (see, for example, the Pentium 4 processor PROCHOT# output signal).

However, such an arrangement may not provide adequate control and/or synchronization capabilities for some applications.

The processor of the present invention comprises a bidirectional interface, output logic for asserting a first signal indicative of an internal high temperature on the bidirectional interface, and suppression logic coupled to the bidirectional interface. The suppression logic suppresses the operation of the processor if the first signal indicates the internal high temperature or if an external signal is received on the bidirectional interface. Whether the internal high temperature has reached an unacceptable level is indicated by the signal transmitted on the bidirectional interface, while whether a high temperature outside the processor has reached an unacceptable level is indicated by a signal from a second processor.

FIG. 1 illustrates one embodiment of a system having a bidirectional processor hot interface.

FIG. 2 is a flow diagram illustrating the operation of the system shown in FIG. 1 according to one embodiment.

FIG. 3 illustrates one embodiment of a multiprocessor system using a processor hot interface.

FIG. 4 is a flow diagram illustrating the operation of the system shown in FIG. 3 according to one embodiment.

The invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.

The following description presents techniques for a processor temperature control interface. In the following description, many specific details are set forth, such as logic implementations, clocks, signal names, types and interrelationships of system elements, and logic partitioning/integration choices, in order to provide a better understanding of the present invention. However, it will be appreciated by those skilled in the art that the present invention may be practiced without such specific details.
In addition, control structures and gate-level circuits are not shown in detail in order not to obscure the present invention.

In one embodiment, a bidirectional processor hot (PROCHOT#) interface is provided that allows both system monitoring and system control of processor temperature conditions. Such a bidirectional interface may be useful, for example, in desktop and mobile systems, where a limited amount of control and monitoring capability is balanced against additional pins. In another embodiment, a two-pin PROCHOT# and Forced Processor Hot (FORCEPH#) interface allows the system to monitor and control the suppression mechanism.

The "processor" may be formed as a single integrated circuit in one embodiment. In other embodiments, multiple integrated circuits may together form a processor. In still other embodiments, hardware and software routines (e.g., binary translation routines) may together constitute the processor. Many different types of integrated circuits and other electronic components could benefit from the use of the temperature control techniques described herein. For example, processor 100 may be a general-purpose processor (e.g., a microprocessor) or may be a special-purpose processor or device. For example, digital signal processors, graphics processors, network processors, and other types of special-purpose components that may be used in a system may benefit from such controllable suppression.

FIG. 1 illustrates one embodiment of a processor 100 having a bidirectional processor hot interface (PROCHOT# interface node 117). The interface may be a pin, a ball, another type of connector, or a set thereof, providing at least one interface node that interfaces to other components. Processor 100 includes temperature monitoring logic 110 that monitors the temperature of the processor itself. Various known or otherwise available temperature monitoring techniques may be used. For example, built-in circuits that monitor temperature may be used. Alternatively, external sensors or power-consumption estimation techniques (e.g., activity measuring/monitoring devices, current monitoring devices, etc.) may be used. Temperature monitor 110 is coupled by signal line 112, carrying the overheat signal, to output driver 115, which drives interface node 117. The overheat signal is also routed to suppression logic 120 through multiplexer 130. The multiplexer is controlled by a fuse 140 which, in the illustrated embodiment, selects between unidirectional and bidirectional modes of operation.

In the example of FIG. 1, system logic 150 may interface with processor 100 and may drive the PROCHOT# signal through driver 155 or receive the PROCHOT# signal through input buffer 160. The system logic may itself include several temperature sensors that determine when the entire system has reached unacceptable temperature levels, and may drive the PROCHOT# signal accordingly.

The operation of one embodiment of the system of FIG. 1 is illustrated in FIG. 2. At block 200, the flow branches according to the operating mode. In one embodiment, a semiconductor fuse may be blown to select the operating mode. Other selection techniques, such as configuration registers, may also be used to select the operating mode. In the output-only mode, fuse 140 causes multiplexer 130 to select the overheat signal as the input to suppression logic 120.
Thus, as shown in block 205, the external state of the PROCHOT# signal is not evaluated; only the output function of PROCHOT# is provided.

In the bidirectional single-pin mode, both system logic 150 and processor 100 may drive PROCHOT# to control suppression. As shown in blocks 215 and 225, the processor 100 monitors its temperature and monitors the PROCHOT# interface. If the temperature does not exceed the selected reference, the processor continues to monitor the temperature, as indicated at block 220. Similarly, if the PROCHOT# signal is not asserted, then the processor 100 continues to monitor the interface, as shown in block 230. If the PROCHOT# signal is asserted, or if the temperature exceeds the selected reference, then, as indicated at block 240, the suppression logic 120 suppresses processor operation.

The suppression performed by the suppression logic may use any suitable known or otherwise available suppression technique. For example, the clock to the device can be periodically stopped. Alternatively, processing throughput can be reduced by limiting the throughput at certain stages of the pipeline. Alternatively, the clock frequency may be changed. These or other techniques that effectively reduce the amount of processing performed by the processor may be used by the suppression logic.

In the third mode, as shown in block 210, a bidirectional dual-pin PROCHOT# implementation may be used. FIGS. 3 and 4 provide further details of one embodiment using a dual-pin implementation. The dual-pin implementation may allow both monitoring of the temperature measurement device internal to the processor and assertion of suppression instructions. With a single pin, asserting the suppression instruction masks assertions on the same pin by the processor. In the example of FIG. 3, two processors are shown for purposes of illustration, but additional processors may be added. Both processor 300 and processor 350 have FORCEPH# and PROCHOT# pins. Signal lines 364 and 362 couple the FORCEPH# signal driven by system logic to processors 300 and 350, respectively, and signal lines 302 and 352 carry the PROCHOT# signals driven by processors 300 and 350, respectively, to the system logic.

Processor 300 includes a monitoring device 310 that detects when processor 300 has overheated (in one embodiment, when excessive power is consumed). Each numbered block represents a delay element such as a latch. The driver 305 is coupled to receive the overheat signal from the monitoring device 310 and to drive the PROCHOT# signal on signal line 302. The first path to multiplexer 330 presents the overheat signal at the "w" input of multiplexer 330 through delay blocks 313-1 and 313-2. The second path to multiplexer 330 passes from delay block 313-1 through output driver 305 (so that it also captures any external assertion on signal line 302), through inversion driver 307, and through delay blocks 314-2 and 314-3 to present the overheat signal at the "b" input of multiplexer 330.

The third path to the multiplexer includes inputs both from signal line 302 (PROCHOT#) and from signal line 364, which is driven by system logic 360. Signal line 364 may be a forced processor hot (FORCEPH#) signal line that allows external considerations to be used to determine when to suppress operation. In one embodiment, it may be desirable for the system to initiate suppression of multiple processors simultaneously (i.e., during the same clock period of the external bus clock), even if both processors are not suppressing themselves at the same time.
In this embodiment, it is desirable to match the delay of the overheat signal to the suppression logic 320 to the expected delay through the path of the system logic. For example, in the embodiment of FIG. 3, the overheat signal passes through delay block 313-1 and output driver 305; enters system logic 370 through delay block 316-2, combinational logic 363, and delay block 316-3; passes through delay block 316-4, combinational logic 371, and delay logic 316-5; is returned to system logic 360; and enters the second processor 350 through delay block 316-6, combinational logic 367, and delay block 316-7. Assuming that the second processor has the same logic as shown for processor 300, the path continues through the elements corresponding to input buffer 309, the two delay blocks 316-8 and 316-9, OR gate 311, and the "f" input of multiplexer 330.

Similarly, the path of the overheat signal internal to processor 300 includes nine delay blocks and an OR gate 311. In the dual-pin mode, internally, the overheat signal passes through delay blocks 313-1 and 313-2 and then through delay blocks 315-1 to 315-9 to OR gate 311. When system logic 360 and 370 assert FORCEPH# on signal line 364, or when monitoring device 310 indicates that suppression is to be performed, the OR gate provides an indication to multiplexer 330 that suppression is to be performed. System logic components 360 and 370 may be local (360) or global (370) application-specific integrated circuits (ASICs). However, whether any or all of the logic is separate or integrated is not important to the disclosed technique. Logic may be included in the processor itself, in other system elements such as a bus bridge, in an ASIC, or the like. Furthermore, the absolute number or length of the various delays is not important. However, providing a delay match is desirable in certain embodiments.

In the embodiment of FIG. 3, two control inputs (fuseBiDirProcHotEn and fuseMPdecode) to the multiplexer control which mode is selected. When the fuse fuseMPdecode indicates that a multiprocessor (dual-pin) PROCHOT#/FORCEPH# implementation is desired, the path "f" to the multiplexer is selected. When fuse fuseBiDirProcHotEn indicates that only the bidirectional mode is desired, the input "b" to the multiplexer is selected. If the fuses indicate that neither the bidirectional mode nor the multiprocessor (dual-pin) mode is desired, then the output-only mode is used and the path "w" to the multiplexer is selected.

FIG. 4 illustrates the operation of a multiprocessor system in which the dual-pin mode (e.g., path "f" of multiplexer 330 in the embodiment of FIG. 3) is selected. At block 400, a high temperature is detected (e.g., by the monitoring device 310). At block 410, the PROCHOT# signal is asserted to the system logic. As indicated at block 420, the internal overheat signal is delayed. In the embodiment of FIG. 3, the paths through delays 313-1 and 313-2 and 315-3 through 315-9 provide the delays. As indicated at block 425, the asserted PROCHOT# signal also propagates through the system logic and is delayed, resulting in the generation of the FORCEPH# signal to the other processor(s) of the system. For example, in the embodiment of FIG. 3, the FORCEPH# signal is asserted to processor 350 on signal line 362.

Because the delay in the first processor is designed to match the delay in the path through the system logic, in addition to any internal delays, the processors start to suppress operation synchronously, as shown in blocks 430 and 435.
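The delay matching just described (padding the internal overheat path until it equals the round trip through the system logic and FORCEPH#) reduces to a simple budget calculation. A minimal sketch follows; the cycle counts are hypothetical and are not taken from the FIG. 3 embodiment:

```python
# Sketch of the delay-matching idea: pad the internal overheat path so the
# local processor and the remote processor (reached via system logic and
# FORCEPH#) begin suppression in the same bus clock. All cycle counts are
# hypothetical placeholders.

def external_path_delay(out_drv: int, system_logic_hops: list[int], in_buf: int) -> int:
    """Cycles from the overheat event to the remote suppression logic."""
    return out_drv + sum(system_logic_hops) + in_buf

def required_internal_padding(external_delay: int, existing_internal: int) -> int:
    """Extra delay elements needed on the internal path to match."""
    pad = external_delay - existing_internal
    if pad < 0:
        raise ValueError("internal path already longer than external path")
    return pad

ext = external_path_delay(out_drv=1, system_logic_hops=[2, 3, 2], in_buf=1)  # 9 cycles
pad = required_internal_padding(ext, existing_internal=2)                    # e.g. 313-1, 313-2
print(f"external path: {ext} cycles; add {pad} internal delay blocks")
```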
In some systems, it is desirable to keep such suppression synchronized so that the processors operate at a matched rate, thereby keeping their progress and their temperature/power relationships roughly equal. Thus, a processor may be forced into the suppression state even if that processor would not otherwise enter the suppression state.

Thus, a processor temperature control interface technique is disclosed. Although specific illustrative embodiments are described and illustrated in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of, and not restrictive on, the broad invention, that the invention is not limited to the specific structures or configurations shown and described, and that various other modifications will occur to those skilled in the art upon reviewing this disclosure.

Further, the following items are disclosed regarding the above embodiments.

(1) A processor comprising: a bidirectional interface; output logic for asserting a first signal indicative of an internal high temperature on the bidirectional interface; and suppression logic coupled to the bidirectional interface, wherein the suppression logic suppresses operation of the processor either when the first signal indicates the internal high temperature or when an external signal is received at the bidirectional interface.

(2) The processor according to (1), wherein the bidirectional interface is a single interface node.

(3) The processor according to (1), further comprising: a first path for the first signal; a second path for the external signal; and selection logic to select either the first path, in which the external signal is ignored, in a unidirectional mode, or the second path, in which the external signal is determined, in a single bidirectional interface mode.

(4) The processor according to (3), wherein the bidirectional interface includes a first interface node and a second interface node, the second interface node being an input, and wherein the selection logic further selects a third path in a bidirectional dual-interface mode.

(5) The processor according to (4), wherein the third path includes an internal signal path for the first signal having a first delay and an external signal path for the external signal having a second delay, and wherein the first delay matches the second delay plus an external delay.

(6) The processor according to (1), wherein the bidirectional interface has, when a dual-pin mode is enabled, a first interface node that outputs the first signal and a second interface node that receives an external signal, and, when a bidirectional mode is enabled, a single bidirectional interface node.

(7) The processor according to (6), further comprising: a first delay in a first path of the first signal; and a second delay in a second path of the second signal; wherein the first delay of the first path matches the second delay of the second path plus an external delay.
(8) A system comprising: a first processor having a first interface node that outputs an internal signal indicating a high temperature, a second interface node that receives an external signal, and suppression logic that suppresses the first processor in response to the internal signal or the external signal; and system logic to assert the external signal.

(9) The system according to (8), further comprising a second processor having: a first interface node of the second processor that outputs a second-processor internal signal indicating a high temperature of the second processor; a second interface node of the second processor that receives a second external signal; and suppression logic of the second processor that suppresses the second processor in response to the second-processor internal signal or the second external signal; wherein the system logic asserts the external signal to the first processor in response to the second processor outputting the second-processor internal signal indicating the high temperature of the second processor.

(10) The system according to (9), wherein the first processor further comprises: a first delay in a first path of the internal signal to the suppression logic; and a second delay in a second path of the external signal to the suppression logic; wherein the first delay matches the second delay plus a delay of the system logic.

(11) The system according to (10), wherein the first processor and the second processor start suppression synchronously in response to the internal signal of the second processor.

(12) The system according to (11), wherein the first processor and the second processor start suppression in the same single clock cycle.

(13) A method comprising: asserting a first signal indicating an internally measured high temperature on a bidirectional interface; and suppressing operation either when the first signal is asserted or when an external signal is received at the bidirectional interface.

(14) The method according to (13), wherein asserting comprises verifying that a selected temperature reference has been reached, and asserting the first signal when the selected temperature reference is reached.

(15) The method according to (13), wherein the interface node is a single bidirectional interface node.

(16) The method according to (13), further comprising delaying the first signal and the external signal through different delay paths.

(17) The method according to (13), further comprising selecting either a first mode using a single bidirectional interface node as the interface node or a second mode using two interface nodes.

(18) The method according to (17), further comprising, in the second mode, delaying the first signal so that suppression begins simultaneously with another processor.

(19) A method comprising indicating a high temperature measured inside a first processor, and synchronizing the suppression responsive to the high temperature measured inside the first processor with suppression by a second processor.

(20) The method according to (19), wherein indicating comprises asserting the first signal on a first interface node.

(21) The method according to (20), wherein synchronizing comprises: receiving the first signal; asserting a second signal to the second processor; and delaying the initiation of suppression of operation of at least the first processor, enabling the first processor and the second processor to suppress operation synchronously.
Delaying the initiation of suppressing and enabling the first processor and the second processor to synchronously suppress operation.(22) The method according to (21), wherein delaying prevents the first processor from suppressing until the same clock cycle that the second processor starts suppressing.
Technologies for verifying the integrity of regions of physical memory allocated among multiple domains are described. In embodiments, the technologies include or cause: generation of a first integrity value in response to a write command from a first domain; generation of a second integrity value in response to a read command; and verification of the integrity of the read data targeted by the read command, at least in part by comparing the first integrity value to the second integrity value.
1. A method for verifying the integrity of data stored in a main memory of a host device, comprising, with a memory controller of the host device: generating a first integrity value in response to a write command from a first domain, the write command targeting a first physical address of a first allocated area of the main memory; generating a second integrity value in response to a read command from the first domain, the read command targeting read data stored at the first physical address; and verifying the integrity of the read data at least in part by comparing the first integrity value with the second integrity value; wherein generating the first integrity value includes: performing a first integrity operation on the plaintext of the write data targeted by the write command to generate a first output; performing a second integrity operation on the ciphertext of the write data to be written in response to the write command to generate a second output; combining the first and second outputs to generate the first integrity value; and writing the first integrity value to the first allocated area of the main memory.

2. The method of claim 1, wherein: the first integrity operation includes performing a cyclic redundancy check (CRC) on the plaintext of the write data to generate a first CRC value; the second integrity operation includes generating a first message authentication code (MAC) from the ciphertext of the write data; and the first output is the first CRC value and the second output is the first MAC.

3. The method of claim 1, wherein generating the second integrity value comprises: reading the ciphertext of the read data targeted by the read command from the first physical address; decrypting the ciphertext of the read data to obtain plaintext read data; performing a third integrity operation on the plaintext read data to obtain a third output; performing a fourth integrity operation on the ciphertext read data to obtain a fourth output; and combining the third and fourth outputs to generate the second integrity value.

4. The method of claim 3, wherein: the third integrity operation includes performing a cyclic redundancy check (CRC) on the plaintext read data to generate a second CRC value; the fourth integrity operation includes generating a second message authentication code (MAC) from the ciphertext read data; and the third output is the second CRC value and the fourth output is the second MAC.

5. The method according to any one of claims 1-4, wherein: verification of the integrity of the read data succeeds when the first integrity value and the second integrity value are the same; and verification of the integrity of the read data fails when the first integrity value and the second integrity value are different.

6. The method according to any one of claims 1-4, further comprising, with the memory controller: isolating the first allocated area of the main memory from a second allocated area of the main memory; wherein the first allocated area is associated with a first domain of the host device and the second allocated area is associated with a second domain of the host device.

7. The method of claim 6, wherein the memory controller is configured to: isolate the first allocated area from the second allocated area by using range-based control; and encrypt data of the first allocated area to be written to the main memory using a first domain-specific encryption key, and encrypt data of the second allocated area to be written to the main memory using a second domain-specific encryption key.

8. The method according to any one of claims 1-4, further comprising, with the memory controller: causing the ciphertext of the write data to be written to first data storage bits in the first allocated area of the main memory; and causing the first integrity value to be written to first metadata bits in the first allocated area of the main memory; wherein the first metadata bits are associated with the first data storage bits.

9. A memory controller for enabling verification of the integrity of data stored in a main memory of a host device, comprising circuitry configured to: generate a first integrity value in response to a write command from a first domain, the write command targeting a first physical address of a first allocated area of the main memory; generate a second integrity value in response to a read command from the first domain, the read command targeting read data stored at the first physical address; and verify the integrity of the read data at least in part by comparing the first integrity value with the second integrity value; wherein the circuitry is configured to generate the first integrity value at least in part by: performing a first integrity operation on the plaintext of the write data targeted by the write command to generate a first output; performing a second integrity operation on the ciphertext of the write data to be written in response to the write command to generate a second output; and combining the first and second outputs to generate the first integrity value; and wherein the circuitry is further configured to write the first integrity value to the first allocated area of the main memory.

10. The memory controller of claim 9, wherein: the first integrity operation includes performing a cyclic redundancy check (CRC) on the plaintext of the write data to generate a first CRC value; the second integrity operation includes generating a first message authentication code (MAC) from the ciphertext of the write data; and the first output is the first CRC value and the second output is the first MAC.

11. The memory controller of claim 9, wherein the circuitry is configured to generate the second integrity value at least in part by: reading the ciphertext of the read data targeted by the read command from the first physical address; decrypting the ciphertext of the read data to obtain plaintext read data; performing a third integrity operation on the plaintext read data to obtain a third output; performing a fourth integrity operation on the ciphertext read data to obtain a fourth output; and combining the third and fourth outputs to generate the second integrity value.

12. The memory controller of claim 11, wherein: the third integrity operation includes performing a cyclic redundancy check (CRC) on the plaintext read data to generate a second CRC value; the fourth integrity operation includes generating a second message authentication code (MAC) from the ciphertext read data; and the third output is the second CRC value and the fourth output is the second MAC.

13. The memory controller of claim 9, wherein: verification of the integrity of the read data succeeds when the first integrity value and the second integrity value are the same; and verification of the integrity of the read data fails when the first integrity value and the second integrity value are different.

14. The memory controller of claim 9, wherein: the circuitry is further configured to isolate the first allocated area of the main memory from a second allocated area of the main memory; and the first allocated area is associated with a first domain of the host device and the second allocated area is associated with a second domain of the host device.

15. The memory controller of claim 14, wherein the circuitry is to: isolate the first allocated area from the second allocated area by using range-based control; and encrypt data of the first allocated area to be written to the main memory using a first domain-specific encryption key, and encrypt data of the second allocated area to be written to the main memory using a second domain-specific encryption key.

16. The memory controller of claim 9, wherein the circuitry is further configured to: cause the ciphertext of the write data to be written to first data storage bits in the first allocated area of the main memory; and cause the first integrity value to be written to first metadata bits in the first allocated area of the main memory; wherein the first metadata bits are associated with the first data storage bits.

17. A system for verifying the integrity of data stored in a main memory of a host device, comprising: means for generating a first integrity value in response to a write command from a first domain, the write command targeting a first physical address of a first allocated area of the main memory; means for generating a second integrity value in response to a read command from the first domain, the read command targeting read data stored at the first physical address; and means for verifying the integrity of the read data at least in part by comparing the first integrity value with the second integrity value; wherein the means for generating the first integrity value includes: means for performing a first integrity operation on the plaintext of the write data targeted by the write command to generate a first output; means for performing a second integrity operation on the ciphertext of the write data to be written in response to the write command to generate a second output; means for combining the first and second outputs to generate the first integrity value; and means for writing the first integrity value to the first allocated area of the main memory.

18. The system of claim 17, wherein: the means for performing the first integrity operation on the plaintext of the write data targeted by the write command to generate the first output further includes means for performing a cyclic redundancy check (CRC) on the plaintext of the write data to generate a first CRC value; and the means for performing the second integrity operation further includes means for generating a first message authentication code (MAC) from the ciphertext of the write data.

19. The system of claim 17, wherein the means for generating the second integrity value comprises: means for reading the ciphertext of the read data targeted by the read command from the first physical address; means for decrypting the ciphertext of the read data to obtain plaintext read data; means for performing a third integrity operation on the plaintext read data to obtain a third output; means for performing a fourth integrity operation on the ciphertext read data to obtain a fourth output; and means for combining the third and fourth outputs to generate the second integrity value.

20. The system of claim 19, wherein: the means for performing the third integrity operation on the plaintext read data to obtain the third output further includes means for performing a cyclic redundancy check (CRC) on the plaintext read data to generate a second CRC value; and the means for performing the fourth integrity operation on the ciphertext read data to obtain the fourth output further includes means for generating a second message authentication code (MAC) from the ciphertext read data.

21. The system of claim 17, wherein the means for verifying the integrity of the read data at least in part by comparing the first integrity value and the second integrity value further comprises: means for determining that verification of the integrity of the read data succeeds when the first integrity value and the second integrity value are the same; and means for determining that verification of the integrity of the read data fails when the first integrity value and the second integrity value are different.

22. The system of claim 17, further comprising: means for isolating the first allocated area of the main memory from a second allocated area of the main memory; wherein the first allocated area is associated with a first domain of the host device and the second allocated area is associated with a second domain of the host device.

23. The system of claim 17, further comprising: means for isolating the first allocated area from the second allocated area by using range-based control; means for encrypting data of the first allocated area to be written to the main memory using a first domain-specific encryption key; and means for encrypting data of the second allocated area to be written to the main memory using a second domain-specific encryption key.

24. The system of claim 17, further comprising: means for causing the ciphertext of the write data to be written to first data storage bits in the first allocated area of the main memory; and means for causing the first integrity value to be written to first metadata bits in the first allocated area of the main memory; wherein the first metadata bits are associated with the first data storage bits.

25. At least one machine-readable medium comprising a plurality of instructions that, in response to being executed on a computing device, cause the computing device to perform the method according to any one of claims 1-8.
Techniques for verifying memory integrity across multiple memory regions

Technical field

The present disclosure relates to techniques for verifying memory integrity across multiple memory regions. In particular, the present disclosure relates to systems, devices, methods, and computer-readable media for verifying the integrity of regions of physical memory that are allocated to one or more domains, such as one or more virtual machines.

Background

Virtualization in information processing systems allows multiple instances of one or more operating systems (OSs) to run on a single information processing system (such as a computer, server, etc.), even though each OS is designed to have complete, direct control of the system and its resources. Virtualization is typically implemented using software, firmware, or hardware, such as a virtual machine monitor (VMM; also known as a hypervisor), which is configured to present to each OS a virtual machine (VM) having virtual resources (such as one or more virtual processors, virtual main memory (RAM), virtual storage devices, etc.) that the OS can control. The VMM can be configured to maintain a system environment, that is, a virtualization environment, that implements a policy for allocating the system's physical resources among the virtual machines. Each OS, and any other software executing on a virtual machine, can be referred to as a "guest" or "guest software," while "host" or "host software" can refer to the VMM or other software that executes outside the virtualized environment (that is, outside any virtual machine).

Some systems supporting virtualization include a memory controller configured to translate a virtual memory address (associated with virtual memory allocated to a virtual machine) into a physical memory address (of the host system). The memory controller may also be configured to isolate and protect the regions of the host's physical main memory (such as random access memory) that are allocated to the different virtual machines supported by the host device. Isolation and protection of physical memory areas can be maintained through the use of range-based control, cryptographic methods, or other means.

The memory controller can use range-based control to associate regions of the physical memory of the host system with guest domains (such as virtual machines) executing on the host. In such an instance, when a read or write command arrives at the memory controller, the memory controller can determine which domain (virtual machine) generated the command and which area of the host's physical memory the command targets (or, more specifically, which area of the physical memory includes the physical memory address mapped to the virtual address of the domain issuing the command). If the domain is authorized to read from/write to the targeted physical memory area, the processor may cause the memory controller to execute the command; if the domain is not authorized to access the physical memory area, the processor may refuse to execute the command.
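At its core, the range-based check just described reduces to looking up which allocated region contains a command's physical address and comparing that region's owning domain with the issuing domain. The following is a minimal Python sketch of that idea; the names (Region, RangeControl, check) and the flat region table are illustrative assumptions, not taken from the disclosure or from any real memory controller.

```python
from dataclasses import dataclass

@dataclass
class Region:
    base: int    # first physical address of the allocated region
    limit: int   # one past the last physical address of the region
    owner: str   # domain (e.g., virtual machine) authorized for this region

class RangeControl:
    def __init__(self, regions: list[Region]):
        self.regions = regions

    def check(self, domain: str, phys_addr: int) -> bool:
        """Return True if `domain` may read/write `phys_addr`."""
        for region in self.regions:
            if region.base <= phys_addr < region.limit:
                return region.owner == domain
        return False  # address falls in no allocated region: refuse

# Example: D1 owns one region, DN another.
rc = RangeControl([Region(0x0000, 0x4000, "D1"), Region(0x4000, 0x8000, "DN")])
assert rc.check("D1", 0x1000)      # D1 accesses its own region: allowed
assert not rc.check("DN", 0x1000)  # DN targets D1's region: refused
```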
The contents of the host's physical memory can also be protected through cryptographic methods such as Multi-Key Total Memory Encryption (MKTME). In MKTME, the memory controller is configured to protect data stored in the physical memory space allocated to one domain (virtual machine) from unauthorized access by another domain (virtual machine) by encrypting data to be written to physical memory with an encryption key that is specific to the guest requesting the write (i.e., a per-domain encryption key). This can prevent a second domain (virtual machine) from making unauthorized reads of data in a first physical memory area allocated to a first domain (virtual machine), because the contents of the first physical memory area are encrypted with the first domain's encryption key, and the second domain cannot access the first domain's encryption key.

Although range-based control and MKTME can effectively isolate and protect the physical memory areas allocated to different domains, they do not provide a mechanism for checking the integrity of the data stored in the physical memory of the host system. Such methods can therefore be exposed to attacks in which an unauthorized domain (an attack domain) causes an unauthorized write to a memory area allocated to another domain (a victim domain), resulting in unauthorized modification of the contents of the victim domain's physical memory. The victim domain may also be unaware that the contents of its allocated physical memory have been altered by the unauthorized write.

Technologies such as INTEL® Secure Enclave (for example, as implemented using INTEL® Software Guard Extensions (SGX)) utilize a memory encryption engine (MEE) that can maintain confidentiality, integrity, and hardware replay protection by using a single key. For example, an MEE used in a secure enclave implementation can build a metadata tree over a protected region of physical memory. The integrity of data read from the protected region of memory can be verified by traversing the metadata tree. Although effective for verifying the integrity of the data stored in the enclave, traversing the metadata tree can cause multiple accesses to memory whenever data stored in the enclave is accessed. Secure enclaves thus provide strong data protection and integrity verification capabilities; however, for some applications, a slightly reduced level of security assurance (relative to that provided by a secure enclave) may be considered acceptable, particularly where reduced integrity verification latency is desired. Integrity verification techniques that protect against physical attacks are also of interest.

In light of the foregoing, there is growing interest in technologies that provide lightweight mechanisms to ensure memory integrity across isolated memory domains and that can protect against physical attacks, such as external modification of DRAM contents.

BRIEF DESCRIPTION OF THE DRAWINGS

The features and advantages of embodiments of the claimed subject matter will become apparent as the following detailed description proceeds, and upon reference to the accompanying drawings, in which like numerals depict like parts, and wherein:

FIG. 1 is a block diagram illustrating one example of a system for maintaining isolation between allocated regions of a physical main memory of a host device;

FIG. 2 is a block diagram illustrating one example of a system for maintaining the integrity of allocated areas of a physical main memory of a host device consistent with the present disclosure;

FIG. 3 is a block diagram illustrating one example of an integrity engine consistent with the present disclosure;

FIG. 4A is a block diagram illustrating one example of a data write stream consistent with the present disclosure;

FIG. 4B is a flowchart of example operations of one example of a method for writing data consistent with the present disclosure;

FIG. 5A is a block diagram illustrating one example of a data read stream consistent with the present disclosure; and

FIG. 5B is a flowchart of example operations of one example of a method for verifying the integrity of data stored in a region of a physical main memory of a host device consistent with the present disclosure.

DETAILED DESCRIPTION

The techniques of this disclosure are described herein with reference to illustrative embodiments for particular applications. For illustration and ease of understanding, the techniques described herein are discussed in the context of a virtualized system in which the physical main memory (e.g., random access memory) of a host device is allocated among a number of domains (such as virtual machines) that execute within the context of a virtualized environment implemented on the host device. Such discussion is for example only, and all or part of the techniques described herein may be used in other contexts. For example, the techniques described herein can be used in the context of any memory system in which memory integrity between isolated memory areas is desired, such as, but not limited to, non-virtualized systems. Those skilled in the relevant art, with access to the teachings provided herein, will recognize additional modifications, applications, and embodiments within the scope of this disclosure, as well as additional fields in which embodiments of the present disclosure would be useful.

The terms "host" and "host device" are used interchangeably herein to refer to a wide range of electronic devices that may be configured with a memory architecture in which regions of memory (such as RAM) are allocated among multiple domains. For illustration, a domain is described in the context of a virtualized system and can therefore be understood as a virtual machine. However, as noted above, the techniques described herein may be implemented in any context in which verification of the integrity of the contents of the physical memory of a host system is desired. Non-limiting examples of suitable host devices include cameras, cell phones, computer terminals, desktop computers, distributed computing systems, e-readers, fax machines, kiosks, netbook computers, notebook computers, Internet devices, payment terminals, personal digital assistants, media players and/or recorders, servers, set-top boxes, smartphones, tablet personal computers, televisions, ultra-mobile personal computers, wired telephones, combinations thereof, and so on. Such devices may be portable or stationary. Without limitation, the host devices described herein are preferably in the form of desktop computers, servers, distributed computing systems, and the like.

The term "main memory" is used herein to refer to memory that is available to the CPU through load/store instructions, whether the physical main memory of, for example, a host device or the virtual main memory of a domain/virtual machine, in contrast to storage, which is read from and written to through the use of input/output controllers and drivers.
Examples of main memory that can be used include (e.g., volatile or non-volatile) random access memory (RAM) such as, but not limited to, double data rate (DDR) RAM (e.g., DDR2, DDR3, DDR4, DDR5, low-power DDR (LPDDR)), three-dimensional crosspoint memory, INTEL® OPTANE® memory, or any other memory now existing or developed in the future. Without limitation, in embodiments, the main memory described herein is in the form of DDR or three-dimensional crosspoint memory that includes integrity value bits (e.g., metadata bits) and data storage bits. In contrast, the terms "disk," "storage," and "storage device" are used interchangeably herein to refer to one or more non-volatile memory devices that can be used to provide non-volatile data storage. Non-limiting examples of storage devices that can be used herein include magnetic storage devices (e.g., magnetic hard drives, magneto-optical drives, heat-assisted magnetic recording devices, magnetic disks, etc.), solid-state storage devices (e.g., storage devices such as solid-state drives that use non-volatile NAND or NOR memory), memory sticks and/or cards including non-volatile memory, combinations thereof, and the like.

Where appropriate, the phrase "encryption operation" is used herein to refer generally to encryption of plaintext into ciphertext, decryption of ciphertext into plaintext, or some combination thereof. The term "encryption operation" should therefore be understood to include both encryption and decryption of data, with the appropriate interpretation determined by the context in which the phrase is used.

The term "module" is used herein to refer to software, firmware, and/or circuitry configured to perform one or more operations consistent with the present disclosure. Software may be embodied as software packages, code, instructions, instruction sets, and/or data recorded on non-transitory computer-readable storage media. Firmware may be embodied as code, instructions or instruction sets, and/or data that are hard-coded (e.g., non-volatile) in a memory device. A "circuit," as used in any embodiment herein, may, for example, comprise, singly or in any combination, hard-wired circuitry; programmable circuitry such as field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs); a general-purpose computer processor including one or more separate instruction processing cores; state machine circuitry; and/or software and/or firmware storing instructions executed by programmable circuitry. The modules described herein may be collectively or individually embodied as circuitry that forms all or part of one or more host devices (e.g., embodied as logic implemented at least partially in hardware).

Modern computing systems are typically configured as host devices that support multiple domains, such as, but not limited to, one or more virtual machines. Each domain may be assigned virtual resources that map or otherwise correspond to all or a portion of the physical resources of the host device (e.g., via a virtual machine monitor (VMM) or hypervisor). For example, each domain may be allocated all or a portion of the host device's physical processing resources, main memory resources (physical RAM), physical storage, input/output device capabilities, and so on. The allocation of host resources to the domain(s) may be based on one or more virtualization policies implemented by the VMM/hypervisor.
For security and other reasons, different areas of the physical main memory of the host device may be allocated to different domains and may be isolated or otherwise protected from one another. Isolation of physical memory areas can be maintained through the use of range-based control, and the contents of physical memory areas can be protected cryptographically, for example, by using domain-specific encryption keys (i.e., encryption keys that are unique to each domain and accessible only by the domain with which they are associated).

FIG. 1 is a block diagram of one example of a system for maintaining memory isolation in the context of virtualization. In this example, the system 100 is in the form of a host device that includes shared hardware 102 and a virtual machine monitor (VMM) 112. The VMM 112 may be implemented as host software (e.g., executing on the shared hardware 102), as will be understood by one of ordinary skill in the art. In any case, the VMM 112 (and the system 100 as a whole) is configured to support virtualization so that multiple domains (e.g., virtual machines) can execute within a virtualized environment established or otherwise supported by the VMM 112. The concept is illustrated in FIG. 1, which depicts the system 100 as hosting a first domain D1 and an Nth domain DN, although any suitable number of domains can be hosted by the system 100.

The shared hardware 102 generally includes the physical resources of the host device. For clarity and ease of understanding, FIG. 1 depicts the shared hardware 102 as including a memory controller 103, a processor 110, and main memory (e.g., RAM) 107, but omits various other physical resources of the system 100 that can be included in the shared hardware 102. For example, the shared hardware 102 may also include other resources, such as storage resources (e.g., from one or more non-volatile storage devices), communication resources (e.g., wired and/or wireless networking resources), video processing resources (e.g., from one or more dedicated video processing components, such as graphics cards), combinations thereof, and more. This simplified representation of the shared hardware 102 is maintained in the other figures, particularly FIG. 2.

The system 100 (or, more specifically, the VMM 112) is configured to allocate virtual resources to the domains D1, DN, where those virtual resources correspond to all or a portion of the physical resources provided by the shared hardware 102. In this way, the shared hardware 102 can be allocated between D1 and DN. For example, the VMM 112 may allocate virtual memory to D1 and DN. The processor 110 may be configured to map the virtual memory of D1 to a first region 109 of the main memory 107 and to map the virtual memory of DN to a second region 111 of the main memory 107. During operation of the system 100, one or both of D1 and DN may be active and executing code, such as on one or more virtual processors. As their virtual processors execute code, D1 and DN can issue read and write commands targeting their corresponding virtual main memory. The memory controller 103 may be configured to determine from which domain a read/write command was issued and to perform the corresponding action by reading/writing data at the corresponding physical memory addresses of the regions 109, 111. The memory controller 103 may also be configured to isolate the memory regions 109, 111 from each other (e.g., using range-based control).

It is noted that although FIGS. 1 and 2 depict the memory controllers (103, 203) as separate components of the shared hardware, such a configuration is not required. Indeed, in embodiments, the memory controllers described herein may be stand-alone devices or may be integrated with another component of the host device's shared hardware, such as one or more processors (e.g., the processor 110), a dedicated memory controller (e.g., on the motherboard), combinations thereof, and so on. Without limitation, in embodiments, the memory controllers described herein are integrated with one or more physical processors of the host device. For example, in an embodiment, the memory controller 103 is integrated with the processor 110. In this regard, the processor 110 may be any suitable general-purpose processor or application-specific integrated circuit. Without limitation, in an embodiment, the memory controller 103 is integrated with the processor 110, and the processor 110 is one or more single-core or multi-core processors, such as those available from INTEL®, APPLE®, AMD®, SAMSUNG®, NVIDIA®, combinations thereof, and so on.

Although the memory controller 103 may implement range-based control or other techniques to restrict access to the regions 109, 111 by unauthorized domains, such techniques may be circumvented by hypothetical future hardware attacks. Thus, the memory controller 103 may also be configured to protect the contents of the memory regions 109, 111 from unauthorized access. For example, the memory controller 103 may cryptographically protect the contents of the memory regions 109, 111. In this regard, the memory controller 103 may include a key management and encryption engine (KMEE) 105 that uses domain-specific encryption keys to encrypt or decrypt the contents of the regions 109, 111. Domain-specific encryption keys are encryption keys that are specific to the domain with which they are associated and are not accessible to any other domain. Thus, the KMEE 105 can utilize a first encryption key specific to the region 109 and a second encryption key specific to the region 111. The first encryption key may be accessible only to D1, and the second encryption key may be accessible only to DN. In any case, the KMEE 105 (or, more generally, the memory controller 103) may use a domain-specific encryption key to encrypt data before writing it to the regions 109, 111 of the main memory 107. The KMEE 105 (or, more generally, the memory controller 103) may also use a domain-specific decryption key (which may, for example, be the same as or derived from the corresponding domain-specific encryption key) to decrypt data when it is read from an allocated region of the main memory 107. In examples where the encryption and decryption keys are the same, the KMEE 105 can be understood as implementing a symmetric encryption operation.

More specifically, when D1 is active, the virtual processor of D1 may cause a write command to be issued for writing to the virtual memory allocated to D1. In response, the memory controller 103 may determine that the write command was issued by D1. The memory controller 103 may then use the first, D1-specific encryption key to encrypt the data that is the target of the request (the "write data"), and store the resulting encrypted write data to the region 109 of the main memory 107. In contrast, when DN is active, the virtual processor of DN can cause a write command to be issued for writing to the virtual memory allocated to DN.
The memory controller 103 may determine that the write command was issued by DN. The memory controller may then encrypt the write data associated with the write command using the second, DN-specific encryption key, and store the resulting encrypted write data to the region 111 of the main memory 107.

In a read context, D1 and DN can cause read commands to be issued that target virtual addresses of their corresponding virtual memory. In response, the memory controller 103 may determine from which domain the read command was issued. The memory controller 103 may then read the data targeted by the read command from the identified physical address, such as from the regions 109, 111, as appropriate. Since the contents of the regions 109 and 111 are encrypted with domain-specific encryption keys, the data read by the memory controller 103 will be in the form of ciphertext. Therefore, the memory controller 103 may further service the read command by decrypting the ciphertext read from the regions 109, 111 using the corresponding decryption key.

In the case of ciphertext read from the region 109, the decryption key may be derived from the first encryption key (i.e., the domain-specific encryption key associated with the region 109). Alternatively, in the case of ciphertext read from the region 111, the decryption key may be derived from the second encryption key (i.e., the domain-specific encryption key associated with the region 111). Since the domains D1 and DN can only access their corresponding encryption keys, the contents of the region 109 can be protected from unauthorized read commands from DN (targeting the region 109), and the contents of the region 111 can be protected from unauthorized read commands from D1 (targeting the region 111). More specifically, although the contents of the region 109 are potentially available to DN, those contents will be ciphertext and will not be intelligible to DN, because DN does not have D1's encryption key. Likewise, although D1 could potentially obtain the contents of the region 111, those contents will be ciphertext and will not be intelligible to D1, because D1 does not have DN's encryption key.

In summary, the system 100 is configured to maintain the isolation of the memory regions 109, 111 through the use of range-based control, and cryptographically protects the contents of the regions 109, 111 from unauthorized reads using domain-specific encryption and decryption keys. However, the system 100 does not provide a mechanism for checking the integrity of the data stored in the allocated memory regions 109, 111. Thus, such methods may be exposed to attacks in which an unauthorized (attack) domain causes an unauthorized write to a memory area allocated to another (victim) domain.

For example, a malicious entity executing within DN may cause the issuance of an unauthorized write command targeting the physical memory allocated to D1, that is, the region 109. Assuming that the mechanism(s) implemented by the system 100 to isolate the region 109 from other regions of the main memory 107 (such as range-based control) have not been compromised, execution of the unauthorized write command targeting the region 109 may be refused. However, if such a mechanism has been compromised, an unauthorized write command issued by DN may cause the memory controller 103 to write unauthorized data to the region 109, potentially compromising the integrity of the data stored therein.

Various attacks have been developed to circumvent the isolation of memory areas using range-based control. One such attack is the so-called row hammer attack.
In a row hammer attack, a malicious entity executing within DN can cause the memory controller 103 to repeatedly and rapidly hit the row buffer of the main memory 107, thereby causing random bit-flip errors to occur in the region 109. Such bit flips can potentially create an opportunity for a malicious entity executing within DN to cause unauthorized data to be written to the region 109. This unauthorized data cannot be detected by D1, because the system 100 does not provide mechanisms for verifying the integrity of the data written to the main memory 107 (or, more specifically, to its allocated regions 109, 111).

With the foregoing in mind, aspects of the present disclosure relate to techniques for maintaining the integrity of regions of physical memory allocated among multiple domains. The techniques described herein include devices, systems, methods, and computer-readable media that cause a first integrity value to be generated in response to a write command issued by a first domain, where the write command targets (that is, is mapped to) a region of the physical main memory (for example, RAM) of the host device that is allocated to the first domain (an allocated memory region). In an embodiment, the first integrity value is a data structure generated by, for example, a memory controller, based at least in part on: the (optionally truncated) output of a first integrity operation performed on the plaintext of the data to be written in response to the write command (hereinafter the "plaintext write data"); and the (optionally truncated) output of a second integrity operation performed on the ciphertext produced by encrypting the write data with an encryption key (hereinafter the "encrypted write data"). In response to the write command, the first integrity value may be written to the allocated memory region along with the encrypted write data. For example, the integrity value may be written to the allocated memory region as metadata associated with the encrypted write data. In an embodiment, the first integrity value is written to metadata bits in the allocated memory region, and the encrypted write data is written to data storage bits in the allocated memory region.

In response to a read command issued by the first domain, the integrity of the data read from the allocated memory region (the "read data") can be verified by determining a second integrity value from the plaintext and ciphertext of the read data (i.e., the plaintext read data and the encrypted read data) and comparing the second integrity value with the first integrity value. For example, the memory controller may receive a read command from the first domain, where the read command targets (e.g., is mapped to) a physical memory address in the first region of the host device's physical main memory allocated to the first domain. In response to the read command, the memory controller may read, from the physical memory address targeted by the read command, the encrypted read data and the first integrity value associated with the encrypted read data. The encrypted read data may be stored in data storage bits of the allocated memory region, and the first integrity value may be stored in metadata bits associated with those data storage bits.

The memory controller can decrypt the encrypted read data to obtain the plaintext read data. The memory controller may also generate a second integrity value based at least in part on the encrypted read data and the plaintext read data.
In an embodiment, the second integrity value is a data structure generated based at least in part on the (optionally truncated) output of the first integrity operation performed on the plaintext read data, and the (optionally truncated) output of the second integrity operation performed on the encrypted read data. The integrity of the data targeted by the read command can then be verified by comparing the second integrity value with the first integrity value. If the first and second integrity values are the same (or differ by less than a threshold amount, although this results in a reduced level of security), integrity verification passes. However, if the first and second integrity values are different (or differ by more than a threshold amount, again at a reduced level of security), integrity verification may fail. In the latter case, operations may be performed to mitigate the impact of the altered contents of the allocated region of the host's physical memory on the operation of the first domain and/or on the host system.

Reference is now made to FIG. 2, which depicts one example of a system 200 for verifying the integrity of data stored to an allocated region of physical main memory consistent with the present disclosure. As with the system 100, the system 200 is depicted in FIG. 2 in the context of virtualization. Thus, the system 200 can be understood as a host device or system that includes shared hardware 202 and a virtual machine monitor (VMM) 212. The VMM 212 may be implemented in hardware, firmware, or software, and may be configured to establish a virtualized environment for hosting one or more domains (virtual machines) according to one or more virtualization policies. For example, the VMM 212 may allocate all or a portion of the shared hardware among one or more virtual domains, such as the domains D1, DN shown in FIG. 2.

The shared hardware 202 may include many of the same components as the shared hardware 102 of FIG. 1. For example, the shared hardware 202 may include shared physical processing resources (e.g., the processor 110), shared physical storage devices, shared communication resources, combinations thereof, and so on. The shared hardware 202 also includes a memory controller 203. Like the memory controller 103, the memory controller 203 may be configured to receive read and write commands that target virtual memory addresses of the virtual memory assigned to D1, DN, and to read/write data from/to the corresponding physical addresses of the shared main memory 207. Also like the memory controller 103, the memory controller 203 can be configured to isolate regions 209 and 211 from each other (e.g., using range-based access control) and to cryptographically protect the contents of the regions 209, 211 (e.g., by using managed domain-specific encryption/decryption keys). Thus, as shown in FIG. 2, the memory controller 203 may be configured to isolate the first and second regions 209, 211 from each other, to store the ciphertext of data associated with D1 (D1 ciphertext) in the region 209, and to store the ciphertext associated with DN (DN ciphertext) in the region 211.

Like the memory controller 103, the memory controller 203 may be a separate component or may be integrated with another component of the shared hardware 202. For example, the memory controller 203 may be integrated with one or more physical processors, such as the processor 110, a motherboard, a plug-in card, or other components of the shared hardware 202, as described above in connection with the memory controller 103 of FIG. 1. Without limitation, in embodiments, the memory controller 203 is integrated with the processor 110, where the processor 110 is one or more physical single-core or multi-core processors.

In addition to isolating the allocated memory regions of the main memory 207 from each other and protecting their contents via encryption, the memory controller 203 is also configured to enable verification of the integrity of data stored in the allocated regions of the main memory 207. In this regard, the memory controller 203 may include an integrity engine 205. Generally, the integrity engine 205 may be in the form of hardware (circuitry), firmware, and/or software configured to perform integrity verification operations consistent with the present disclosure. In a non-limiting embodiment, the integrity engine 205 is in the form of circuitry configured to perform integrity verification operations consistent with the present disclosure. Alternatively or additionally, the integrity engine 205 may include or be in the form of a processor configured to execute instructions stored on a computer-readable storage medium (e.g., embedded firmware executed by a driver, a memory controller, a dedicated integrity processor, etc.) to cause the memory controller 203 to perform integrity verification operations consistent with the present disclosure. It should also be understood that although the integrity engine 205 is shown within the memory controller 203, such a configuration is not required. For example, the integrity engine 205 may be a separate component, or it may be incorporated into other shared hardware, such as the processor 110.

The integrity verification operations generally include comparing a second integrity value, generated in response to a read command issued by the first domain, with a first integrity value generated in response to a write command issued by the first domain. In an embodiment, the write command targets a first physical address of a first allocated region of the main memory of the host device, and the read command targets the data stored at that first physical address. In response to the write command, the memory controller 203 may store the first integrity value and the encrypted write data (i.e., the ciphertext of the data to be written) to the first physical address of the first allocated region of the main memory. The memory controller 203 may further cause the first integrity value to be stored in the first allocated region as metadata associated with the encrypted write data, for example, as shown in FIG. 2.

The memory controller 203 may generate the first integrity value in any suitable manner. In an embodiment, the memory controller is configured to generate the first integrity value at least in part by: performing a first integrity operation on the plaintext of the data targeted by the write command (i.e., the plaintext write data) to produce a first output; performing a second integrity operation on the ciphertext of the write data (i.e., the encrypted write data) to produce a second output; and combining at least the first and second outputs to generate the first integrity value. The memory controller 203 may then cause the encrypted write data and the first integrity value to be written to the first allocated region of the main memory 207.
For example, and as noted above, the memory controller may cause the encrypted write data to be written to data storage bits within an allocated region (e.g., the region 209) of the main memory 207, and cause the first integrity value to be stored in metadata bits associated with the data storage bits to which the encrypted write data was written.

In an embodiment, and as will be further described in conjunction with FIGS. 4A and 4B, the first integrity operation may include performing a cyclic redundancy check (CRC) on the plaintext of the write data to generate a first CRC value as the first output. In such an embodiment, the second integrity operation may include calculating a first message authentication code (MAC) based at least in part on the ciphertext of the write data, where the first MAC serves as the second output. The first MAC may be generated based at least in part on one or more of the encrypted write data, the first physical address targeted by the write command, and an integrity key. The integrity key used to generate the MAC can be a domain-specific integrity key, or it can be shared across all domains hosted by the host system. Without limitation, in embodiments, the integrity key is shared among all domains hosted by the host system. As can be appreciated, the use of a shared integrity key avoids the need to manage and protect domain-specific integrity keys.

In an embodiment, the integrity key is not fixed in the system and can be altered or changed in many different ways. For example, the integrity key may be changed in response to one or more system events, such as system boot, system shutdown, waking the system from hibernation, a combination thereof, and so on.

Several methods for calculating a message authentication code are known, and any suitable method can be used to generate the first MAC. In an embodiment, the message authentication code is a hash-based message authentication code (HMAC; also known as a keyed-hash message authentication code). An HMAC is a type of MAC that is determined using a cryptographic hash function and a secret cryptographic key. Any suitable cryptographic hash function can be used to determine the HMAC, such as one or more variants of the secure hash algorithm (SHA), for example SHA-2 or SHA-3 (e.g., SHA-256, etc.). Without limitation, in an embodiment, the memory controller 203 is configured to generate the first MAC using a SHA-256 HMAC algorithm and an integrity key. For example, the memory controller 203 may calculate the first MAC as shown in the following equation (I):

first MAC = MAC_FN(Ikey, ciphertext || physical address || data)    (I)

where MAC_FN is the MAC function (such as SHA256-HMAC), Ikey is the integrity key, ciphertext is the encrypted write data, physical address is the physical memory address(es) targeted by the write command, and data is any other data that may also be included. Of course, other methods for generating the first MAC can also be used.

As noted above, the memory controller 203 may be configured to generate the first integrity value by combining the first output (of the first integrity operation) and the second output (of the second integrity operation). In the example where the first output is the first CRC value and the second output is the first MAC, the first integrity value may be generated by the memory controller 203 by combining the first CRC value and the first MAC in any suitable manner.
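As a concrete illustration of the write path just described, the sketch below computes a first integrity value from a CRC over the plaintext write data and a SHA-256 HMAC over the ciphertext and physical address per equation (I), then truncates and XOR-combines the two outputs (anticipating the XOR embodiment described in the next paragraph). CRC-32, the 8-byte address encoding, and the 4-byte truncation width are illustrative assumptions; the disclosure does not fix these parameters.

```python
import hashlib
import hmac
import zlib

TRUNC_BYTES = 4  # assumed width kept from each output after truncation

def first_integrity_value(plaintext: bytes, ciphertext: bytes,
                          phys_addr: int, ikey: bytes) -> bytes:
    # First integrity operation: CRC over the plaintext write data.
    crc = zlib.crc32(plaintext).to_bytes(4, "little")
    # Second integrity operation: MAC_FN(Ikey, ciphertext || physical address)
    # per equation (I), with SHA256-HMAC standing in as the MAC function.
    mac = hmac.new(ikey, ciphertext + phys_addr.to_bytes(8, "little"),
                   hashlib.sha256).digest()
    # Truncate both outputs and XOR-combine them into the integrity value,
    # which would be stored in the metadata bits alongside the ciphertext.
    return bytes(a ^ b for a, b in zip(crc[:TRUNC_BYTES], mac[:TRUNC_BYTES]))
```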
Without limitation, in an embodiment, the memory controller 203 is configured to generate the first integrity value by performing an exclusive-or (XOR) operation on the first CRC value and the first MAC. The memory controller 203 may also truncate the first CRC value and/or the first MAC before combining them, for example, to limit the amount of memory required to store the first integrity value.

As noted above, the integrity verification operations include comparing a second integrity value with the first integrity value in response to a read command issued by the first domain. Generally, the second integrity value is generated in substantially the same manner as the first integrity value, but is determined using the plaintext and ciphertext of the read data targeted by the read command, rather than the plaintext and ciphertext of the write data targeted by the write command. More specifically, in response to a read command issued by the first domain, the memory controller 203 reads, from the first region of the main memory, the data targeted by the read command (e.g., the encrypted read data) and the first integrity value associated with the encrypted read data. The memory controller 203 then decrypts the encrypted read data to produce the plaintext read data. The memory controller may then generate the second integrity value at least in part by: performing a third integrity operation on the plaintext read data to produce a third output; performing a fourth integrity operation on the encrypted read data to produce a fourth output; and combining the third and fourth outputs to generate the second integrity value. As with the first and second outputs, the third and fourth outputs may be truncated before being combined, to limit the amount of memory required to store the second integrity value.

Except for the data operated on, the third integrity operation is the same as the first integrity operation used in the generation of the first integrity value; the fourth integrity operation is the same as the second integrity operation used in the generation of the first integrity value; and the third and fourth outputs are combined in the same manner as the first and second outputs were combined in the generation of the first integrity value. In other words, the same operations are performed in the first and third integrity operations, and in the second and fourth integrity operations, but on (potentially) different data. More specifically, the first and second integrity operations used in the generation of the first integrity value operate on the plaintext and encrypted write data (that is, the data that is the object of the write command and is to be written to the first allocated region of the main memory), while the third and fourth operations operate on the plaintext and encrypted read data (i.e., the data previously stored in the first allocated region of the main memory). Thus, in an embodiment, the third integrity operation includes generating a CRC value from the plaintext read data, and the fourth integrity operation includes generating a MAC from the ciphertext read data, for example in the same manner described above with regard to the generation of the first integrity value.

As noted above, the memory controller may verify the integrity of the data stored in the allocated region of the main memory by comparing the first integrity value with the second integrity value.
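The read-path check can accordingly be sketched as recomputing the value with the same two operations over the (decrypted) read data and comparing the result against the value stored in the metadata bits. This mirrors the write-path sketch above and carries the same assumptions about CRC width, address encoding, and truncation; the constant-time comparison is likewise an illustrative choice, not a requirement of the disclosure.

```python
import hashlib
import hmac
import zlib

def second_integrity_value(plaintext: bytes, ciphertext: bytes,
                           phys_addr: int, ikey: bytes) -> bytes:
    # Third integrity operation: CRC over the plaintext read data.
    crc = zlib.crc32(plaintext).to_bytes(4, "little")
    # Fourth integrity operation: HMAC over the ciphertext read data and address.
    mac = hmac.new(ikey, ciphertext + phys_addr.to_bytes(8, "little"),
                   hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(crc[:4], mac[:4]))

def verify_read(stored_value: bytes, plaintext: bytes, ciphertext: bytes,
                phys_addr: int, ikey: bytes) -> bool:
    """Pass when the recomputed value equals the stored first integrity value."""
    recomputed = second_integrity_value(plaintext, ciphertext, phys_addr, ikey)
    return hmac.compare_digest(stored_value, recomputed)
```

Because any unauthorized change to the stored ciphertext (for example, via a row hammer bit flip) changes the HMAC output, the recomputed value will, with high probability, no longer match the stored value, and the comparison fails.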
Assuming that the data targeted by the read command (the encrypted read data) has not been tampered with (altered) after being written to the first allocated region in response to the write command, the first integrity value and the second integrity value will be the same, or may differ by less than a threshold amount (e.g., in less secure embodiments). In this case, the memory controller 203 may determine that integrity verification has passed (i.e., the integrity of the read data is verified). However, if the encrypted read data targeted by the read command has been altered (for example, by an unauthorized write to the first allocated region or by another attack), the first and second integrity values will be different, or may differ by more than a threshold amount (again, in reduced-security embodiments). In this case, the memory controller 203 may determine that integrity verification has failed (i.e., the integrity of the read data has been compromised).

Returning to FIGS. 2 and 3, as noted above, the memory controller 203 may include the integrity engine 205, or the integrity engine may be a separate component or integrated into another component, such as the processor 110. Regardless, the integrity engine 205 may be implemented in hardware, firmware, or software, and may be configured to perform or cause integrity operations consistent with the present disclosure. In an embodiment, and as shown in FIG. 3, the integrity engine 205 may include a cyclic redundancy check module (CRCM) 301, a MAC generation module (MGM) 305, an optional truncation module 307, an integrity value generation module (IVGM) 309, and an integrity verification module (IVM) 311. In embodiments, these modules may operate individually or in combination with one another to perform or cause integrity operations consistent with the present disclosure.

For the sake of example, specific operations that can be performed by the modules of FIG. 3 will now be described in connection with the virtualization system 200 shown in FIG. 2. As a baseline, assume that the system 200 hosts multiple domains (D1, DN), each of which is allocated virtual memory that is mapped to a corresponding region of the physical main memory 207. More specifically, the memory controller 203 in these embodiments is configured to service read/write commands targeting virtual addresses of D1, DN, which are mapped to physical addresses of the first and second regions 209, 211 of the main memory 207. Further, the memory controller 203 in this embodiment is configured to isolate the regions 209, 211 using range-based control, and to cryptographically protect the contents of the regions 209, 211 using domain-specific encryption keys. For the sake of this example, assume that D1 is the active domain and may be subject to attack by DN. Thus, D1 can be understood as the "victim domain" and DN as the "attack domain".

When it is active, D1 can issue a write command that targets a virtual address in its virtual memory. In response, the memory controller 203 may determine that the write command originated from D1. As discussed above, the memory controller 203 (or, more specifically, the KMEE 105) can encrypt the data targeted by the write command (i.e., the write data) using D1's domain-specific encryption key.
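The per-domain encryption step can be sketched as a key table indexed by the issuing domain. AES-GCM and the `cryptography` package are stand-ins here; the disclosure does not mandate a particular cipher, and the per-write nonce handling is an implementation detail assumed for the sketch.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One key per domain; each domain's traffic uses only its own key.
domain_keys = {"D1": AESGCM.generate_key(bit_length=128),
               "DN": AESGCM.generate_key(bit_length=128)}

def encrypt_write_data(domain: str, plaintext: bytes) -> bytes:
    """Encrypt write data with the issuing domain's key (KMEE-style)."""
    nonce = os.urandom(12)
    return nonce + AESGCM(domain_keys[domain]).encrypt(nonce, plaintext, None)

def decrypt_read_data(domain: str, blob: bytes) -> bytes:
    """Decrypt read data with the key of the domain that owns the region."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(domain_keys[domain]).decrypt(nonce, ciphertext, None)
```

Because DN's entry in the key table never exposes D1's key, ciphertext read from D1's region remains unintelligible to DN, matching the behavior described for the KMEE above.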
Thereafter, the memory controller 203 may cause the resulting encrypted write data to be written to a physical address in the area 209.

The integrity engine 205 may generate a first integrity value before, after, or concurrently with the writing of the encrypted write data. Generally, the integrity engine 205 is configured to generate the first integrity value in the same manner described above. For example, the CRCM 301 may be configured to perform, or cause to be performed, a first integrity operation on the plaintext write data. The first integrity operation may be or include performing a CRC operation on the plaintext write data to generate a CRC value. At the same or a different time, KMEE 105 can encrypt the plaintext write data to produce encrypted write data.

Simultaneously or at a different time, the MGM 305 may perform, or cause to be performed, a second integrity operation to produce a second output. The second integrity operation may be the same as previously described. For example, in an embodiment, the MGM 305 is configured to perform or cause the generation of a MAC, which is based at least in part on the encrypted write data and an integrity key (Ikey).

The CRC value and the MAC can then be used as the first and second outputs for generating a first integrity value, as explained previously. For example, the IVGM 309 may be configured to combine, or cause to be combined, the CRC value and the MAC to produce a first integrity value in any suitable manner. In an embodiment, the IVGM 309 is configured to perform an XOR operation using the CRC value and the MAC to generate the first integrity value. The memory controller 203 may then cause the first integrity value to be stored in the first region 209 in association with the encrypted write data (e.g., as metadata). This concept is shown in FIG. 2, which depicts the first area 209 of the main memory as including D1 ciphertext and D1 metadata, where the D1 ciphertext is the encrypted write data and the D1 metadata is the first integrity value associated with it. Because the concept extends to other domains (such as DN), FIG. 2 also depicts area 211 as including DN ciphertext and DN metadata, where the DN ciphertext is encrypted write data written in response to a write command issued by DN, and the DN metadata is a first integrity value generated and written in the same manner described above and associated with the DN ciphertext.

In an embodiment, the main memory 207 includes a plurality of bits, wherein the plurality of bits may be allocated (e.g., by the memory controller 203 or the processor 110) into metadata bits and data storage bits, and wherein the metadata bits are mapped to or otherwise associated with corresponding data storage bits. In such an example, the encrypted write data may be written to data storage bits of the main memory 207, and the first integrity value may be written to the metadata bits associated with the data storage bits to which the encrypted write data was written. In response to a read operation, the memory controller 203 may cause the encrypted read data targeted by the read command to be read from the data storage bits. Simultaneously or at another time, the memory controller 203 may also cause the first integrity value to be read from the metadata bits associated with the data storage bits in which the encrypted read data is stored.
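The pairing of data storage bits with metadata bits can be modeled with a toy region abstraction, shown below. This is a sketch only: real hardware maps metadata bits to data storage bits physically, whereas the class name and dictionary representation here are invented for illustration.

    class AllocatedRegion:
        """Toy model of one domain's allocated area: each data storage
        location has an associated metadata location for its integrity value."""

        def __init__(self):
            self.data = {}      # physical address -> ciphertext
            self.metadata = {}  # same physical address -> integrity value

        def write(self, addr: int, ciphertext: bytes, integrity_value: bytes):
            self.data[addr] = ciphertext
            self.metadata[addr] = integrity_value

        def read(self, addr: int):
            # One access yields both the ciphertext and its integrity value,
            # avoiding a second memory transaction for the metadata.
            return self.data[addr], self.metadata[addr]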
Referring again to FIGS. 2 and 3, the integrity engine 205 may be configured to perform integrity operations for verifying the integrity of the encrypted read data targeted by the read command. For example, the KMEE 105 may generate the plaintext read data by decrypting the encrypted read data using a domain-specific decryption key obtained from the domain-specific encryption key associated with D1. The CRCM 301 may then perform a third integrity operation on the plaintext read data to generate a third output. In an embodiment, the third integrity operation includes performing a CRC operation on the plaintext read data to generate a CRC value as the third output. Notably, the CRC operation performed in accordance with the third integrity operation is the same as the CRC operation performed by the CRCM 301 in accordance with the first integrity operation indicated above. At the same or another time, the MGM 305 may be configured to perform a fourth integrity operation using the encrypted read data. In an embodiment, the fourth integrity operation includes generating a MAC based at least in part on the encrypted read data and the integrity key (Ikey). Notably, the operations performed in accordance with the fourth integrity operation are the same as those performed in accordance with the second integrity operation indicated above. The optional truncation module 307 may truncate the third and fourth outputs (i.e., the CRC value and the MAC) to save memory (for example, in the same way as the first and second outputs), or the third and fourth outputs may be left untruncated (for example, if the first and second outputs were not truncated).

The IVGM 309 may then use the (optionally truncated) CRC value and the (optionally truncated) MAC to generate, or cause to be generated, a second integrity value. In an embodiment, the IVGM 309 may perform an exclusive-or (XOR) operation using the CRC value and the MAC to generate the second integrity value. In summary, the operations performed to generate the second integrity value may be substantially the same as the operations performed to generate the first integrity value, but may operate on (potentially) different starting data.

The IVM 311 is generally configured to perform, or cause to be performed, an integrity verification operation on the encrypted read data. In an embodiment, the IVM 311 may verify the integrity of the encrypted read data by comparing the first integrity value and the second integrity value. As noted above, if the first and second integrity values are the same, or differ by less than a threshold amount, the IVM 311 may determine that the integrity of the encrypted read data has been maintained (i.e., verification passed). However, if the first and second integrity values differ, or differ by more than the threshold amount, the IVM 311 can determine that the integrity of the encrypted read data has been compromised, e.g., by an unauthorized write attempted by DN (as the attack domain) or by another type of attack.

Another aspect of the present disclosure relates to a method for enabling verification of the integrity of data stored in a region of main memory allocated to a domain. In this regard, reference is made to FIGS. 4A-5B. FIG. 4A is a block diagram illustrating an example of a data write stream consistent with the present disclosure, in accordance with which a first integrity value is generated.
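Tying the preceding sketches together, the read-side verification performed by the integrity engine could look like the following. The decrypt argument is a stand-in for the domain-specific decryption performed by KMEE 105; the exception type and function name are, again, invented for illustration.

    class IntegrityError(Exception):
        pass

    def verify_read(region: AllocatedRegion, addr: int,
                    decrypt, d_key: bytes, ikey: bytes) -> bytes:
        # Fetch the ciphertext and its stored (first) integrity value together.
        ciphertext, iv1 = region.read(addr)
        # Stand-in for the domain-specific decryption performed by KMEE 105.
        plaintext = decrypt(d_key, ciphertext)
        # The third/fourth integrity operations mirror the first/second ones,
        # but run over the read data; the outputs are combined the same way.
        iv2 = make_integrity_value(plaintext, ciphertext,
                                   addr.to_bytes(8, "little"), ikey)
        if not integrity_check(iv1, iv2):
            raise IntegrityError(f"read at {addr:#x} failed integrity check")
        return plaintext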
FIG. 4B is a flowchart of example operations of a method 400 for writing data consistent with the present disclosure. As shown in FIG. 4B, the method 400 begins at block 401. The method may then proceed in parallel to blocks 403 and 407. In accordance with block 403, a CRC value may be calculated from the plaintext of the data targeted by a write command issued by the first domain (D1) (i.e., the plaintext write data). In accordance with block 407, the plaintext write data is encrypted using a domain-specific encryption key. These operations are illustrated in FIG. 4A, which shows the D1 plaintext (i.e., the plaintext write data) as the input to both the CRC calculation box and the encryption box, where the D1 encryption key is the domain-specific encryption key associated with D1. The calculation of the CRC value may be performed by a memory controller (such as the memory controller 203) and/or one or more modules thereof (such as the CRCM 301 of FIG. 3).

After calculating the CRC value in accordance with block 403, the method may proceed to optional block 405, in which the CRC value may optionally be truncated. This operation is illustrated in FIG. 4A by an output arrow extending from the CRC calculation box to the optional truncation box. Consistent with the foregoing description, truncation of the CRC value may be performed by a memory controller (such as the memory controller 203) or a module thereof (such as the optional truncation module 307 of FIG. 3).

Encrypting the plaintext write data in accordance with block 407 produces encrypted write data. The method may then proceed from block 407 to block 409, in accordance with which a MAC may be generated based at least in part on the encrypted write data. For example and as shown in FIG. 4A, the MAC may be generated based on the encrypted write data, the physical address of the memory area targeted by the write command, and an integrity key (Ikey) that may be domain-specific or shared among the domains hosted by the system on which the method is executed. As discussed above, the MAC can be generated by any suitable method, such as by using the Ikey to perform operations consistent with the SHA-256 HMAC algorithm on the encrypted write data. Such operations may be performed, for example, by the MGM 305 as previously described in connection with FIG. 3.

The method may then proceed to optional block 411, in which the MAC may optionally be truncated. This operation is illustrated in FIG. 4A by an output arrow extending from the MAC calculation box to the optional truncation box. Consistent with the foregoing description, truncation of the MAC may be performed by a memory controller (such as the memory controller 203) or a module thereof (such as the optional truncation module 307 of FIG. 3).

After generating the (optionally truncated) CRC value and the (optionally truncated) MAC, the method may proceed to block 413, in accordance with which a first integrity value may be generated. For example and as described above, the first integrity value may be generated by combining the CRC value and the MAC in any suitable manner. In an embodiment, the first integrity value is generated by XORing the CRC value with the MAC, as explained previously.
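A compact sketch of the write stream of FIGS. 4A and 4B, built on the helpers above, follows. The encrypt argument stands in for the domain-specific encryption engine; the block numbers in the comments refer to FIG. 4B, and the function name is hypothetical.

    def write_stream(region: AllocatedRegion, addr: int, plaintext: bytes,
                     encrypt, d_key: bytes, ikey: bytes) -> None:
        # Blocks 403 and 407 may run in parallel; shown sequentially here.
        ciphertext = encrypt(d_key, plaintext)                     # block 407
        iv1 = make_integrity_value(plaintext, ciphertext,          # blocks 403-413
                                   addr.to_bytes(8, "little"), ikey)
        region.write(addr, ciphertext, iv1)                        # block 415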
This concept is illustrated in FIG. 4A by an output arrow from the optional truncation boxes to the generate-IV1 (first integrity value) box.

The method may then proceed to block 415, in which the first integrity value and the encrypted write data may be written to an allocated area of the main memory of the host device. As discussed above, the memory controller may cause the encrypted write data to be written to data storage bits in the allocated memory area, and cause the first integrity value to be written to the metadata bits associated with those data storage bits. The method may then proceed to decision block 417, in accordance with which a determination may be made as to whether the method continues. If so, the method may loop back to block 401 and repeat. If not, the method may proceed to block 419 and end.

FIG. 5A is a block diagram illustrating one example of a data read stream consistent with the present disclosure, and FIG. 5B is a flowchart of example operations of a method 500, consistent with the present disclosure, for verifying the integrity of data written to an allocated area of memory. As shown in FIG. 5B, the method 500 begins at block 501. The method may then proceed in parallel to blocks 503 and 507.

In accordance with block 503, a second MAC may be generated based at least in part on the encrypted read data targeted by a read command issued by the first domain (e.g., D1) and the integrity key (Ikey). The second MAC may be generated in the same manner as the first MAC generated in accordance with block 409 of FIG. 4B. After the second MAC is generated, the method may proceed to optional block 505, in accordance with which the second MAC may optionally be truncated. These operations are shown in FIG. 5A, which depicts the encrypted read data (ciphertext), the physical address targeted by the read command, and the integrity key (Ikey) as inputs to the MAC box, where the output arrow of the MAC box leads to the optional truncation box.

In accordance with block 507, the encrypted read data may be decrypted by the memory controller or one or more modules thereof, as previously described. For example, a domain-specific decryption key obtained from the domain-specific encryption key associated with D1 may be used to decrypt the encrypted read data. These operations are illustrated in FIG. 5A, which shows the domain-specific encryption key (D1 encryption key) and the ciphertext (encrypted read data) as inputs to the decryption box. The result of this operation is the plaintext read data.

The method may then proceed to block 509, in accordance with which a second CRC value may be calculated from the plaintext read data. The calculation of the second CRC value may be performed by a memory controller (such as the memory controller 203) and/or one or more modules thereof (such as the CRCM 301 of FIG. 3). After the calculation of the second CRC value in accordance with block 509, the method may proceed to optional block 511, in accordance with which the second CRC value may optionally be truncated. This operation is illustrated in FIG. 5A by an output arrow extending from the CRC calculation box to the optional truncation box. Consistent with the foregoing description, truncation of the second CRC value may be performed by a memory controller (such as the memory controller 203) or a module thereof (such as the optional truncation module 307 of FIG. 3).
After generating the (optionally truncated) second CRC value and the (optionally truncated) second MAC, the method may proceed to block 513, in accordance with which a second integrity value may be generated. For example and as described above, the second integrity value may be generated in the same manner as the first integrity value, e.g., by XORing the second CRC value with the second MAC, as explained previously. This concept is illustrated in FIG. 5A by an output arrow from the optional truncation boxes to the generate-IV2 (second integrity value) box.

The method may then proceed to block 515, in accordance with which an integrity check may be performed. As explained previously, the integrity check may be performed by comparing the second integrity value with the first integrity value. After such a comparison, the method may proceed to decision block 517, in accordance with which a determination may be made as to whether the first and second integrity values match. If the first and second integrity values are the same, or differ by less than the threshold amount, the result of block 517 is "yes", and the method may proceed to block 519, in accordance with which the integrity check may be reported as successful (i.e., the integrity of the encrypted read data is confirmed). If the first and second integrity values differ, or differ by more than the threshold amount, the result of block 517 is "no", and the method may proceed to block 521, in accordance with which the integrity check may be reported as a failure (i.e., the integrity of the encrypted read data has been compromised).

In either case, the method may proceed from block 519 or 521 to block 523, in accordance with which a determination may be made as to whether the method continues. If so, the method may loop back to block 501. If not, the method may proceed to block 525 and end.
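A sketch of the read stream of FIGS. 5A and 5B, plus a toy end-to-end run, follows. The xor_cipher stand-in is for illustration only; the disclosure contemplates a real domain-specific encryption engine (KMEE), and all names below are hypothetical.

    from os import urandom

    def read_stream(region: AllocatedRegion, addr: int,
                    decrypt, d_key: bytes, ikey: bytes):
        try:
            data = verify_read(region, addr, decrypt, d_key, ikey)
            print("integrity check passed")        # block 519
            return data
        except IntegrityError:
            print("integrity check failed")        # block 521
            return None

    def xor_cipher(key: bytes, data: bytes) -> bytes:
        # Symmetric stand-in cipher (encrypts and decrypts) for the example.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    region = AllocatedRegion()
    d1_key, ikey = urandom(16), urandom(16)
    write_stream(region, 0x1000, b"secret page data", xor_cipher, d1_key, ikey)
    assert read_stream(region, 0x1000, xor_cipher, d1_key, ikey) == b"secret page data"

    # An unauthorized overwrite by the attack domain is detected on the next read:
    region.data[0x1000] = xor_cipher(urandom(16), b"secret page data")
    assert read_stream(region, 0x1000, xor_cipher, d1_key, ikey) is None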
Another aspect of the present disclosure relates to a computer-readable storage medium containing computer-readable instructions that, when executed by a processor, cause the processor (or a device containing the processor) to perform integrity operations consistent with the present disclosure. When used, the computer-readable storage medium may be in the form of an article of manufacture. In some examples, the computer-readable storage medium may be a non-transitory computer-readable medium or a machine-readable storage medium, such as, but not limited to, an optical, magnetic, or semiconductor storage medium. In any case, the storage medium may store various types of computer-executable instructions, such as instructions for performing the operations of the methods of one or more of FIGS. 4B and 5B. Non-limiting examples of suitable computer-readable storage media include any tangible media capable of storing electronic data, including volatile or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writable or rewritable memory, and so forth. Examples of computer-executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and so on.

Examples

The following examples pertain to further embodiments. The following examples of the present disclosure may include subject matter such as: systems, devices, methods, computer-readable storage media storing instructions that, when executed, cause a machine to perform actions based on the methods, and/or means for performing actions based on the methods, as provided below.

Example 1: According to this example, there is provided a method for verifying the integrity of data stored in a main memory of a host device, including using a memory controller of the host device to: generate a first integrity value in response to a write command from a first domain, the write command targeting a first physical address of a first allocated area of main memory; generate a second integrity value in response to a read command from the first domain, the read command targeting read data stored at the first physical address; and verify the integrity of the read data at least in part by comparing the first integrity value with the second integrity value; wherein generating the first integrity value includes: performing a first integrity operation on the plaintext of the write data targeted by the write command to produce a first output; performing a second integrity operation on the ciphertext of the write data to be written in response to the write command to produce a second output; and combining the first and second outputs to generate the first integrity value; and the method further includes utilizing the memory controller to cause the first integrity value to be written to the first allocated area of the main memory.

Example 2: This example includes any or all of the features of Example 1, wherein: the first integrity operation includes performing a cyclic redundancy check (CRC) to generate a first CRC value based on the plaintext of the write data; the second integrity operation includes generating a first message authentication code (MAC) from the ciphertext of the write data; and the first output is the first CRC value and the second output is the first MAC.

Example 3: This example includes any or all of the features of Example 2, wherein the first MAC is generated based at least in part on one or more of: the ciphertext of the write data, the first physical address, and an integrity key.

Example 4: This example includes any or all of the features of any of Examples 1 to 3, wherein generating the second integrity value includes: reading, from the first physical address, the ciphertext of the read data targeted by the read command; decrypting the ciphertext of the read data to obtain plaintext read data; performing a third integrity operation on the plaintext read data to obtain a third output; performing a fourth integrity operation on the ciphertext read data to obtain a fourth output; and combining the third and fourth outputs to generate the second integrity value.

Example 5: This example includes any or all of the features of Example 4, wherein: the third integrity operation includes performing a cyclic redundancy check (CRC) on the plaintext read data to generate a second CRC value; the fourth integrity operation includes generating a second message authentication code (MAC) from the ciphertext read data; and the third output is the second CRC value and the fourth output is the second MAC.

Example 6: This example includes any or all of the features of Example 5, wherein the second MAC is generated based at least in part on one or more of: the ciphertext read data, the first physical address, and the integrity key.
Example 7: This example includes any or all of the features of any of Examples 1 to 6, wherein the method further includes using the memory controller to simultaneously read the first integrity value and the ciphertext read data from the first allocated area of main memory.

Example 8: This example includes any or all of the features of any of Examples 1 to 7, wherein: verifying the integrity of the read data succeeds when the first integrity value and the second integrity value are the same; and verifying the integrity of the read data fails when the first integrity value and the second integrity value are different.

Example 9: This example includes any or all of the features of Example 8, wherein verifying the integrity of the read data is based solely on the comparison of the first integrity value with the second integrity value.

Example 10: This example includes any or all of the features of any of Examples 1 to 9, wherein the method further includes utilizing the memory controller to isolate a first allocated area of main memory from a second allocated area of main memory; wherein the first allocated area is associated with a first domain of the host device, and the second allocated area is associated with a second domain of the host device.

Example 11: This example includes any or all of the features of Example 10, wherein the memory controller is used to isolate the first allocated area from the second allocated area by using range-based controls.

Example 12: This example includes any or all of the features of Example 10 or 11, wherein the method further includes using the memory controller to encrypt data to be written to the first allocated area of main memory using a first domain-specific encryption key, and to encrypt data to be written to the second allocated area of main memory using a second domain-specific encryption key.

Example 13: This example includes any or all of the features of Example 1, wherein the method further includes using the memory controller to cause the ciphertext of the write data to be written to
a first data storage bit of the first allocated area of the main memory; and to cause the first integrity value to be written to a first metadata bit of the first allocated area of the main memory; wherein the first metadata bit is associated with the first data storage bit.

Example 14: According to this example, there is provided a non-transitory computer-readable storage medium including instructions that, when executed by a processor of a host system, cause the following operations to be performed, including using a memory controller of the host device to: generate a first integrity value in response to a write command from a first domain, the write command targeting a first physical address of a first allocated area of main memory; generate a second integrity value in response to a read command from the first domain, the read command targeting read data stored at the first physical address; and verify the integrity of the read data at least in part by comparing the first integrity value with the second integrity value; wherein generating the first integrity value includes: performing a first integrity operation on the plaintext of the write data targeted by the write command to produce a first output; performing a second integrity operation on the ciphertext of the write data to be written in response to the write command to produce a second output; and combining the first and second outputs to generate the first integrity value; and the operations further include using the memory controller to cause the first integrity value to be written to the first allocated area of main memory.

Example 15: This example includes any or all of the features of Example 14, wherein: the first integrity operation includes performing a cyclic redundancy check (CRC) to generate a first CRC value based on the plaintext of the write data; the second integrity operation includes generating a first message authentication code (MAC) from the ciphertext of the write data; and the first output is the first CRC value and the second output is the first MAC.

Example 16: This example includes any or all of the features of Example 15, wherein the first MAC is generated based at least in part on one or more of: the ciphertext of the write data, the first physical address, and an integrity key.

Example 17: This example includes any or all of the features of any of Examples 14 to 16, wherein generating the second integrity value includes: reading, from the first physical address, the ciphertext of the read data targeted by the read command; decrypting the ciphertext of the read data to obtain plaintext read data; performing a third integrity operation on the plaintext read data to obtain a third output; performing a fourth integrity operation on the ciphertext read data to obtain a fourth output; and combining the third and fourth outputs to generate the second integrity value.

Example 18: This example includes any or all of the features of Example 17, wherein: the third integrity operation includes performing a cyclic redundancy check (CRC) on the plaintext read data to generate a second CRC value; the fourth integrity operation includes generating a second message authentication code (MAC) from the ciphertext read data; and the third output is the second CRC value and the fourth output is the second MAC.

Example 19: This example includes any or all of the features of Example 18, wherein the second MAC is generated based at least in part on one or more of: the ciphertext read data, the first physical address, and the integrity key.

Example 20: This example includes any or all of the features of any of Examples 14 to 19, wherein the instructions,
when executed by a processor, further cause the following operations to be performed, including using the memory controller to read both the first integrity value and the ciphertext read data from the first allocated area of main memory.

Example 21: This example includes any or all of the features of any of Examples 14 to 20, wherein: verifying the integrity of the read data succeeds when the first integrity value and the second integrity value are the same; and verifying the integrity of the read data fails when the first integrity value and the second integrity value are different.

Example 22: This example includes any or all of the features of Example 21, wherein verifying the integrity of the read data is based solely on the comparison of the first integrity value with the second integrity value.

Example 23: This example includes any or all of the features of any of Examples 14 to 22, wherein the instructions, when executed by a processor, further cause the following operations to be performed, including using the memory controller to isolate a first allocated area of main memory from a second allocated area of main memory; wherein the first allocated area is associated with a first domain of the host device, and the second allocated area is associated with a second domain of the host device.

Example 24: This example includes any or all of the features of Example 23, wherein the memory controller is used to isolate the first allocated area from the second allocated area by using range-based controls.

Example 25: This example includes any or all of the features of any of Examples 23 and 24, wherein the instructions, when executed by a processor, further cause the following operations to be performed, including using the memory controller to encrypt data to be written to the first allocated area of main memory using a first domain-specific encryption key, and to encrypt data to be written to the second allocated area of main memory using a second domain-specific encryption key.

Example 26: This example includes any or all of the features of Example 14, wherein the instructions, when executed by a processor, further cause the following operations to be performed, including using the memory controller to: cause the ciphertext of the write data to be written to a first data storage bit of the first allocated area of the main memory; and cause the first integrity value to be written to a first metadata bit of the first allocated area of the main memory; wherein the first metadata bit is associated with the first data storage bit.

Example 27: According to this example, there is provided a memory controller for enabling integrity verification of data stored in a main memory of a host device, including circuitry configured to: generate a first integrity value in response to a write command from a first domain, the write command targeting a first physical address of a first allocated area of the main memory; generate a second integrity value in response to a read command from the first domain, the read command targeting read data stored at the first physical address; and verify the integrity of the read data at least in part by comparing the first integrity value with the second integrity value; wherein the circuitry is to generate the first integrity value at least in part by: performing a first integrity operation on the plaintext of the write data targeted by the write command to generate a first output; performing a second integrity operation on the ciphertext of
the write data to be written in response to the write command to generate a second output; and combining the first and second outputs to generate the first integrity value; and the circuitry is further configured to cause the first integrity value to be written to the first allocated area of main memory.

Example 28: This example includes any or all of the features of Example 27, wherein: the first integrity operation includes performing a cyclic redundancy check (CRC) to generate a first CRC value based on the plaintext of the write data; the second integrity operation includes generating a first message authentication code (MAC) from the ciphertext of the write data; and the first output is the first CRC value and the second output is the first MAC.

Example 29: This example includes any or all of the features of Example 28, wherein the circuitry is to generate the first MAC based at least in part on one or more of: the ciphertext of the write data, the first physical address, and an integrity key.

Example 30: This example includes any or all of the features of any of Examples 27 to 29, wherein the circuitry is to generate the second integrity value at least in part by: reading, from the first physical address, the ciphertext of the read data targeted by the read command; decrypting the ciphertext of the read data to obtain plaintext read data; performing a third integrity operation on the plaintext read data to obtain a third output; performing a fourth integrity operation on the ciphertext read data to obtain a fourth output; and combining the third and fourth outputs to generate the second integrity value.

Example 31: This example includes any or all of the features of Example 30, wherein: the third integrity operation includes performing a cyclic redundancy check (CRC) on the plaintext read data to generate a second CRC value; the fourth integrity operation includes generating a second message authentication code (MAC) from the ciphertext read data; and the third output is the second CRC value and the fourth output is the second MAC.

Example 32: This example includes any or all of the features of Example 31, wherein the circuitry is to generate the second MAC based at least in part on one or more of: the ciphertext read data, the first physical address, and the integrity key.

Example 33: This example includes any or all of the features of any of Examples 27 to 32, wherein the circuitry is further configured to simultaneously read the first integrity value and the ciphertext read data from the first allocated area of the main memory.

Example 34: This example includes any or all of the features of any of Examples 27 to 33, wherein: verifying the integrity of the read data succeeds when the first integrity value and the second integrity value are the same; and verifying the integrity of the read data fails when the first integrity value and the second integrity value are different.

Example 35: This example includes any or all of the features of Example 34, wherein the circuitry is to verify the integrity of the read data based solely on the comparison of the first integrity value with the second integrity value.

Example 36: This example includes any or all of the features of any of Examples 27 to 35, wherein the circuitry is further configured to isolate a first allocated area of the main memory from a second allocated area of the main memory; and the first allocated area is associated with a first domain of the host device, and the second allocated area is associated with a second domain of the host
device.

Example 37: This example includes any or all of the features of Example 36, wherein the circuitry is to isolate the first allocated area from the second allocated area by using range-based controls.

Example 38: This example includes any or all of the features of Example 35 or 36, wherein the circuitry is further configured to encrypt data to be written to the first allocated area of main memory using a first domain-specific encryption key, and to encrypt data to be written to the second allocated area of main memory using a second domain-specific encryption key.

Example 39: This example includes any or all of the features of Example 27, wherein the circuitry is further configured to cause the ciphertext of the write data to be written to a first data storage bit of the first allocated area of the main memory; and cause the first integrity value to be written to a first metadata bit of the first allocated area of the main memory; wherein the first metadata bit is associated with the first data storage bit.

As can be appreciated, the techniques of this disclosure provide a relatively lightweight mechanism for enabling verification of the integrity of memory allocated across multiple domains. Because the techniques described herein take advantage of per-domain encryption keys but do not require per-domain integrity keys, the complexity of managing per-domain integrity keys is avoided. Further, the first integrity value generated during a write operation may be stored in metadata bits associated with the data storage bits that store the encrypted write data. Therefore, during a read operation, the encrypted read data and the first integrity value associated with it can be read simultaneously (or nearly simultaneously) from the allocated area of memory in response to a read command. This can reduce or eliminate extra memory accesses, resulting in a corresponding reduction in the overhead required to implement integrity verification.

The terms and expressions employed herein are used as terms of description rather than of limitation, and there is no intention, in the use of such terms and expressions, to exclude any equivalents of the features (or portions thereof) shown and described, it being recognized that various modifications are possible within the scope of the claims. The claims are therefore intended to cover all such equivalents.
A method is described. The method includes recognizing different latencies and/or bandwidths between different levels of a system memory and different memory access requestors of a computing system. The system memory includes the different levels and different technologies. The method also includes allocating to each of the memory access requestors a respective region of the system memory having an appropriate latency and/or bandwidth.
Claims

1. A method, comprising: recognizing different latencies and/or bandwidths between different levels of a system memory and different memory access requestors of a computing system, the system memory comprising the different levels and different technologies; and allocating to each of the memory access requestors a respective region of the system memory having an appropriate latency and/or bandwidth.

2. The method of claim 1 wherein the different technologies comprise DRAM and an emerging non volatile memory technology.

3. The method of claim 2 wherein the emerging non volatile memory technology comprises chalcogenide.

4. The method of claim 1 wherein the different latencies and/or bandwidths further comprise different latencies and/or bandwidths between a read operation and a write operation.

5. The method of claim 1 wherein the different levels of the system memory comprise a level that is integrated in a same semiconductor chip package as a processor having CPU cores.

6. The method of claim 1 wherein the recognizing further comprises analyzing attributes of the different levels of the system memory from a record kept in BIOS of the computing system.

7. The method of claim 6 wherein the attributes are compatible with any of the following standards: ACPI; NVDIMM.

8. A machine readable storage medium having contained thereon program code that when processed by a computing system causes the computing system to perform a method, comprising: recognizing different latencies and/or bandwidths between different levels of a system memory and different memory access requestors of a computing system, the system memory comprising the different levels and different technologies; and allocating to each of the memory access requestors a respective region of the system memory having an appropriate latency and/or bandwidth.

9. The machine readable storage medium of claim 8 wherein the different technologies comprise DRAM and an emerging non volatile memory technology.

10. The machine readable storage medium of claim 9 wherein the emerging non volatile memory technology comprises chalcogenide.

11. The machine readable storage medium of claim 8 wherein the different latencies and/or bandwidths further comprise different latencies and/or bandwidths between a read operation and a write operation.

12. The machine readable storage medium of claim 8 wherein the different levels of the system memory comprise a level that is integrated in a same semiconductor chip package as a processor having CPU cores.

13. The machine readable storage medium of claim 8 wherein the recognizing further comprises analyzing attributes of the different levels of the system memory from a record kept in BIOS of the computing system.

14. The machine readable storage medium of claim 13 wherein the attributes are compatible with any of the following standards: ACPI; NVDIMM.
15. A computing system, comprising: a processor comprising a plurality of computing cores; a memory control hub; a system memory coupled to the memory control hub, the system memory comprising different levels and different technologies; a non volatile storage component that stores BIOS information of the computing system, the BIOS information further comprising respective latency and/or bandwidth attributes of the different levels of the system memory; and a machine readable medium containing program code that when processed by the computing system causes the computing system to perform a method, comprising: recognizing different latencies and/or bandwidths between the different levels of the system memory and different memory access requestors of the computing system; and allocating to each of the memory access requestors a respective region of the system memory having an appropriate latency and/or bandwidth based on the BIOS information.

16. The computing system of claim 15 wherein the different technologies comprise DRAM and an emerging non volatile memory technology.

17. The computing system of claim 16 wherein the emerging non volatile memory technology comprises chalcogenide.

18. The computing system of claim 15 wherein the different latencies and/or bandwidths further comprise different latencies and/or bandwidths between a read operation and a write operation.

19. The computing system of claim 15 wherein the different levels of the system memory comprise a level that is integrated in a same semiconductor chip package as a processor having CPU cores.

20. The computing system of claim 15 wherein the attributes are compatible with any of the following standards: ACPI; NVDIMM.
TECHNIQUES TO ALLOCATE REGIONS OF A MULTI-LEVEL, MULTI-TECHNOLOGY SYSTEM MEMORY TO APPROPRIATE MEMORY ACCESS INITIATORS

Field of Invention

The field of invention pertains generally to computing systems, and, more specifically, to techniques to allocate regions of a multi-level, multi-technology system memory to appropriate memory access initiators.

Background

A pertinent issue in many computer systems is the use of system memory. Here, as is understood in the art, a computing system operates by executing program code stored in system memory and reading/writing data that the program code operates on from/to system memory. As such, system memory is heavily utilized with many program code and data reads as well as many data writes over the course of the computing system's operation. Finding ways to improve system memory accessing performance is therefore a motivation of computing system engineers.

Currently, the Advanced Configuration and Power Interface (ACPI) provides for a System Locality Information Table (SLIT) that describes distances between nodes in a multi-processor computer system, and a Static Resource Affinity Table (SRAT) that associates each processor with a block of memory. The SLIT and SRAT are ideally used to couple processors with appropriately distanced memory banks so that desired performance levels for the applications that run on the processors can be achieved.

However, new system memory advances are introducing not only different system memory technologies but also different system memory architectures into a same comprehensive system memory. The current SLIT and SRAT tables do not take these newer system memory features into account.

Brief Description of the Drawings

A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:

Fig. 1 shows a multi-level memory implementation;
Fig. 2 shows a multi-processor computer system;
Fig. 3a shows different memory levels organized by latency from the perspective of a requestor;
Figs. 3b(i) and 3b(ii) show breakdowns for different 2LM components of the system memory of the system of Fig. 2;
Fig. 4 shows different configurations of different applications on different platforms with different system memory levels;
Figs. 5a and 5b show a root complex of attributes to align system memory requestors with appropriate system memory domains;
Fig. 6 shows a method to configure a computing system;
Fig. 7 shows an embodiment of a computing system.

Detailed Description

1.0 Multi-Level System Memory

One of the ways to improve system memory performance is to have a multi-level system memory. Fig. 1 shows an embodiment of a computing system 100 having a multi-tiered or multi-level system memory 112. According to various embodiments, a smaller, faster near memory 113 may be utilized as a cache for a larger far memory 114.

The use of cache memories in computing systems is well known. In the case where near memory 113 is used as a cache, near memory 113 is used to store an additional copy of those data items in far memory 114 that are expected to be more frequently called upon by the computing system. By storing the more frequently called upon items in near memory 113, the system memory 112 will be observed as faster because the system will often read items that are being stored in faster near memory 113.
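The performance benefit of the near-memory cache can be quantified with a simple expected-latency model, sketched below in Python. The latency figures are purely hypothetical; only the structure of the calculation is meant to be illustrative.

    # Hypothetical latencies (ns); real values depend on the technologies used.
    NEAR_NS, FAR_NS = 50, 300

    def avg_access_ns(hit_rate: float) -> float:
        # Average observed latency of the two-level system memory: hits are
        # served from near memory, misses pay the far memory access cost.
        return hit_rate * NEAR_NS + (1.0 - hit_rate) * FAR_NS

    # For example, a 90% near-memory hit rate makes the 300 ns far memory
    # appear as an effective ~75 ns memory:
    print(avg_access_ns(0.9))  # 75.0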
For an implementation using a write-back technique, the copy of data items in near memory 113 may contain data that has been updated by the CPU and is thus more up to date than the data in far memory 114. The process of writing back 'dirty' cache entries to far memory 114 ensures that such changes are not lost.

According to various embodiments, near memory cache 113 has lower access times than the lower tiered far memory 114 region. For example, the near memory 113 may exhibit reduced access times by having a faster clock speed than the far memory 114. Here, the near memory 113 may be a faster (e.g., lower access time), volatile system memory technology (e.g., high performance dynamic random access memory (DRAM)) and/or SRAM memory cells co-located with the memory controller 116. By contrast, far memory 114 may be either a volatile memory technology implemented with a slower clock speed (e.g., a DRAM component that receives a slower clock) or, e.g., a non volatile memory technology that is slower (e.g., longer access time) than volatile/DRAM memory or whatever technology is used for near memory.

For example, far memory 114 may be comprised of an emerging non volatile random access memory technology such as, to name a few possibilities, a phase change based memory, a three dimensional crosspoint memory, "write-in-place" non volatile main memory devices, memory devices that use chalcogenide, multiple level flash memory, multi-threshold level flash memory, a ferro-electric based memory (e.g., FRAM), a magnetic based memory (e.g., MRAM), a spin transfer torque based memory (e.g., STT-RAM), a resistor based memory (e.g., ReRAM), a Memristor based memory, universal memory, Ge2Sb2Te5 memory, programmable metallization cell memory, amorphous cell memory, Ovshinsky memory, etc. Any of these technologies may be byte addressable so as to be implemented as a main/system memory in a computing system.

Emerging non volatile random access memory technologies typically have some combination of the following: 1) higher storage densities than DRAM (e.g., by being constructed in three-dimensional (3D) circuit structures (e.g., a crosspoint 3D circuit structure)); 2) lower power consumption densities than DRAM (e.g., because they do not need refreshing); and/or 3) access latency that is slower than DRAM yet still faster than traditional non-volatile memory technologies such as FLASH. The latter characteristic in particular permits various emerging non volatile memory technologies to be used in a main system memory role rather than a traditional mass storage role (which is the traditional architectural location of non volatile storage).

Regardless of whether far memory 114 is composed of a volatile or non volatile memory technology, in various embodiments far memory 114 acts as a true system memory in that it supports finer grained data accesses (e.g., cache lines) rather than the larger "block" or "sector" based accesses associated with traditional, non volatile mass storage (e.g., a solid state drive (SSD) or hard disk drive (HDD)), and/or otherwise acts as an (e.g., byte) addressable memory that the program code being executed by the processor(s) of the CPU operates out of.

Because near memory 113 acts as a cache, near memory 113 may not have formal addressing space. Rather, in some cases, far memory 114 defines the individually addressable memory space of the computing system's main memory. In various embodiments near memory 113 acts as a cache for far memory 114 rather than acting as a last level CPU cache.
Generally, a CPU cache is optimized for servicing CPU transactions and will add significant penalties (such as cache snoop overhead and cache eviction flows in the case of a cache hit) for other system memory users, such as Direct Memory Access (DMA)-capable devices in a Peripheral Control Hub. By contrast, a memory side cache is designed to handle, e.g., all accesses directed to system memory, irrespective of whether they arrive from the CPU, from the Peripheral Control Hub, or from some other device such as a display controller.

In various embodiments, system memory may be implemented with dual in-line memory module (DIMM) cards where a single DIMM card has both volatile (e.g., DRAM) and (e.g., emerging) non volatile memory semiconductor chips disposed on it. The DRAM chips effectively act as an on-board cache for the non volatile memory chips on the DIMM card. Ideally, the more frequently accessed cache lines of any particular DIMM card will be accessed from that DIMM card's DRAM chips rather than its non volatile memory chips. Given that multiple DIMM cards may be plugged into a working computing system, and that each DIMM card is only given a section of the system memory addresses made available to the processing cores 117 of the semiconductor chip that the DIMM cards are coupled to, the DRAM chips are acting as a cache for the non volatile memory that they share a DIMM card with rather than as a last level CPU cache.

In other configurations, DIMM cards having only DRAM chips may be plugged into a same system memory channel (e.g., a DDR channel) with DIMM cards having only non volatile system memory chips. Ideally, the more frequently used cache lines of the channel reside in the DRAM DIMM cards rather than the non volatile memory DIMM cards. Thus, again, because there are typically multiple memory channels coupled to a same semiconductor chip having multiple processing cores, the DRAM chips are acting as a cache for the non volatile memory chips that they share a same channel with rather than as a last level CPU cache.

In yet other possible configurations or implementations, a DRAM device on a DIMM card can act as a memory side cache for a non volatile memory chip that resides on a different DIMM and is plugged into a different channel than the DIMM having the DRAM device. Although the DRAM device may potentially service the entire system memory address space, entries into the DRAM device are based in part on reads performed on the non volatile memory devices and not just on evictions from the last level CPU cache. As such, the DRAM device can still be characterized as a memory side cache.

In another possible configuration, a memory device such as a DRAM device functioning as near memory 113 may be assembled together with the memory controller 116 and processing cores 117 onto a single semiconductor device or within a same semiconductor package. Far memory 114 may be formed by other devices, such as slower DRAM or non-volatile memory, and may be attached to, or integrated in, that device.

In still other embodiments, at least some portion of near memory 113 has its own system address space apart from the system addresses that have been assigned to far memory 114 locations. In this case, the portion of near memory 113 that has been allocated its own system memory address space acts, e.g., as a higher priority level of system memory (because it is faster than far memory) rather than as a memory side cache. In other or combined embodiments, some portion of near memory 113 may also act as a last level CPU cache.
In various embodiments, when at least a portion of near memory 113 acts as a memory side cache for far memory 114, the memory controller 116 and/or near memory 113 may include local cache information (hereafter referred to as "Metadata") 120 so that the memory controller 116 can determine whether a cache hit or cache miss has occurred in near memory 113 for any incoming memory request.

In the case of an incoming write request, if there is a cache hit, the memory controller 116 writes the data (e.g., a 64-byte CPU cache line or a portion thereof) associated with the request directly over the cached version in near memory 113. Likewise, in the case of a cache miss, in an embodiment, the memory controller 116 also writes the data associated with the request into near memory 113, which may cause the eviction from near memory 113 of another cache line that was previously occupying the near memory 113 location where the new data is written. However, if the evicted cache line is "dirty" (which means it contains the most recent, or up-to-date, data for its corresponding system memory address), the evicted cache line will be written back to far memory 114 to preserve its data content.

In the case of an incoming read request, if there is a cache hit, the memory controller 116 responds to the request by reading the version of the cache line from near memory 113 and providing it to the requestor. By contrast, if there is a cache miss, the memory controller 116 reads the requested cache line from far memory 114 and not only provides the cache line to the requestor (e.g., a CPU) but also writes another copy of the cache line into near memory 113. In various embodiments, the amount of data requested from far memory 114 and the amount of data written to near memory 113 will be larger than that requested by the incoming read request. Using a larger data size from far memory or to near memory increases the probability of a cache hit for a subsequent transaction to a nearby memory location.

In general, cache lines may be written to and/or read from near memory and/or far memory at different levels of granularity (e.g., writes and/or reads only occur at cache line granularity, with, e.g., byte addressability for writes and/or reads handled internally within the memory controller; byte granularity, e.g., true byte addressability in which the memory controller writes and/or reads only an identified one or more bytes within a cache line; or granularities in between). Additionally, note that the size of the cache line maintained within near memory and/or far memory may be larger than the cache line size maintained by CPU level caches.

Different types of near memory caching implementations are possible. Examples include direct mapped, set associative, and fully associative. Depending on the implementation, the ratio of near memory cache slots to far memory addresses that map to the near memory cache slots may be configurable or fixed.
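The hit/miss and dirty-eviction handling just described can be sketched as a minimal direct-mapped memory side cache, shown below in Python. It is a model only: real controllers consult the Metadata 120 in hardware, and the class name and slot layout here are invented for illustration.

    class MemorySideCache:
        """Toy direct-mapped near-memory cache in front of far memory."""

        def __init__(self, num_slots: int, far_mem: dict):
            self.slots = [None] * num_slots  # each slot: (tag, line, dirty) or None
            self.far = far_mem               # far memory: address -> cache line
            self.n = num_slots

        def _slot(self, addr: int) -> int:
            return addr % self.n

        def write(self, addr: int, line: bytes) -> None:
            victim = self.slots[self._slot(addr)]
            if victim and victim[0] != addr and victim[2]:
                # The evicted line is dirty: write it back to far memory so
                # the most recent data for its address is not lost.
                self.far[victim[0]] = victim[1]
            # Hit or miss, the new data is installed in near memory as dirty.
            self.slots[self._slot(addr)] = (addr, line, True)

        def read(self, addr: int) -> bytes:
            entry = self.slots[self._slot(addr)]
            if entry and entry[0] == addr:
                return entry[1]               # cache hit: serve from near memory
            if entry and entry[2]:
                self.far[entry[0]] = entry[1]  # write back the dirty victim
            line = self.far[addr]              # cache miss: fetch from far memory
            self.slots[self._slot(addr)] = (addr, line, False)
            return line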
2.0 Multiple Processor Computing Systems With Multi-Level System Memory

Fig. 2 shows an exemplary architecture for a multi-processor computing system. As observed in Fig. 2, the multi-processor computer system includes two platforms 201_1, 201_2 interconnected by a communication link 212. Both platforms include a respective processor 202_1, 202_2, each having multiple CPU cores 203_1, 203_2. The processors 202_1, 202_2 of the exemplary system of Fig. 2 each include an I/O control hub 205_1, 205_2 that permits each platform to directly communicate with some form of I/O, such as a network 206_1, 206_2 or a mass storage device 207_1, 207_2 (e.g., a block/sector based disk drive, solid state drive, non volatile storage device, or some combination thereof). As with the system of Fig. 1, an I/O control hub is free to issue a request directly to its local memory control hub. Platforms 201_1, 201_2 may be designed such that I/O control hubs 205_1, 205_2 are directly coupled to their local CPU cores 203_1, 203_2 and/or their local memory control hub (MCH) 204_1, 204_2.

Note that a wide range of different systems can loosely or directly fit the exemplary architecture of Fig. 2. For example, platforms 201_1 and 201_2 may be different multi-chip modules that plug into sockets on a same motherboard. Here, link 212 corresponds to a signal trace in the motherboard. By contrast, platform 201_1 may be a first multi-chip module that plugs into a first motherboard and platform 201_2 may be a second multi-chip module that plugs into a second, different motherboard. In this case, the system includes, e.g., multiple motherboards each having multiple platforms, and link 212 corresponds to a backplane connection or other motherboard-to-motherboard connection within a same hardware box chassis. In yet another embodiment, platforms 201_1, 201_2 are within different hardware box chassis and link 212 corresponds to a local area network link or even a wide area network link (or even an Internet connection).

The multi-processor system of Fig. 2 is also somewhat simplistic in that only two platforms 201_1, 201_2 are depicted. In various implementations, a multi-processor computing system may include many platforms, where link 212 is replaced by an entire network that communicatively couples the various platforms. The network could be composed of various links of all kinds of different distances (e.g., any one or more of intra-motherboard, backplane, local area network, and wide area network links). Multi-processor systems may also include platforms that are functionally decomposed as compared to the platforms observed in Fig. 2. For example, some platforms may only include CPU cores, other platforms may only include a memory control hub and system memory slice, whereas other platforms may include an I/O control hub (in which case, e.g., an I/O hub can communicate directly with a processing core). Various combinations of these sub-components may also be combined in various ways to form other types of platforms. In various implementations, however, the various platforms are interconnected through a network as described just above. For simplicity, the remainder of the discussion will largely refer to the multi-processor system of Fig. 2 because pertinent points of the instant application can largely be described from it.

Each platform 201_1, 201_2 also includes a "slice" of system memory 208_1, 208_2 that is coupled to a memory control hub 204_1, 204_2 within its respective platform's processor 202_1, 202_2. As is known in the art, the storage space of system memory is defined by its address space.
Here, as a simple example, system memory component 208_1 may be allocated a first range of system memory addresses and system memory component 208_2 is allocated a second, different range of system memory addresses.

With the understanding that applications running on any CPU core in the system can potentially refer to any system memory address, an application that is running on a CPU core within processor 202_1 may not only refer to instructions and/or data in system memory component 208_1 but may also refer to instructions and/or data in system memory component 208_2. In the case of the latter, a system memory request is sent from processor 202_1 to processor 202_2 over link 212. The memory control hub 204_2 of processor 202_2 services the request (e.g., by reading/writing from/to the system memory address within system memory slice 208_2). In the case of a read request, the instruction/data to be returned is sent from processor 202_2 to processor 202_1 over communication link 212. A sketch of this address-based routing appears at the end of this section.

As observed in Fig. 2, each system memory slice 208_1, 208_2 is a multi-level system memory solution. For the sake of example, the multi-level system memory of both slices 208_1, 208_2 is observed to include: 1) a first level of system memory 209_1, 209_2; 2) a second level of system memory that may have its own unique address space and/or behave as a memory side cache within system memory 210_1, 210_2; and 3) a lowest system memory level 211_1, 211_2 based on an emerging non-volatile memory technology.

As just one possible physical implementation of this particular architecture, first level memory 209_1, 209_2 may be implemented as DRAM devices that are stacked on top of or otherwise integrated in the same semiconductor chip package as their respective processor 202_1, 202_2. By contrast, second level memory 210_1, 210_2 may reside outside the semiconductor chip package of its respective processor 202_1, 202_2. For example, second level memory 210_1, 210_2 may be implemented as DRAM devices disposed on DIMM cards that plug into memory channels that are coupled to their respective processor's memory control hub 204_1, 204_2. Here, the DRAM devices may be given their own system memory address space and therefore act as a second priority region of system memory beneath levels 209_1, 209_2. In this case, the DRAM devices of the second level 210_1, 210_2, being located outside the package of their respective processor 202_1, 202_2, are apt to have longer latencies and will therefore be a slower level of system memory than the first level 209_1, 209_2.

Alternatively, DRAM devices within the second level 210_1, 210_2 may behave as a memory side cache for their respective lower non-volatile system memory level 211_1, 211_2. As a further alternative possibility, some portion of the DRAM devices in the second level 210_1, 210_2 may be allocated their own unique system memory address space while another portion of the memory devices in the second level 210_1, 210_2 may be configured to behave as a memory side cache for the lower non-volatile system memory level 211_1, 211_2.
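The slice-based addressing described above can be illustrated with a short C sketch, assuming a purely illustrative address map; mch_service_locally and send_over_link are placeholder hooks standing in for the local memory control hub and for link 212, not a real interface.

```c
#include <stdint.h>
#include <stdio.h>

/* One entry per system memory slice; ranges and platform IDs are illustrative. */
struct slice_range {
    uint64_t base;      /* first system memory address of the slice */
    uint64_t limit;     /* one past the last address of the slice   */
    int      platform;  /* platform whose MCH owns this range       */
};

/* Placeholder hooks standing in for the local MCH and for link 212. */
static void mch_service_locally(uint64_t addr)
{
    printf("local MCH services 0x%llx\n", (unsigned long long)addr);
}
static void send_over_link(int platform, uint64_t addr)
{
    printf("forward 0x%llx to platform %d over the link\n",
           (unsigned long long)addr, platform);
}

void route_request(const struct slice_range *map, int nranges,
                   int local_platform, uint64_t addr)
{
    for (int i = 0; i < nranges; i++) {
        if (addr >= map[i].base && addr < map[i].limit) {
            if (map[i].platform == local_platform)
                mch_service_locally(addr);             /* e.g., slice 208_1 */
            else
                send_over_link(map[i].platform, addr); /* e.g., slice 208_2 */
            return;
        }
    }
}
```

In a real system this decode is performed in hardware by the memory control hub; the sketch only illustrates the address-range decision that determines whether a request stays local or crosses the link.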
3.0 Different Performance of Different Memory Levels

In general, the latency of a system memory component from the perspective of a requestor that issues read and/or write requests to the system memory component (such as an application or operating system instance that is executing on a processing core) is a function of the physical distance between the requestor and the memory component and of the technology of the physical memory component. Fig. 3a elaborates on this general property in more detail.

Column 301 depicts a ranking, in terms of observed speed, of the different system memory components discussed above with respect to Fig. 2 from the perspective of an application that executes on processor 202_1. By contrast, column 302 depicts a ranking, again in terms of observed speed, of the different system memory components discussed above with respect to Fig. 2 from the perspective of an application that executes on processor 202_2. In both columns 301, 302 a higher system memory component will exhibit smaller access times (i.e., will be observed by an application as being faster) than a lower system memory component.

As such, referring to column 301, note that all system memory components 209_1, 210_1, 211_11, 211_12 that are integrated with the platform 201_1 having processor 202_1 are observed to be faster, for an application that executes on processor 202_1, than any of the system memory components 209_2, 210_2, 211_21, 211_22 that are integrated with the other platform 201_2. Likewise, referring to column 302, note that all system memory components 209_2, 210_2, 211_21, 211_22 that are integrated with the platform 201_2 having processor 202_2 are observed to be faster, for an application that executes on processor 202_2, than any of the system memory components 209_1, 210_1, 211_11, 211_12 that are integrated with the other platform 201_1. Here, the observed decrease in performance of a system memory component from an off-platform application is largely a consequence of link 212. In various embodiments link 212 may correspond to a large physical distance which significantly adds to the propagation delay time of issued requests. Even in the case, however, where the physical distance associated with link 212 is not appreciably large, there may nevertheless exist on average noticeable queuing delays associated with placing traffic on the link 212 or receiving traffic from the link 212. Thus, as a general observation, local system memory components will tend to be faster from the perspective of a requestor than more remote system memory components.

This same general trend is also observable within the performance rankings of a same platform. That is, within both platforms, the internal DRAM level 209 is ranked higher than the external DRAM level 210. Recall that the internal DRAM 209 was integrated in a same semiconductor chip package as its processor 202 whereas the external DRAM 210 was physically located outside the package. Because reaching the external DRAM 210 requires signaling that traverses a longer physical distance, the internal DRAM 209 will exhibit smaller access times than an external DRAM device on the same platform.

Fig. 3a also shows that technology and system architecture can affect the observed latencies of the system memory components and that different latencies may even be observed for read requests and write requests issued to a same memory technology.

With respect to technology, note that the non-volatile memory components 211 are slower than the DRAM memory components 209, 210, and, moreover, that with respect to non-volatile memory components 211_1, 211_2, write operations can be noticeably slower than read operations. For example, as depicted in Fig. 3a, the NVRAM region having a memory side cache 211_11_X (where X can be R or W) exhibits faster speed for reads (depicted with box 211_11_R) than writes (depicted with box 211_11_W).
Because reads and writes are targeted to a same memory space, the system address space SAR_4 that is allocated for the NVRAM component having a memory side cache 211_11_X is drawn as being associated with both of its READ and WRITE depictions in Fig. 3a. A similar construction is observed throughout Fig. 3a for NVRAM memory component 211_2.

Although only exemplary, note that reads for an NVRAM technology that does not have a memory side cache (e.g., as represented by box 211_12_R) can be faster than writes to an NVRAM technology having a memory side cache (e.g., as represented by box 211_11_W).

Unlike the NVRAM technology components of Fig. 3a, note that DRAM demonstrates approximately the same speed for reads and writes and, as such, the DRAM components of Fig. 3a do not break down into separate boxes for reads and writes.

Apart from generally representing latency, a diagram like Fig. 3a, or one similar to it, can also stand to represent bandwidth as opposed to latency. Here, latency corresponds to the average time (e.g., in microseconds) it takes for a request to complete. By contrast, bandwidth corresponds to the average throughput (e.g., in Megabytes/sec) that a particular memory component can support if a constant stream of requests were to be directed to it. Both are directed to the concept of speed but measure it in different ways.

Thus, a system can potentially be characterized with two sets of diagrams that demonstrate the general trends observed in Fig. 3a: a first diagram that delineates based on latency and another diagram that delineates based on bandwidth. For simplicity Fig. 3a only presents one diagram when in reality two separate diagrams could be presented. In practice, different applications may be more concerned with one over the other. For example, a first application that does not generate a lot of requests to system memory, but whose performance remains very sensitive to how fast its relatively few memory requests will be serviced, will be very dependent on latency but not so much on bandwidth. By contrast, an application that streams large amounts of requests to system memory will perhaps be as concerned with bandwidth as with latency.

With respect to architecture, note that a non-volatile memory component that also has a memory side cache 211_X1 will be comparatively faster than a non-volatile memory component that does not have a memory side cache 211_X2. That is, reads of a non-volatile memory component having a memory side cache will be faster than reads of a non-volatile memory component that does not have a memory side cache. Likewise, writes to a non-volatile memory component having a memory side cache will be faster than writes to a non-volatile memory component that does not have a memory side cache.

Here, Fig. 3a assumes, e.g., that some portion of the external DRAM 210 is given its own unique system memory address space whereas another portion of the external DRAM 210 is used to implement a memory side cache for a portion of the non-volatile system memory 211. This particular system memory component level is labeled 211_X1 in Fig. 3a (where X can be 1 or 2). Another portion of the non-volatile system memory 211, labeled in Fig. 3a as 211_X2, does not have any memory side cache service.
Thus, whereas requests directed to a 211_X1 memory level are handled according to the near memory/far memory semantic behavior described above in the preceding section, requests directed to a 211_X2 level are serviced directly from the non-volatile memory 211 without any look-up into a near memory. Because the 211_X2 level does not receive any performance speed-up from a near memory cache, the 211_X2 level will be observed to be slower than the 211_X1 level.

Figs. 3b(i) and 3b(ii) elaborate on two other architectural features that can further compartmentalize the different memory components. Referring to Fig. 3b(i), level 211_11 (which exhibits near memory/far memory behavior on platform 201_1) can be further compartmentalized by allocating more or less near memory cache space per amount of far memory space. Here, as just an example, level 311 provides twice as much near memory cache space per unit of far memory storage space as does level 312. This arrangement can be achieved, as just one example, by having the DRAM DIMMs provide near memory service only to those non-volatile memory DIMMs that are plugged into the same memory channel. By having a first memory channel configured with more DRAM DIMMs than a second memory channel where both memory channels have the same number of non-volatile memory DIMMs (or, alternatively, both channels have the same number of DRAM DIMMs but different numbers of non-volatile memory DIMMs), different ratios of near memory cache space to far memory space can be effected. Because level 312 has less normalized cache space than level 311, level 312 will be observed as being slower than level 311 and is therefore placed beneath it in the visual hierarchy of Fig. 3b(i).

A second architectural feature is that different near memory cache eviction policies may be instantiated for either of the memory levels 311, 312 of Fig. 3b(i). Here, for instance, the memory control hub 204_1 of platform 201_1 is designed to implement the near memory for both of levels 311, 312 as a set associative cache or a fully associative cache and can therefore evict cache lines from a particular set based on different criteria. For example, if a set is full and a next cache line needs to be added to the set, the cache line that is chosen for eviction may either be the cache line that has been least recently used (accessed) in the set or the cache line that has been least recently added to the set (the oldest cache line in the set). A sketch of this choice of victim appears below.

Fig. 3b(i) therefore shows the already compartmentalized non-volatile memory with near memory cache level 211_11 being further compartmentalized into a least recently used (LRU) partition 313 and a least recently added (LRA) partition 314. Note that different software applications may behave differently based on which cache eviction policy is used. That is, some applications may be faster with LRU eviction whereas other applications may be faster with LRA eviction. As described above at the end of section 1.0, various forms of caching may be implemented by the hardware. Some of these, such as direct mapped, may impose a particular type of cache eviction policy such that varying flavors of cache eviction policy are not readily configurable within a same system. In this case, e.g., the breakdown of SAR4_1 and SAR4_2 into further sub-levels as depicted in Fig. 3b(i) may not be realizable. For simplicity the remainder of the discussion will assume that different cache eviction policies can be configured.
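The following is a minimal C sketch of the two eviction criteria, assuming per-line timestamps are tracked for each way of an associative set; the structure and field names are hypothetical, not hardware-defined.

```c
#include <stdint.h>

enum evict_policy { EVICT_LRU, EVICT_LRA };

struct set_line {
    uint64_t tag;
    uint64_t last_access;  /* timestamp of most recent use       */
    uint64_t inserted_at;  /* timestamp when the line was filled */
};

/* Pick a victim within one associative set under either policy:
 * LRU evicts the line least recently used (smallest last_access);
 * LRA evicts the line least recently added (smallest inserted_at). */
int choose_victim(const struct set_line *set, int ways,
                  enum evict_policy policy)
{
    int victim = 0;
    for (int i = 1; i < ways; i++) {
        uint64_t cur  = (policy == EVICT_LRU) ? set[i].last_access
                                              : set[i].inserted_at;
        uint64_t best = (policy == EVICT_LRU) ? set[victim].last_access
                                              : set[victim].inserted_at;
        if (cur < best)
            victim = i;
    }
    return victim;
}
```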
Fig. 3b(ii) shows that the non-volatile memory component having near memory cache 211_21 of the second platform can also be broken down according to the same scheme as observed in Fig. 3b(i).

Figs. 3a and 3b(i)/(ii) indicate that each of the different system memory levels/partitions can be allocated its own system memory address range. For example, as depicted in Fig. 3a, the system memory address space of the slice of system memory 208_1 associated with the first platform 201_1 corresponds to a first system address range SAR0 that is allocated to the internal DRAM 209_1 of the first platform 201_1, a second system memory address range SAR2 that is allocated to the portion of the external DRAM 210_1 that is allocated unique system memory address space, a third system memory address range SAR4 that is allocated to the portion of non-volatile memory 211_11 that receives near memory cache service, and a fourth system memory address range SAR6 that is allocated to the portion of non-volatile memory 211_12 that does not receive near memory cache service.

Likewise, the system memory address space of the slice of system memory 208_2 associated with the second platform 201_2 corresponds to a fifth system address range SAR1 that is allocated to the internal DRAM 209_2 of the second platform 201_2, a sixth system memory address range SAR3 that is allocated to the portion of the external DRAM 210_2 that is allocated unique system memory address space, a seventh system memory address range SAR5 that is allocated to the portion of non-volatile memory 211_21 that receives near memory cache service, and an eighth system memory address range SAR7 that is allocated to the portion of non-volatile memory 211_22 that does not receive near memory cache service.

As observed in Fig. 3b(i), the SAR4 portion 211_11 can further be divided into two more ranges SAR4_1 and SAR4_2 to accommodate the two different levels having different normalized caching space. The SAR4_1 and SAR4_2 levels can also each be further divided into two more system memory address ranges (i.e., SAR4_1 can be divided into SAR4_11 and SAR4_12, and SAR4_2 can be divided into SAR4_21 and SAR4_22) to accommodate the different cache eviction partitions of levels 311 and 312, respectively.

For ease of drawing, neither of Figs. 3b(i) and 3b(ii) distinguishes between read speed and write speed. Here, for instance, for the same address space, regions 311 and 312 of Fig. 3b(i) could be further split to show different speeds for reads and writes. A similar enhancement could be made to Fig. 3b(ii).

4.0 Exposing Different System Memory Levels/Partitions To Software To Enable Configuration Of Different Performance Levels For Different Software Applications

With all the different levels/partitions that the system memory can be broken down into, and all the different performance dependencies (e.g., reads vs. writes), different software applications can be assigned to operate out of the different system memory levels/partitions in accordance with their actual requirements or objectives.
For instance, if a first application (e.g., a video streaming application) would better serve its objective by executing faster, then the first application can be allocated a memory address space that corresponds to a lower latency read time and higher read bandwidth system memory portion, such as the internal and/or external DRAM portions 209, 210 of the same platform that the application executes from (i.e., the higher ranked memory components in Fig. 3a), or perhaps one or both of the NVRAM levels (with memory side cache and without memory side cache).

By contrast, if a second application (e.g., an archival data storage application) does not necessarily need to operate with the fastest of speeds, the second application can be allocated a memory address space that corresponds to a higher latency read or write time and lower read or write bandwidth system memory portion, such as one of the non-volatile memory portions of its local platform or even of a remote platform.

Fig. 4 shows a general approach to assigning certain applications (or other software components) that execute on the system of Fig. 2 to certain appropriate system memory levels/partitions in view of the applications' desired performance level. For simplicity, Fig. 4 and the example described herein do not contemplate different speed metrics (e.g., latency vs. bandwidth) nor differences in read or write performance.

Here, the applications that run on platform 201_1 can, e.g., be ranked in terms of desired performance level. Fig. 4 shows a simplistic continuum of the applications that run on platform 201_1 based on their desired performance level. Here, application X1 has a highest desired performance level, application Y1 has a medium desired performance level and application Z1 has a lowest desired performance level.

As such, application X1 is allocated memory address ranges SAR0 and/or SAR2 to cause application X1 to execute out of either or both of the memory components 209_1, 210_1 that have the lowest latency for an application that runs on platform 201_1. By being configured to operate out of the fastest memory available to it, application X1 should demonstrate the highest performance.

By contrast, application Y1 is allocated memory address ranges SAR4 and/or SAR6 to cause application Y1 to execute out of either or both of the memory components 211_11, 211_12 that have modest latency for an application that runs on platform 201_1. By being configured to operate out of a modest latency memory that is available to it, application Y1 should demonstrate medium performance.

Further still, application Z1 is allocated memory address ranges SAR5 and/or SAR7 to cause application Z1 to execute off platform out of either or both of memory components 211_21, 211_22, which not only reside on platform 201_2 but are also the higher latency memories on platform 201_2. By being configured to operate out of the slowest memory available to it, application Z1 should demonstrate the lowest performance.

An analogous configuration is also observed in Fig. 4 for applications X2, Y2 and Z2 that execute from platform 201_2. Note that the configurations depicted in Fig. 4 are somewhat simplistic in that each application is configured to operate out of no more than two different memory components, and both memory components are contiguous on the memory latency scale. Other embodiments may configure an application to execute out of more than two memory components. Further still, such memory components need not be contiguous on the memory latency scale. Fig. 4 is also simplistic in that either of applications Y1 and Y2 could be configured to operate out of less than all of the narrower system memory address ranges discussed in Figs. 3b(i) and 3b(ii), respectively. A sketch of this tiered assignment appears below.
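As an illustration of the tiered assignment of Fig. 4, the following C sketch maps a desired performance tier to a system address range using the platform 201_1 ranking described above; the table, tier values, and function names are illustrative assumptions, not part of the described system.

```c
#include <stddef.h>

/* Illustrative performance tiers mirroring applications X, Y and Z. */
enum perf_tier { TIER_HIGH, TIER_MEDIUM, TIER_LOW };

struct sar_region {
    const char    *name;       /* e.g., "SAR0"                      */
    enum perf_tier tier;       /* observed speed from platform 201_1 */
};

/* Ranking as seen from platform 201_1 in Fig. 4; ordering illustrative. */
static const struct sar_region platform1_map[] = {
    { "SAR0", TIER_HIGH   },   /* internal DRAM 209_1              */
    { "SAR2", TIER_HIGH   },   /* external DRAM 210_1              */
    { "SAR4", TIER_MEDIUM },   /* local NVRAM with near memory cache    */
    { "SAR6", TIER_MEDIUM },   /* local NVRAM without near memory cache */
    { "SAR5", TIER_LOW    },   /* remote NVRAM with near memory cache   */
    { "SAR7", TIER_LOW    },   /* remote NVRAM without near memory cache */
};

/* Return the first region whose tier matches the application's need. */
const char *assign_region(enum perf_tier desired)
{
    for (size_t i = 0;
         i < sizeof(platform1_map) / sizeof(platform1_map[0]); i++) {
        if (platform1_map[i].tier == desired)
            return platform1_map[i].name;
    }
    return NULL;  /* no region at that tier */
}
```

For example, assign_region(TIER_MEDIUM) would return "SAR4" under this table, mirroring the Y1 assignment above.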
Here, an application's execution from a particular platform may actually be implemented by executing its program code on a particular processing core of the platform. As such, the application's software thread and its associated register space are physically realized on the core even though its memory accesses may be directed to some other platform. Multi-threaded applications can execute on a same core, on different cores of a same platform, or possibly even on different cores of different platforms.

In order to configure a computing system such that its applications will execute out of an appropriate one or more levels of system memory, an operating system instance and/or virtual machine monitor will need some visibility into the different system memory levels and their latency relationship with the different processing cores of the system.

It is pertinent to point out, however, that the above configuration examples could be enhanced to contemplate different speed metrics (such as latency vs. bandwidth) or different read and write latencies/bandwidths. Here, system configuration information could contemplate different latencies and bandwidths for both reads and writes for the various memory components, and the various applications could be configured to operate out of certain ones of the different memory components whose characteristics are a good fit from a behavior/performance perspective.

Fig. 5a shows an exemplary root complex that could, e.g., be loaded into a computing system's BIOS and referred to by an OS/VMM during system configuration. Here, the root complex includes a System Memory Attribute Table (which could be defined by another name) that lists in a first list 501 the different entities, referred to as memory access initiators (MAIs), that can issue a read or write request to system memory. In the exemplary system of Fig. 2 these include a first platform 201_1 ("platform_1" in Fig. 5a) and a second platform 201_2 ("platform_2" in Fig. 5a).

Note that the list 501 and overall root complex may take the form of a directory rather than just a collection of lists. For example, each platform entry in the MAI list 501 may act as a higher level directory node that further lists its constituent CPU cores within/beneath it. Further still, any kind of entity that issues a request to system memory can have its entry or node in the MAI list with further sub-nodes listing its constituent parts that can individually issue system memory requests. For example, an I/O control hub node can further list its various PCIe interfaces as sub-nodes. Each of the various PCIe interfaces can list the corresponding devices that are connected to it as further sub-nodes of the PCIe interface sub-nodes. Similar structures can be composed for mass storage devices (e.g., disk drives, solid state drives).

Here, any component that can issue a read or write request to system memory (e.g., a network interface, a mass storage device, a CPU core) can be given MAI status and assigned a region of system memory space. As discussed at length above, a CPU core is assigned system memory space for its software to execute out of. Thus, not only may a CPU core be recognized as an MAI entry within the list, but also, e.g., each application that is configured to run on a particular CPU core may be given MAI status and listed in the MAI list 501.

By contrast, I/O devices may or may not execute software but nevertheless may issue system memory read/write requests. For instance, a network interface may stream the data it receives from a network into system memory and/or receive from system memory the data it is streaming into a network.
Again, the notion that higher performance components can be allocated higher performance levels of system memory still applies. For example, a first network interface that is coupled to a high bandwidth link may be coupled to a higher performance system memory level while a second network interface that is coupled to a low bandwidth link may be coupled to a lower performance system memory level. An analogous arrangement can be applied with respect to faster performance mass storage devices and slower performance mass storage devices.

Thus, each MAI entry in the MAI list 501 may include some further meta data information that describes or otherwise indicates its performance level so that an operating system instance and/or virtual machine monitor can comprehend the appropriate level of system memory performance that it will need. CPU core entries and/or the applications that run on them can include similar meta data.

A second list 502 lists the different memory access ("MA") regions or domains within the system memory that can be separately identified. The MA list 502 of Fig. 5a simplistically only lists the eight different memory levels observed in Fig. 3a. However, consistent with the discussion just above that the overall root complex may take the form of a directory, certain memory levels/domains may be further expanded upon to show different performance levels within themselves. For example, the memory domains that correspond to a non-volatile memory region having near memory cache service may further be broken down in the root complex to reflect the structures of Figs. 3b(i) and 3b(ii). As such, the root complex can show the different performance (more/less near memory cache space) or behavior (LRU/LRA) within system memory with various levels of granularity.

Again, each node in the MA list 502, besides identifying its specific system memory address range, may include some meta data that describes attributes of itself such as technology type (e.g., DRAM/non-volatile), associated access speed and architecture (e.g., 2LM with a specific amount of near memory cache space and cache eviction policy). An OS instance or virtual machine monitor can therefore refer to this information when attempting to configure a certain memory access initiator with a specific memory domain.

The root complex of Fig. 5a also includes a performance list 503 that lists each of the different logical connections that can exist from each of the memory access initiators to each of the different memory access domains and identifies an estimated or approximate latency for each logical connection. Here, again, Fig. 5a is simplistic in that it only lists the sixteen such logical connections depicted in Figs. 3a and 4 (eight for applications that run on platform 201_1 and eight for applications that run on platform 201_2). Here, a logical connection on a same platform will largely be based on the system memory technology and the architectural implementation of the system memory (e.g., 2LM or not 2LM), whereas a logical connection that spans across platforms will be based not only on the technology implementation of the system memory level but also on the networking latency associated with the inter-platform communication that occurs over a link/network. One possible encoding of these three lists is sketched below.
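The following is one possible, purely illustrative C encoding of the three lists of the Fig. 5a root complex; the field names, widths, and units are assumptions for the sketch, not an ACPI-defined or otherwise standardized layout.

```c
#include <stdint.h>

struct mai_entry {                 /* memory access initiator (list 501) */
    const char *name;              /* e.g., "platform_1/core_0"          */
    uint32_t    perf_hint;         /* meta data: desired performance     */
};

struct ma_domain {                 /* memory access domain (list 502)    */
    const char *name;              /* e.g., "SAR4"                       */
    uint64_t    base, limit;       /* system memory address range        */
    uint8_t     technology;        /* e.g., DRAM vs. non-volatile        */
    uint8_t     two_level;         /* 2LM with near memory cache or not  */
};

struct perf_entry {                /* logical connection (list 503)      */
    const struct mai_entry *initiator;
    const struct ma_domain *domain;
    uint32_t read_latency_ns;      /* "RL_..." entries of Fig. 5b        */
    uint32_t write_latency_ns;     /* "WL_..." entries of Fig. 5b        */
};
```

As discussed above, each of these flat lists could instead be a directory node with sub-nodes (cores under platforms, PCIe devices under an I/O hub, and so on); the flat encoding is used here only for brevity.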
Fig. 5b shows a slightly more comprehensive performance list than the simplistic latency list 503 of Fig. 5a. In particular, the performance list 503 of Fig. 5a could be expanded to separate read latencies from write latencies for each of the different memory components. Here, read latency entries are denoted "RL_..." whereas write latency entries are denoted "WL_...". As such, configuration software can better align applications that have a greater tendency or sensitivity to one or the other type of access (read or write) by studying the links between entries in the expanded performance list and entries in the MAI list 501. Here, a DRAM component having its own address space may present the same read and write latency metadata whereas any of the NVRAM components may present substantially different read and write latency data. Further still, the performance list of Fig. 5b could even be further extended to include bandwidth in addition to latencies for each memory domain, and, further still, to show different read bandwidth and different write bandwidth meta data for each of the different memory domains.

Returning to Fig. 5a, once all information from each of the MAI 501, MA 502 and performance 503 lists is presented, an operating system instance or virtual machine monitor can synthesize the information and begin to assign/configure specific memory access initiators with specific memory access domains, where the particular assignment/configuration between a particular memory access initiator in list 501 and a particular memory domain in list 502 is based on an appropriate read/write latency and/or read/write bandwidth between the two that is recognized from list 503. In particular, if a first application requires high read bandwidth but not high write bandwidth, the application may be assigned to operate out of a memory domain that corresponds to an underlying memory technology that has much faster read bandwidth than write bandwidth (e.g., an emerging non-volatile memory technology). By contrast, a second application that requires approximately the same low latency for both reads and writes may be assigned to operate out of a higher performance memory that has approximately the same read/write latency (e.g., DRAM). A sketch of such a matching pass appears below.
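Building on the struct sketch above, the following hypothetical C routine illustrates how configuration software might weigh the read and write latency entries of Fig. 5b when pairing an initiator with a domain; the weighted-cost scoring is an assumption made for the sketch, not a method prescribed by the specification.

```c
#include <stddef.h>
#include <stdint.h>

/* Pointer-only use of the earlier sketch's types. */
struct mai_entry;                  /* initiator, as sketched earlier     */
struct ma_domain;                  /* memory domain, as sketched earlier */

struct perf_entry {
    const struct mai_entry *initiator;
    const struct ma_domain *domain;
    uint32_t read_latency_ns;      /* "RL_..." entry */
    uint32_t write_latency_ns;     /* "WL_..." entry */
};

/* Pick the domain whose latencies best match what the initiator actually
 * exercises: a read-heavy initiator passes a high read_weight and a low
 * write_weight, and vice versa. */
const struct ma_domain *
pick_domain(const struct mai_entry *mai,
            const struct perf_entry *perf, size_t nperf,
            uint32_t read_weight, uint32_t write_weight)
{
    const struct ma_domain *best = NULL;
    uint64_t best_cost = UINT64_MAX;

    for (size_t i = 0; i < nperf; i++) {
        if (perf[i].initiator != mai)
            continue;
        uint64_t cost = (uint64_t)read_weight  * perf[i].read_latency_ns
                      + (uint64_t)write_weight * perf[i].write_latency_ns;
        if (cost < best_cost) {    /* lower weighted latency wins */
            best_cost = cost;
            best = perf[i].domain;
        }
    }
    return best;
}
```

An analogous pass could score bandwidth instead of (or in addition to) latency, per the extended Fig. 5b list described above.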
The root complex approach described just above may be written to be compatible with any of a number of system and/or component configuration specifications (e.g., Advanced Configuration and Power Interface (ACPI), NVDIMM Firmware Interface Table (NFIT)). Here, again, the root table may be stored in non-volatile BIOS and used by configuration software during a configuration operation (e.g., upon boot-up, in response to component addition/removal, etc.). Conceivably, current versions of SLIT and/or SRAT information (discussed in the background) could be expanded to include the attribute features described just above with respect to the root complex of Fig. 5a.

Fig. 6 shows a method described in the preceding sections. The method includes recognizing different latencies between different levels of a system memory and different memory access requestors of a computing system, where the system memory includes the different levels and different technologies 601. The method also includes allocating to each of the memory access requestors a respective region of the system memory having an appropriate latency 602.

5.0 Computing System Embodiments

Fig. 7 shows a depiction of an exemplary computing system 700 such as a personal computing system (e.g., desktop or laptop), a mobile or handheld computing system such as a tablet device or smartphone, or a larger computing system such as a server computing system. In the case of a large computing system, various ones or all of the components observed in Fig. 7 may be replicated multiple times to form the various platforms of the computer, which are interconnected by a network of some kind.

As observed in Fig. 7, the basic computing system may include a central processing unit 701 (which may include, e.g., a plurality of general purpose processing cores and a main memory controller disposed on an applications processor or multi-core processor), system memory 702, a display 703 (e.g., touchscreen, flat-panel), a local wired point-to-point link (e.g., USB) interface 704, various network I/O functions 705 (such as an Ethernet interface and/or cellular modem subsystem), a wireless local area network (e.g., WiFi) interface 706, a wireless point-to-point link (e.g., Bluetooth) interface 707 and a Global Positioning System interface 708, various sensors 709_1 through 709_N (e.g., one or more of a gyroscope, an accelerometer, a magnetometer, a temperature sensor, a pressure sensor, a humidity sensor, etc.), a camera 710, a battery 711, a power management control unit 712, a speaker and microphone 713 and an audio coder/decoder 714.

An applications processor or multi-core processor 750 may include one or more general purpose processing cores 715 within its CPU 701, one or more graphical processing units 716, a memory management function 717 (e.g., a memory controller) and an I/O control function 718. The general purpose processing cores 715 typically execute the operating system and application software of the computing system. The graphics processing units 716 typically execute graphics intensive functions to, e.g., generate graphics information that is presented on the display 703. The memory control function 717 interfaces with the system memory 702. The system memory 702 may be a multi-level system memory, and the BIOS of the system may contain attributes of the system memory as discussed at length above so that configuration software can configure certain memory access initiators with specific components of the system memory that have an appropriate latency from the perspective of the initiators.

Each of the touchscreen display 703, the communication interfaces 704-707, the GPS interface 708, the sensors 709, the camera 710, and the speaker/microphone codec 713, 714 can be viewed as various forms of I/O (input and/or output) relative to the overall computing system, including, where appropriate, an integrated peripheral device as well (e.g., the camera 710). Depending on implementation, various ones of these I/O components may be integrated on the applications processor/multi-core processor 750 or may be located off the die or outside the package of the applications processor/multi-core processor 750.

Embodiments of the invention may include various processes as set forth above. The processes may be embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, these processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of software- or instruction-programmed computer components or custom hardware components, such as application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), or field programmable gate arrays (FPGAs).

Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions.
The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, magneto-optical disks, FLASH memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other types of media/machine-readable media suitable for storing electronic instructions. For example, the present invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).

In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Techniques for cutting dies from respective workpieces are described, as well as techniques for incorporating one or more cut dies into a stacked device structure. In some embodiments, dicing a die from a workpiece includes chemically etching the workpiece in a dicing line. In some embodiments, cutting a die from a workpiece includes mechanically cutting the workpiece in a cutting line and forming a pad along a sidewall of the die. The die may be incorporated into a stacked device structure. The die may be attached to a substrate alongside another die attached to the substrate. An encapsulant may be located between each die and the substrate, and laterally between the dies.
1. A structure, characterized in that the structure comprises: a substrate; a first die having a first sidewall, the first sidewall having at least one first indentation, the first die being attached to the substrate; and a package located between the first die and the substrate, the package being disposed in the first indentation and bonded to a first surface of the first indentation.

2. The structure of claim 1, wherein the first sidewall is a corrugated sidewall and the first indentation is a concave surface of the corrugated sidewall.

3. The structure of claim 1, wherein the first sidewall is a vertical sidewall having one or more notches, and the at least one first indentation is the one or more notches.

4. The structure of claim 1, further comprising a second die having a second sidewall, the second sidewall having at least one second indentation, the second die being attached to the substrate, wherein the package is further located between the second die and the substrate, is located laterally between the first sidewall and the second sidewall and bonded to the first sidewall and the second sidewall, and is further disposed in the second indentation and bonded to a second surface of the second indentation.

5. The structure of claim 1, wherein the first sidewall is formed by cutting the first die from a workpiece, the cutting of the first die from the workpiece comprising using a deep reactive ion etching (DRIE) process.

6. The structure of claim 1, wherein the first sidewall is formed by cutting the first die from a workpiece, the cutting of the first die from the workpiece comprising using an anisotropic etch process and a subsequent isotropic etch process, wherein at least a portion of the sidewall formed by the anisotropic etch process is passivated during the isotropic etch process.

7. The structure of claim 1, wherein the substrate is a second die having an integrated circuit, or the substrate is an interposer.

8. A structure, characterized in that the structure comprises: a substrate; a first die having a first sidewall and a first liner along the first sidewall, the first die being attached to the substrate; and an encapsulation between the first die and the substrate, the encapsulation being bonded to the first liner.

9. The structure of claim 8, further comprising a second die having a second sidewall and a second liner along the second sidewall, the second die being attached to the substrate, wherein the encapsulation is further located between the second die and the substrate, is located laterally between the first die and the second die, and is bonded to the second liner.

10. The structure of claim 8, wherein the first liner is a nitride layer.

11. The structure of claim 8, wherein forming the first liner comprises depositing the first liner along the first sidewall and smoothing the first liner using an etching process.

12. The structure of claim 8, wherein the substrate is a second die having an integrated circuit, or the substrate is an interposer.
Die Cutting and Stacked Device Structure

Technical Field

Embodiments of the present application generally relate to die singulation and stacked device structures, and in particular to processes for cutting a die from a workpiece that improve characteristics of a stacked device structure.

Background

In general, the semiconductor processing industry has developed stacking techniques in which integrated circuits formed on a die are stacked onto another substrate. One example of a stacking technique is known as a 2.5-dimensional integrated circuit (2.5DIC), in which one or more dies (with integrated circuits formed on each die) are stacked on an interposer. Another example is known as a 3-dimensional integrated circuit (3DIC), in which one or more dies (with integrated circuits formed on each die) are stacked on another die (on which an integrated circuit is also formed). In another example, a multi-level stack of dies, with or without an interposer, can be implemented.

The benefits of stacking technology are higher density, smaller footprint, shorter electrical routing and lower power consumption. For example, vertical integration of the dies can reduce the area that connects the stacked dies to the package substrate. Moreover, in some cases, the conductive path of an electrical signal can include a vertical portion connecting to another die, which can reduce the distance over which the electrical signal propagates. The reduced distance can reduce resistance and, in turn, reduce power consumption and propagation delay.

Summary

Embodiments of the present application generally relate to techniques for cutting dies and to stacked device structures including diced dies. The various cutting processes described herein can improve the robustness and reliability of stacked device structures.

One embodiment of the present application is a structure. The structure includes a substrate, a first die attached to the substrate, and an encapsulation between the first die and the substrate. The first die has a first sidewall and the first sidewall has at least one first indentation. The package is disposed in the first indentation and bonded to a first surface of the first indentation.

Another embodiment of the present application is a method of integrated circuit packaging. The method includes cutting a first die from a first workpiece, attaching the first die to a first region of a substrate, and forming a package on the substrate. Cutting the first die includes chemically removing material from the first workpiece in a first cutting line. The package is further formed between the first region and the first die and bonded to a first sidewall of the first die.

Another embodiment of the present application is a structure. The structure includes a substrate, a first die attached to the substrate, and an encapsulation between the first die and the substrate. The first die has a first sidewall and a first liner along the first sidewall. The package is bonded to the first liner.

Another embodiment of the present application is a method of integrated circuit packaging. The method includes cutting a first die from a first workpiece, attaching the first die to a first region of a substrate, and forming a package on the substrate. Cutting the first die includes mechanically cutting the first workpiece in a first cutting line and forming a first liner along a first sidewall of the first die.
A first sidewall of the first die is formed by the mechanical cutting of the first workpiece. The package is further formed between the first region and the first die. The package is bonded to the first liner.

Another embodiment of the present application is a structure comprising: a substrate; a first die having a first sidewall, the first sidewall having at least one first indentation, the first die being attached to the substrate; and a package located between the first die and the substrate, the package being disposed in the first indentation and bonded to a first surface of the first indentation.

In some embodiments, the first sidewall is a corrugated sidewall and the first indentation is a concave surface of the corrugated sidewall.

In certain embodiments, the first sidewall is a vertical sidewall having one or more notches, and the at least one first indentation is the one or more notches.

In some embodiments, the structure further includes a second die having a second sidewall, the second sidewall having at least one second indentation, the second die being attached to the substrate, wherein the package is further located between the second die and the substrate, is located laterally between the first sidewall and the second sidewall and bonded to the first sidewall and the second sidewall, and is further disposed in the second indentation and bonded to a second surface of the second indentation.

In certain embodiments, the first sidewall is formed by cutting the first die from a workpiece, the cutting of the first die from the workpiece comprising using a deep reactive ion etch (DRIE) process.

In some embodiments, the first sidewall is formed by cutting the first die from a workpiece, the cutting of the first die from the workpiece comprising using an anisotropic etch process and a subsequent isotropic etch process, wherein at least a portion of the sidewalls formed by the anisotropic etch process is passivated during the isotropic etch process.

In some embodiments, the substrate is a second die having an integrated circuit, or the substrate is an interposer.

Another embodiment of the present application is a structure comprising: a substrate; a first die having a first sidewall and a first liner along the first sidewall, the first die being attached to the substrate; and an encapsulant between the first die and the substrate, the encapsulant being bonded to the first liner.

In some embodiments, the structure further includes a second die having a second sidewall and a second liner along the second sidewall, the second die being attached to the substrate, wherein the encapsulant is further located between the second die and the substrate, is located laterally between the first die and the second die, and is bonded to the second liner.

In certain embodiments, the first liner is a nitride layer.

In certain embodiments, forming the first liner includes depositing the first liner along the first sidewall and smoothing the first liner using an etching process.

In some embodiments, the substrate is a second die having an integrated circuit, or the substrate is an interposer.

The above and other aspects are to be understood by reference to the following detailed description.

Brief Description of the Drawings

The above-described features of the present application can be understood in more detail by reference to the exemplary embodiments, some of which are illustrated in the appended drawings.
It should be noted, however, that the drawings are merely illustrative of exemplary embodiments and are not to be construed as limiting.

Figs. 1 through 6 are cross-sectional views of intermediate structures at general process stages for forming a stacked device structure in accordance with an embodiment of the present application;

Fig. 7 is a flow diagram of a general process for forming a stacked device structure in accordance with an embodiment of the present application;

Figs. 8 and 9 are cross-sectional views of intermediate structures at first die cutting process stages in accordance with an embodiment of the present application;

Fig. 10 is a flow chart of a first die cutting process in accordance with an embodiment of the present application;

Fig. 11 is a cross-sectional view of a stacked device structure in accordance with an embodiment of the present application, wherein the stacked device structure includes a die cut from a workpiece using the first die cutting process of Figs. 8 and 9;

Figs. 12-17 are cross-sectional views of intermediate structures at second die cutting process stages in accordance with an embodiment of the present application;

Fig. 18 is a flow chart of a second die cutting process in accordance with an embodiment of the present application;

Fig. 19 is a cross-sectional view of a stacked device structure in accordance with an embodiment of the present application, wherein the stacked device structure includes a die cut from a workpiece using the second die cutting process of Figs. 12-17;

Fig. 20 is a cross-sectional view of a stacked device structure in accordance with an embodiment of the present application, wherein the stacked device structure includes a die cut from a workpiece using a modified version of the second die cutting process of Figs. 12-17;

Figs. 21-24 are cross-sectional views of intermediate structures at third die cutting process stages in accordance with an embodiment of the present application;

Fig. 25 is a flow chart of a third die cutting process in accordance with an embodiment of the present application;

Fig. 26 is a stacked device structure in accordance with an embodiment of the present application, wherein the stacked device structure includes a die cut from a workpiece using the third die cutting process of Figs. 21-24.

For ease of understanding, the same reference numerals are used, where possible, to refer to the same elements in the drawings. It is contemplated that elements of one embodiment may be beneficially incorporated into other embodiments.

Detailed Description

Embodiments of the present application provide techniques for cutting a die and provide stacked device structures including diced dies. The various cutting processes described herein can improve the robustness and reliability of a stacked device structure. For example, the dicing processes described herein can be used to reduce defects that can concentrate stress at the dies and thereby cause cracks in the dies, which in turn can reduce cracking in the stacked device structure. Moreover, the cutting processes described herein can provide surfaces with improved adhesion characteristics, which can reduce delamination in a stacked device structure. These and other possible advantages will become apparent from the description of the application.

Generally, in some embodiments, a die can be cut from a corresponding workpiece using a non-mechanical process, such as a chemical etching process.
For example, the chemical etch process can be or can include a plasma dicing process or another etch process, which can further include an anisotropic and/or isotropic etch process. The chemical etch process can form one or more indentations in the sidewalls of the diced die. The one or more indentations may be concave surfaces of corrugated sidewalls of the die and/or one or more notches in the sidewalls of the die. One or more dies that are cut using a non-mechanical process can then be incorporated into a stacked device structure, which can have improved robustness and reliability.

Moreover, in some embodiments, a mechanical process can also be used to cut a die from a corresponding workpiece, such as mechanical cutting (e.g., with a mechanical saw). The sidewall of a die formed by mechanical cutting may have a liner formed thereon. One or more dies that are cut using a mechanical process can then be incorporated into a stacked device structure, which can have improved robustness and reliability.

Various features are described below with reference to the drawings. It is noted that the drawings may or may not be drawn to scale and that, in the various figures, elements of similar structure or function are denoted by the same reference numerals. It should also be noted that the drawings are only intended to facilitate the description of the features. The drawings are not intended to be exhaustive or to limit the scope of the application. In addition, the described embodiments are not required to have all of the aspects or advantages shown. Aspects or advantages described in connection with a specific embodiment are not necessarily limited to that embodiment and may be implemented in any other embodiment, even if not so stated or not explicitly described.

Exemplary Process for Forming a Stacked Device Structure

Figs. 1 through 6 depict cross-sectional views of intermediate structures at general process stages for forming a stacked device structure, in accordance with an embodiment of the present application. Fig. 7 is a flow diagram of a general process for forming a stacked device structure in accordance with an embodiment of the present application. Specific example die cutting processes, which can be used within this general process, are described afterwards, and the resulting stacked device structures are also described.

FIG. 1 depicts a plurality of dies 42 formed on a first workpiece 40, such as a silicon wafer. For example, the first workpiece 40 can include a semiconductor wafer having any diameter (e.g., 100 mm, 150 mm, 200 mm, 300 mm, 450 mm, or another diameter) and any thickness (e.g., 525 μm, 675 μm, 725 μm, 775 μm, 925 μm, or another thickness). The dies 42 are formed on the first workpiece 40 in accordance with design specifications. A die 42 may include, for example, a memory, a processor, an application specific integrated circuit (ASIC), a programmable integrated circuit (such as a field programmable gate array (FPGA) or a complex programmable logic device (CPLD)), and the like. Any number of dies 42 can be formed on the first workpiece 40. The first workpiece 40 can be processed such that electrical connectors 44 are formed on the dies 42. The electrical connectors 44 can include microbumps, for example, each having a copper post with solder formed thereon (e.g., lead-free solder). In other embodiments, the electrical connectors 44 can be other types of electrical connectors.
For convenience, the side of a die 42 on which the electrical connectors 44 are formed is referred to as the "front side" or "active side", and the side of the die 42 opposite the front side is referred to as the "back side". Dicing lines 46 are located between adjacent dies 42, and along the edges of those dies 42 whose edges lie along the periphery of the first workpiece 40. The dicing lines 46 surround each die 42 such that each die 42 can be singulated from the other dies 42 (e.g., by dicing the wafer) by removing the portions of the first workpiece 40 within the dicing lines 46.

FIG. 2 depicts attaching the first workpiece 40 to a support structure 50 for cutting of the dies 42 (e.g., after flipping the first workpiece 40). For example, the support structure 50 can be a glass or silicon carrier substrate, or a metal frame, although other support structures can also be used. The first workpiece 40 can be attached to the support structure 50 using an adhesive 52, such as an ultraviolet (UV) tape that loses its adhesive properties upon exposure to UV light. The active sides of the dies 42 on the first workpiece 40 are bonded to the support structure 50 using the adhesive 52 while the back sides of the dies 42 face away from the support structure 50.

FIG. 3 depicts the cutting of the dies 42 from the first workpiece 40, which is performed in block 202 of FIG. 7. The portions of the first workpiece 40 along the cutting lines 46 are removed to cut the dies 42. Exemplary cutting processes that may be used in this general process to form stacked device structures or other types of structures are described below. FIG. 3 does not necessarily depict various aspects of the dies 42 formed by at least some of the following cutting processes, such as the sidewalls of the dies 42. Moreover, even though each sidewall is not specifically shown or described, since the portions of the first workpiece 40 along each of the cutting lines 46 can be removed simultaneously during cutting, the various aspects of the sidewalls of the dies 42 described and illustrated in the subsequent figures are generally applicable to each sidewall of the dies 42.

FIG. 4 depicts attaching at least one of the dies 42 to a substrate 62 (formed on a second workpiece 60), which is at least partially performed in block 204 of FIG. 7. The second workpiece 60 may include a semiconductor wafer such as described above for the first workpiece 40, or may include an organic substrate. In some embodiments, the substrate 62 can be a die on which an integrated circuit is formed or can be an interposer. When implemented as a die on which an integrated circuit is formed, the substrate 62 can include, for example, a memory, a processor, an application specific integrated circuit, or the like. An interposer typically does not include active devices such as transistors, diodes, and the like. Any number of substrates 62 can be formed on the second workpiece 60.

Similar to the above, the substrate 62 also has a "front side" and a "back side", although these terms do not necessarily imply any particular structure. The second workpiece 60 can undergo front side processing such that electrical connectors 64 are formed on the substrate 62. For example, during the front side processing, through-substrate vias (TSVs) may be formed at least partially through the semiconductor wafer of the second workpiece 60. The TSVs can be electrically connected to one or more redistribution metal layers on the front side of the substrate 62.
The electrical connectors 64 can include microbumps, for example, each having a copper post with or without solder (e.g., lead-free solder) formed thereon. In other examples, the electrical connectors 64 can be other types of electrical connectors. Dicing lines 66 can be disposed between adjacent substrates 62 and along the edges of substrates 62 that are along the periphery of the second workpiece 60.

After the dies 42 are diced from the first workpiece 40, the dies 42 are separated from the adhesive 52, for example, by exposing the adhesive 52 to UV light to cause the adhesive 52 to lose its adhesive properties. A die 42 can then be placed over a first die attach area of the substrate 62 such that the electrical connectors 44 of the die 42 contact the electrical connectors 64 of the substrate 62 in the first die attach area. A reflow process can be used to reflow the electrical connectors 44 to the electrical connectors 64, such as by reflowing the solder of the electrical connectors 44 and 64 together, to physically and electrically attach the die 42 to the substrate 62.

Similarly, a die 70 can be attached to a second die attach area of the substrate 62. The die 70 may be one of the dies 42 or may be another die formed on another workpiece. The die 70 may undergo processing similar to that described for the dies 42 with respect to FIGS. 1 through 3 and blocks 202 and 204 of FIG. 7. The die 70 may include, for example, a memory, a processor, an application specific integrated circuit, a programmable integrated circuit, or the like. The die 70 can include electrical connectors 72, which can include microbumps, each having a copper post with solder (e.g., lead-free solder) thereon. In other embodiments, the electrical connectors 72 can be other types of electrical connectors. After the die 70 is diced from its workpiece, the die 70 can be placed over the second die attach area of the substrate 62 such that the electrical connectors 72 of the die 70 contact the electrical connectors 64 of the substrate 62 in the second die attach area. A reflow process can be used to reflow the electrical connectors 72 to the electrical connectors 64, such as by reflowing the solder of the electrical connectors 72 and 64 together, to physically and electrically attach the die 70 to the substrate 62. The reflow process that reflows the electrical connectors 72 and 64 together may be the same as or different from the reflow process that reflows the electrical connectors 44 and 64 together. In other embodiments, other dies may also be attached to the substrate 62.

After the dies 42 and 70 are attached to the substrate 62, the dies 42 and 70 on the substrate 62 may be encapsulated in block 206 of FIG. 7. An encapsulant 68 may be formed on the front side of the second workpiece 60 and between the die 42 and the die 70. For example, the encapsulant 68 can be a molded underfill (MUF) that is dispensed and molded using a vacuum-assisted molding system. In other embodiments, the encapsulant 68 can include multiple materials formed in different operations, such as a capillary underfill (CUF) formed using a dispensing process and a molding compound subsequently formed using compression molding or another molding process. The encapsulant 68 may be formed around the reflowed electrical connectors 44 and 64 between the die 42 and the substrate 62, around the reflowed electrical connectors 72 and 64 between the die 70 and the substrate 62,
and laterally between the sidewalls of the die 42 and the die 70.

FIG. 5 depicts backside processing of the second workpiece 60. For example, during the backside processing, the TSVs can be exposed through the semiconductor wafer of the second workpiece 60 by grinding or polishing the semiconductor wafer (using, for example, chemical mechanical polishing (CMP)). One or more redistribution metal layers may be formed on the back side of the substrate 62, and the TSVs may be electrically connected to the one or more redistribution metal layers. Electrical connectors 80 are formed on the back side of the substrate 62, and the electrical connectors 80 are also electrically coupled to the one or more redistribution metal layers. The electrical connectors 80 can include controlled collapse chip connection (C4) bumps, each of which has an under bump metallization (UBM) with solder (e.g., lead-free solder) formed thereon. In other embodiments, the electrical connectors 80 can be other types of electrical connectors, such as ball grid array (BGA) balls.

FIG. 6 depicts the stacked device structure after dicing the substrates 62 from the second workpiece 60, wherein the dicing is performed in block 208 of FIG. 7. Portions of the second workpiece 60 and of the encapsulant 68 along the dicing lines 66 can be removed by the dicing of the substrates 62. The dicing of the substrates 62 can be performed, for example, using a mechanical saw.

The general process illustrated and described in FIGS. 1-6 is merely an exemplary process for forming a stacked device structure. The described operations can be performed in any logical order. For example, the order in which the substrates 62 are diced, the dies 42 and 70 are attached to the substrate 62, and/or the encapsulant 68 is formed may be modified into any logical order. Moreover, some of the components in FIGS. 1-6 have been described as having particular characteristics and/or as being specific components. These are merely examples intended to convey various aspects of the embodiments of the present application. Various modifications and/or substitutions of these components will be readily apparent to those of ordinary skill in the art.

First example die dicing process

FIGS. 8 and 9 depict cross-sectional views of intermediate structures at stages of a first die dicing process, in accordance with an embodiment of the present application. FIG. 10 is a flow chart of the first die dicing process in accordance with an embodiment of the present application. The first die dicing process can be performed at block 202 of FIG. 7.

FIG. 8 depicts a portion of the intermediate structure of FIG. 2 after laser grooving in the dicing lines 46, wherein the laser grooving is performed in block 222 of FIG. 10. Next, FIG. 9 depicts the dies 42 after being diced using plasma dicing, wherein the plasma dicing is performed in block 224 of FIG. 10. The plasma dicing in this embodiment may use deep reactive ion etching (DRIE), such as the Bosch DRIE process. The plasma dicing in this embodiment forms undulating sidewalls 88 on the dies 42, for example, with each of the sidewalls having a plurality of vertically arranged concave surfaces. Each concave surface can have a radius of curvature 90 in the range of from about 0.1 μm to about 50 μm, a depth 92 in the range of from about 0.1 μm to about 100 μm, and a height 94 in the range of from about 0.1 μm to about 100 μm.
The radius of curvature 90, the depth 92, and the height 94 can be controlled by controlling process parameters of the plasma dicing, such as the plasma energy.

FIG. 11 depicts a cross-sectional view of a stacked device structure including a die 42 and a die 70 diced from workpieces using the first die dicing process of FIGS. 8 and 9, in accordance with an embodiment of the present application. The stacked device structure of FIG. 11 is constructed similarly to the stacked device structure of FIG. 6. In FIG. 11, each of the dies 42 and 70 has undulating sidewalls 88, with the encapsulant 68 bonded to the undulating sidewalls 88.

By dicing the dies 42 and 70 using plasma dicing rather than a mechanical sawing process, defects along the sidewalls of the dies 42 and 70 caused by mechanical sawing can be avoided. For example, mechanical sawing can cause cracking and chipping along the sidewalls of a die. Such defects can be caused by the type of blade of the dicing saw, the grit size of the blade, vibration of the blade, and wear of the blade during mechanical sawing. These defects may be sources of cracks, which may propagate into the active portion of the die, and/or may cause localized stress concentration regions. The defects, and/or the stress caused by the defects, can result in delamination of the encapsulant at the sidewalls of the die and/or delamination or cracking of low dielectric constant (low-k) dielectric layers (e.g., for intermediate metallization layers) at the sidewalls of the die. By avoiding the use of a mechanical sawing process to dice the dies 42 and 70, defects such as cracking and chipping caused by mechanical sawing can be avoided. Therefore, the occurrence of delamination and cracking in the stacked device structure can be reduced, and the occurrence of localized stress concentration regions in the stacked device structure can also be reduced.

Moreover, the undulating sidewalls 88 of the dies 42 and 70 have a larger surface area than the straight and vertical sidewalls formed using, for example, a mechanical sawing process. The encapsulant 68 is bonded to this larger surface area, which in turn provides greater adhesion between the respective die 42 or 70 and the encapsulant 68. Additionally, the undulating sidewalls 88 of the dies 42 and 70 can reduce the effects of cracking. The larger surface area of the undulating sidewalls 88 can increase the distance that a crack must travel to reach the active portions of the dies 42 and 70. In addition, the undulating sidewalls 88 can create discontinuities along the sidewalls that can intersect propagating cracks and stop their propagation. Therefore, the adverse effects of cracking in the stacked device structure can be reduced.

Second example die dicing process

FIGS. 12-17 illustrate cross-sectional views of intermediate structures at stages of a second die dicing process, in accordance with an embodiment of the present application. FIG. 18 is a flow chart of the second die dicing process in accordance with an embodiment of the present application. The second die dicing process can be performed at block 202 of FIG. 7.

FIG. 12 depicts a portion of the intermediate structure of FIG. 2 after laser grooving in the dicing lines 46, wherein the laser grooving is performed in block 232 of FIG. 18. In FIG. 13, a mask 100 is deposited on the back sides of the dies 42 and patterned to expose the dicing lines 46, which is performed in block 234 of FIG. 18.
The mask 100 may comprise or may be any suitable hard mask material, such as silicon nitride, silicon oxynitride, silicon carbonitride, or another material, and may be deposited by spin coating, chemical vapor deposition (CVD), physical vapor deposition (PVD), or another deposition technique. The mask 100 can be patterned using a photolithography process and an etch process. Once patterned, the mask 100 has mask openings corresponding to the dicing lines 46.

With the mask 100 patterned, the mask 100 may be used during an anisotropic etch process to form recesses 102 having vertical sidewalls in the respective dicing lines 46, which is performed in block 236 of FIG. 18. The anisotropic etch process can be a plasma dicing process, reactive ion etching (RIE), or another anisotropic etch process. The recesses 102 can be formed to a depth 104 in the first workpiece 40, wherein the depth 104 is in the range of from about 0.1 μm to about 100 μm.

FIGS. 14 and 15 depict the formation of a passivation film 106 on the sidewalls of the recesses 102, which is performed in block 238 of FIG. 18. In the illustrated embodiment, the passivation film 106 is formed separately from the etch process described with respect to FIG. 13, but in other embodiments, the passivation film 106 on the sidewalls of the recesses 102 can be formed as a by-product of that etch process. In FIG. 14, the passivation film 106 is conformally deposited on the mask 100, along the sidewalls of the recesses 102, and on the bottom surfaces of the recesses 102. The passivation film 106 may include or may be any material that has an etch selectivity different from that of the first workpiece 40. For example, the passivation film 106 may include or may be silicon oxynitride, silicon carbide, silicon carbonitride, silicon nitride, or another material, and may be deposited using CVD, atomic layer deposition (ALD), or another conformal deposition technique. In FIG. 15, the horizontal portions of the passivation film 106 are removed, such as by using an anisotropic etch process such as RIE. The passivation film 106 may remain on the sidewalls of the recesses 102, and the surfaces of the first workpiece 40 in the respective dicing lines 46 (e.g., the bottom surfaces of the recesses 102) may be exposed.

FIG. 16 depicts the formation of recesses 110 in the dies 42 below the passivation film 106, which is performed in block 240 of FIG. 18. The recesses 110 can be formed using an isotropic etch process, which can be an RIE, a wet etch process, or another isotropic etch process. The passivation film 106 prevents the etch process from etching the sidewalls of the dies 42 covered by the passivation film 106 (e.g., because of differences in etch selectivity). The isotropic etch process etches the first workpiece 40 vertically in the dicing lines 46 and etches the dies 42 laterally through the exposed surfaces along the bottoms of the recesses 102 to form the recesses 110 in the dies 42. The recesses 110 are depicted as having a square profile, but in other examples, the recesses 110 can have a semi-circular or semi-elliptical profile. Each recess 110 can have a depth 112 along the corresponding sidewall of the die 42 that is in the range of from about 0.1 μm to about 100 μm, and a depth 114 from the respective sidewall into the die 42, wherein the depth 114 is in the range of from about 0.1 μm to about 100 μm.

FIG. 17 depicts a further etch to singulate the dies 42, which is performed in block 242 of FIG. 18. An anisotropic etch process can etch through the remaining portions of the first workpiece 40 in the dicing lines 46.
The anisotropic etch process can be a plasma dicing process, reactive ion etching (RIE), or another anisotropic etch process. The mask 100 can then be removed, such as by using a wet etch process, a plasma ashing process, or another process. FIG. 17 depicts the passivation film 106 remaining on the upper sidewalls of the dies 42 (e.g., above the recesses 110). In some embodiments, the passivation film 106 can be removed, such as by the process that removes the mask 100 or by another process. In other embodiments, for example, when a passivation film is formed as a by-product of an anisotropic etch process, a passivation film can be along the sidewalls of the dies 42 (e.g., above and below the recesses 110) and along the surfaces of the recesses 110.

FIG. 19 depicts a cross-sectional view of a stacked device structure including dies 42 and 70 diced from workpieces using the second die dicing process of FIGS. 12-17, in accordance with an embodiment of the present application. The stacked device structure of FIG. 19 is constructed similarly to the stacked device structure of FIG. 6. In FIG. 19, each of the dies 42 and 70 has a respective recess 110 in its sidewalls, and the encapsulant 68 is disposed in and bonded to the surface of each recess 110.

FIG. 20 depicts a cross-sectional view of a stacked device structure including dies 42 and 70 diced from workpieces using a modified version of the second die dicing process of FIGS. 12-17, in accordance with an embodiment of the present application. The stacked device structure of FIG. 20 is constructed similarly to the stacked device structure of FIG. 6. In FIG. 20, each of the dies 42 and 70 has a plurality of recesses 110 in its sidewalls, and the encapsulant 68 is disposed in and bonded to the surface of each recess 110. In the illustrated embodiment, each sidewall of the dies 42 and 70 has three recesses 110, while in other embodiments, each sidewall can have any number of recesses 110, such as two, four, or another quantity. Moreover, the number of recesses 110 in each sidewall of the die 42 can be different from the number of recesses 110 in each sidewall of the die 70. A plurality of recesses 110 can be formed in the sidewalls of the dies 42 and 70 by repeating the etch and passivation operations of FIGS. 13-16 and blocks 236-240 of FIG. 18 an appropriate number of times. The depths 104, 112, and 114 can be controlled by controlling the durations of the etch processes and/or the etch chemistries to provide suitable depths for obtaining the desired number of recesses 110 in the sidewalls.

As with the first example die dicing process, by dicing the dies 42 and 70 using etching rather than a mechanical sawing process, defects along the sidewalls of the dies 42 and 70 caused by mechanical sawing can be avoided. By avoiding the use of a mechanical sawing process to dice the dies 42 and 70, defects such as cracking and chipping caused by mechanical sawing can be avoided. Therefore, the occurrence of delamination and cracking in the stacked device structure can be reduced, and the occurrence of localized stress concentration regions in the stacked device structure can also be reduced.

Moreover, the sidewalls of the dies 42 and 70 having one or more recesses 110 have a larger surface area than, for example, straight and vertical sidewalls that may be formed using a mechanical sawing process. The encapsulant 68 is bonded to this larger surface area, which in turn provides greater adhesion between the respective die 42 or 70 and the encapsulant 68.
The recesses 110 in adjacent dies 42 and 70 can provide interlocking with the encapsulant 68. Additionally, the sidewalls of the dies 42 and 70 having one or more recesses 110 can reduce the effects of cracks. The larger surface area of the sidewalls increases the distance that a crack must travel to reach the active portions of the dies 42 and 70. In addition, the recesses 110 can create discontinuities along the sidewalls that can intersect propagating cracks and stop their propagation. More specifically, the recesses 110 can provide alternative stress concentration regions. By placing the recesses 110 away from the active portions of the dies 42 and 70, cracks can be diverted away from the active portions of the dies 42 and 70. Therefore, the adverse effects of cracks in the stacked device structure can be reduced.

Third example die dicing process

FIGS. 21-24 depict cross-sectional views of intermediate structures at stages of a third die dicing process, in accordance with an embodiment of the present application. FIG. 25 is a flow chart of the third die dicing process in accordance with an embodiment of the present application. The third die dicing process can be performed at block 202 of FIG. 7.

FIG. 21 depicts a portion of the intermediate structure of FIG. 2 after laser grooving in the dicing lines 46, wherein the laser grooving is performed in block 252 of FIG. 25. Next, FIG. 22 depicts the dies 42 being diced along the dicing lines 46 using, for example, mechanical dicing (e.g., mechanical sawing), wherein the mechanical dicing is performed in block 254 of FIG. 25.

FIG. 23 depicts the formation of a liner 120 along the sidewalls of the dies 42, which is performed in block 256 of FIG. 25. The liner 120 may include or may be a nitride, such as silicon nitride, or another material, and may be formed using spin coating, CVD, or another deposition process. FIG. 24 depicts smoothing of the liner 120, which is performed in block 258 of FIG. 25. For example, an etch process, such as an angled directional etch process and/or an isotropic etch process, can be used to smooth the liner 120 to achieve a smooth outer surface of the liner 120. In some embodiments, the smoothing operation of FIG. 24 can be omitted, such as when the liner 120 is deposited with sufficient smoothness. In some embodiments, the thickness of the liner 120 (e.g., in a direction perpendicular to the corresponding sidewall of the die) is in the range of from about 0.1 μm to about 100 μm. In some embodiments, the surface roughness of the outer surface is in the range of from about 0.1 nm RMS to about 1,000 nm RMS.

FIG. 26 depicts a stacked device structure in accordance with an embodiment of the present application, wherein the stacked device structure includes dies 42 and 70 diced from workpieces using the third die dicing process of FIGS. 21-24. The stacked device structure of FIG. 26 is constructed similarly to the stacked device structure of FIG. 6. In FIG. 26, each of the dies 42 and 70 has straight and vertical sidewalls with corresponding liners 120 formed on the sidewalls. The encapsulant 68 is bonded to the liners 120.

By using the liners 120 along the sidewalls of the dies 42 and 70, any defects along the sidewalls of the dies 42 and 70 caused by the mechanical sawing process can be covered by the liners 120. Covering the defects with the liners 120 may reduce or mitigate the effects that the defects may have on the encapsulant 68.
Additionally, the liner 120 can be a stress buffer layer that can prevent cracks from propagating beyond the corresponding liner 120.

***

Although the stacked device structures of FIGS. 11, 19, 20, and 26 are shown and described as including two dies 42 and 70 formed in accordance with the example processes described herein, in some embodiments, a stacked device structure can include one die formed in accordance with one of the example processes described herein as well as one or more dies that are diced using a mechanical sawing process. A stacked device structure in accordance with embodiments of the present application may include any number of dies diced according to any of the example processes described herein; in addition, the stacked device structure may include any number of dies diced using a mechanical sawing process.

Certain aspects of embodiments of the present application may allow for a more robust stacked device structure. As described above, the effects of defects caused by mechanical sawing can be eliminated or alleviated. This makes the stacked device structure more reliable, and the likelihood that defects cause structural failure of the stacked device structure is lowered. Accordingly, stacked device structures formed in accordance with certain embodiments described herein may be more suitable for applications requiring high reliability, such as automotive, military, or aerospace applications.

As used in this application (including the claims), the term "at least one of" preceding a list of items indicates any combination of those items, including a single member. As an example, "at least one of x, y, and z" includes: x, y, z, x-y, x-z, y-z, x-y-z, and any combination thereof (e.g., x-y-y and x-x-y-z).

While the foregoing is directed to specific embodiments of the present application, other and further embodiments may be devised without departing from the basic scope thereof, which is determined by the claims that follow.
Caching instruction block header data in block architecture processor-based systems is disclosed. In one aspect, a computer processor device, based on a block architecture, provides an instruction block header cache dedicated to caching instruction block header data. Upon a subsequent fetch of an instruction block, cached instruction block header data may be retrieved from the instruction block header cache (if present) and used to optimize processing of the instruction block. In some aspects, the instruction block header data may include a microarchitectural block header (MBH) generated upon the first decoding of the instruction block by an MBH generation circuit. The MBH may contain static or dynamic information about the instructions within the instruction block. As non-limiting examples, the information may include data relating to register reads and writes, load and store operations, branch information, predicate information, special instructions, and/or serial execution preferences.
What is claimed is:

1. A block-based computer processor device of a block architecture processor-based system, comprising: an instruction block header cache comprising a plurality of instruction block header cache entries each configured to store instruction block header data corresponding to an instruction block; and an instruction block header cache controller configured to: determine whether an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next; and responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier, provide the instruction block header data of the instruction block header cache entry to an execution pipeline.

2. The block-based computer processor device of claim 1, wherein: the plurality of instruction block header cache entries are each configured to store a microarchitectural block header (MBH) as the instruction block header data; the block-based computer processor device further comprises an MBH generation circuit configured to generate an MBH for the instruction block based on decoding of the instruction block; and the instruction block header cache controller is further configured to, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier, store the MBH of the instruction block as a new instruction block header cache entry.

3. The block-based computer processor device of claim 2, wherein the MBH comprises one or more of data relating to register reads and writes within the instruction block, data relating to load and store operations within the instruction block, data relating to branches within the instruction block, data related to predicate information within the instruction block, data related to special instructions within the instruction block, and data related to serial execution preferences for the instruction block.

4. The block-based computer processor device of claim 2, wherein the instruction block header cache controller is further configured to, further responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier: prior to the instruction block being committed, determine whether the MBH provided to the execution pipeline corresponds to the MBH previously generated; and responsive to determining that the MBH provided to the execution pipeline does not correspond to the MBH previously generated, store the MBH previously generated of the instruction block in an instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block.

5.
The block-based computer processor device of claim 1, wherein: the plurality of instruction block header cache entries are each configured to store an architectural block header (ABH) as the instruction block header data; and the instruction block header cache controller is further configured to, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier, store the ABH of the instruction block as a new instruction block header cache entry.

6. The block-based computer processor device of claim 1, wherein the plurality of instruction block header cache entries are each further configured to store an instruction block virtual address for indexing and tagging.

7. The block-based computer processor device of claim 1, wherein the plurality of instruction block header cache entries are each further configured to store a subset of bits of an instruction block virtual address for indexing and tagging.

8. The block-based computer processor device of claim 1 integrated into an integrated circuit (IC).

9. The block-based computer processor device of claim 1 integrated into a device selected from the group consisting of: a set top box; an entertainment unit; a navigation device; a communications device; a fixed location data unit; a mobile location data unit; a global positioning system (GPS) device; a mobile phone; a cellular phone; a smart phone; a session initiation protocol (SIP) phone; a tablet; a phablet; a server; a computer; a portable computer; a mobile computing device; a wearable computing device; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; an automobile; a vehicle component; avionics systems; a drone; and a multicopter.

10. A method for caching instruction block header data of instruction blocks in a block-based computer processor device, comprising: determining, by an instruction block header cache controller, whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next; and responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier, providing instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline.

11.
The method of claim 10, wherein: the plurality of instruction block header cache entries are each configured to store a microarchitectural block header (MBH) as the instruction block header data; and the method further comprises: generating, by an MBH generation circuit, an MBH for the instruction block based on decoding of the instruction block; and responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier, storing, by the instruction block header cache controller, the MBH of the instruction block as a new instruction block header cache entry.

12. The method of claim 11, wherein the MBH comprises one or more of data relating to register reads and writes within the instruction block, data relating to load and store operations within the instruction block, data relating to branches within the instruction block, data related to predicate information within the instruction block, data related to special instructions within the instruction block, and data related to serial execution preferences for the instruction block.

13. The method of claim 11, comprising, further responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier: prior to the instruction block being committed, determining whether the MBH provided to the execution pipeline corresponds to the MBH previously generated; and responsive to determining that the MBH provided to the execution pipeline does not correspond to the MBH previously generated, storing the MBH previously generated of the instruction block in an instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block.

14. The method of claim 10, wherein: the plurality of instruction block header cache entries are each configured to store an architectural block header (ABH) as the instruction block header data; and the method further comprises, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier, storing the ABH of the instruction block as a new instruction block header cache entry.

15. The method of claim 10, wherein the plurality of instruction block header cache entries are each further configured to store an instruction block virtual address for indexing and tagging.

16. The method of claim 10, wherein the plurality of instruction block header cache entries are each further configured to store a subset of bits of an instruction block virtual address for indexing and tagging.

17.
A block-based computer processor device of a block architecture processor-based system, comprising: a means for determining whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next; and a means for providing instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier.

18. The block-based computer processor device of claim 17, wherein: the plurality of instruction block header cache entries are each configured to store a microarchitectural block header (MBH) as the instruction block header data; and the block-based computer processor device further comprises: a means for generating an MBH for the instruction block based on decoding of the instruction block; and a means for storing the MBH of the instruction block as a new instruction block header cache entry, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier.

19. The block-based computer processor device of claim 18, further comprising: a means for determining, prior to the instruction block being committed, whether the MBH provided to the execution pipeline corresponds to the MBH previously generated, further responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier; and a means for storing the MBH previously generated of the instruction block in an instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block, responsive to determining that the MBH provided to the execution pipeline does not correspond to the MBH previously generated.

20. The block-based computer processor device of claim 17, wherein: the plurality of instruction block header cache entries are each configured to store an architectural block header (ABH) as the instruction block header data; and the block-based computer processor device further comprises a means for storing the ABH of the instruction block as a new instruction block header cache entry, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier.

21.
A non-transitory computer-readable medium having stored thereon computer-executable instructions which, when executed by a processor, cause the processor to: determine whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next; and responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier, provide instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline.

22. The non-transitory computer-readable medium of claim 21 having stored thereon computer-executable instructions which, when executed by a processor, further cause the processor to: generate a microarchitectural block header (MBH) for the instruction block based on decoding of the instruction block; and responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier, store, by an instruction block header cache controller, the MBH of the instruction block as the instruction block header data of a new instruction block header cache entry.

23. The non-transitory computer-readable medium of claim 22, wherein the MBH comprises one or more of data relating to register reads and writes within the instruction block, data relating to load and store operations within the instruction block, data relating to branches within the instruction block, data relating to predicate information within the instruction block, data relating to special instructions within the instruction block, and data relating to serial execution preferences for the instruction block.

24. The non-transitory computer-readable medium of claim 22 having stored thereon computer-executable instructions which, when executed by a processor, further cause the processor to, further responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier: prior to the instruction block being committed, determine whether the MBH provided to the execution pipeline corresponds to the MBH previously generated; and responsive to determining that the MBH provided to the execution pipeline does not correspond to the MBH previously generated, store the MBH previously generated of the instruction block in an instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block.

25. The non-transitory computer-readable medium of claim 21 having stored thereon computer-executable instructions which, when executed by a processor, further cause the processor to, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier, store an architectural block header (ABH) of the instruction block as the instruction block header data for a new instruction block header cache entry.

26.
The non-transitory computer-readable medium of claim 21, wherein the plurality of instruction block header cache entries are each further configured to store an instruction block virtual address for indexing and tagging.

27. The non-transitory computer-readable medium of claim 21, wherein the plurality of instruction block header cache entries are each further configured to store a subset of bits of an instruction block virtual address for indexing and tagging.
CACHING INSTRUCTION BLOCK HEADER DATA IN BLOCK ARCHITECTURE PROCESSOR-BASED SYSTEMS

PRIORITY APPLICATION

[0001] The present application claims priority to U.S. Patent Application Serial No. 15/688,191, filed August 28, 2017 and entitled "CACHING INSTRUCTION BLOCK HEADER DATA IN BLOCK ARCHITECTURE PROCESSOR-BASED SYSTEMS," the contents of which are incorporated herein by reference in their entirety.

BACKGROUND

I. Field of the Disclosure

[0002] The technology of the disclosure relates generally to processor-based systems based on block architectures, and, in particular, to optimizing the processing of instruction blocks by block-based computer processor devices.

II. Background

[0003] In conventional computer architectures, an instruction is the most basic unit of work, and encodes all the changes to the architectural state that result from its execution (e.g., each instruction describes the registers and/or memory regions that it modifies). Therefore, a valid architectural state is definable after execution of each instruction. In contrast, block architectures (such as the E2 architecture and the Cascade architecture, as non-limiting examples) enable instructions to be fetched and processed in groups called "instruction blocks," which have no defined architectural state except at boundaries between instruction blocks. In block architectures, the architectural state needs to be defined and recoverable only at block boundaries. Thus, an instruction block, rather than an individual instruction, is the basic unit of work, as well as the basic unit for advancing an architectural state.

[0004] Block architectures conventionally employ an architecturally defined instruction block header, referred to herein as an "architectural block header" (ABH), to express meta-information about a given block of instructions. Each ABH is typically organized as a fixed-size preamble to each block of instructions in the instruction memory. At the very least, an ABH must be able to demarcate block boundaries, and thus the ABH exists outside of the regular set of instructions which perform data and control flow manipulation.

[0005] However, other information may be very useful for optimizing processing of an instruction block by a computer processing device. For example, data indicating a number of instructions in the instruction block, a number of bytes that make up the instruction block, a number of general purpose registers modified by the instructions in the instruction block, specific registers being modified by the instruction block, and/or a number of stores and register writes performed within the instruction block may assist the computer processing device in processing the instruction block more efficiently. While this additional data could be provided within each ABH, this would require a larger amount of storage space, which in turn would increase pressure on the computer processing device's instruction cache hierarchy that is responsible for caching ABHs. The additional data could also be determined on the fly by hardware when decoding an instruction block, but the decoding would have to be repeatedly performed each time the instruction block is fetched and decoded.

SUMMARY OF THE DISCLOSURE

[0006] Aspects according to the disclosure include caching instruction block header data in block architecture processor-based systems.
In this regard, in one aspect, a computer processor device, based on a block architecture, provides an instruction block header cache, which is a cache structure that is exclusively dedicated to caching instruction block header data. Upon a subsequent fetch of an instruction block, the cached instruction block header data may be retrieved from the instruction block header cache (if present) and used to optimize processing of the instruction block. In some aspects, the instruction block header data cached by the instruction block header cache may include "microarchitectural block headers" (MBHs), which are generated upon the first decoding of an instruction block and which contain additional metadata for the instruction block. Each MBH is dynamically constructed by an MBH generation circuit, and may contain static or dynamic information about the instruction block's instructions. As non-limiting examples, the information may include data relating to register reads and writes, load and store operations, branch information, predicate information, special instructions, and/or serial execution preferences. Some aspects may provide that the instruction block header data cached by the instruction block header cache may include conventional architectural block headers (ABHs) to alleviate pressure on the instruction cache hierarchy of the computer processor device.[0007] In another aspect, a block-based computer processor device of a block architecture processor-based system is provided. The block-based computer processor device comprises an instruction block header cache comprising a plurality of instruction block header cache entries, each configured to store instruction block header data corresponding to an instruction block. The block-based computer processor device further comprises an instruction block header cache controller. The instruction block header cache controller is configured to determine whether an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next. The instruction block header cache controller is further configured to, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier, provide the instruction block header data of the instruction block header cache entry to an execution pipeline.[0008] In another aspect, a method for caching instruction block header data of instruction blocks in a block-based computer processor device is provided. The method comprises determining, by an instruction block header cache controller, whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next. 
The method further comprises, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier, providing instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline.

[0009] In another aspect, a block-based computer processor device of a block architecture processor-based system is provided. The block-based computer processor device comprises a means for determining whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next. The block-based computer processor device further comprises a means for providing instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier.

[0010] In another aspect, a non-transitory computer-readable medium having stored thereon computer-executable instructions is provided. The computer-executable instructions, when executed by a processor, cause the processor to determine whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next. The computer-executable instructions further cause the processor to, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier, provide instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline.

BRIEF DESCRIPTION OF THE FIGURES

[0011] Figure 1 is a block diagram of an exemplary block architecture processor-based system including an instruction block header cache providing caching of instruction block headers, and an optional microarchitectural block header (MBH) generation circuit;

[0012] Figure 2 is a block diagram illustrating the internal structure of an exemplary instruction block header cache of Figure 1;

[0013] Figures 3A and 3B are a flowchart illustrating exemplary operations of the instruction block header cache of Figure 1 for caching instruction block header data comprising an MBH generated by the MBH generation circuit of Figure 1;

[0014] Figure 4 is a flowchart illustrating additional exemplary operations of the instruction block header cache of Figure 1 for caching instruction block header data comprising an architectural block header (ABH); and

[0015] Figure 5 is a block diagram of an exemplary processor-based system that can include the instruction block header cache and the MBH generation circuit of Figure 1.

DETAILED DESCRIPTION

[0016] With reference now to the drawing figures, several exemplary aspects of the present disclosure are described.
The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.[0017] Aspects disclosed in the detailed description include caching instruction block header data in block architecture processor-based systems. In this regard, Figure 1 illustrates an exemplary block architecture processor-based system 100 that includes a computer processor device 102. The computer processor device 102 implements a block architecture, and is configured to execute a sequence of instruction blocks, such as instruction blocks 104(0)- 104(X). In some aspects, the computer processor device 102 may be one of multiple processor devices or cores, each executing separate sequences of instruction blocks 104(0)-104(X) and/or coordinating to execute a single sequence of instruction blocks 104(0)- 104(X).[0018] In exemplary operation, an instruction cache 106 (for example, a Level 1 (LI) instruction cache) of the computer processor device 102 receives instruction blocks (e.g., instruction blocks 104(0)- 104(X)) for execution. It is to be understood that, at any given time, the computer processor device 102 may be processing more or fewer instruction blocks than the instruction blocks 104(0)-104(X) illustrated in Figure 1. Each of the instruction block 104(0)-104(X) includes a corresponding instruction block identifier 108(0)-108(X), which provides a unique handle by which the instruction block 104(0)- 104(X) may be referenced. In some aspects, the instruction block identifiers 108(0)- 108(X) may comprise a physical or virtual memory address at which the corresponding instruction block 104(0)- 104(X) begins. The instruction blocks 104(0)- 104(X) also each include a corresponding architectural block header (ABH) 110(0)- 110(X). Each ABH 110(0)-110(X) is a fixed-size preamble to the instruction block 104(0)- 104(X), and provides static information that is generated by a compiler and that is associated with the instruction block 104(0)- 104(X). At a minimum, each of the ABHs 110(0)-110(X) includes data demarcating the boundaries of the instruction block 104(0)- 104(X) (e.g., a number of instructions within the instruction block 104(0)- 104(X) and/or a number of bytes occupied by the instruction block 104(0)- 104(X), as non-limiting examples).[0019] A block predictor 112 determines a predicted execution path of the instruction blocks 104(0)- 104(X). In some aspects, the block predictor 112 may predict an execution path in a manner analogous to a branch predictor of a conventional out-of- order processor (OoP). A block sequencer 114 within an execution pipeline 116 orders the instruction blocks 104(0)- 104(X), and forwards the instruction blocks 104(0)- 104(X) to one of one or more instruction decode stages 118 for decoding.[0020] After decoding, the instruction blocks 104(0)- 104(X) are held in an instruction buffer 120 pending execution. An instruction scheduler 122 distributes instructions of the active instruction blocks 104(0)-104(X) to one of one or more execution units 124 of the computer processor device 102. As non-limiting examples, the one or more execution units 124 may comprise an arithmetic logic unit (ALU) and/or a floating-point unit. 
The one or more execution units 124 may provide results of instruction execution to a load/store unit 126, which in turn may store the execution results in a data cache 128, such as a Level 1 (L1) data cache.

[0021] The computer processor device 102 may encompass any one of known digital logic elements, semiconductor circuits, processing cores, and/or memory structures, among other elements, or combinations thereof. Aspects described herein are not restricted to any particular arrangement of elements, and the disclosed techniques may be easily extended to various structures and layouts on semiconductor dies or packages. Additionally, it is to be understood that the computer processor device 102 may include additional elements not shown in Figure 1, may include a different number of the elements shown in Figure 1, and/or may omit elements shown in Figure 1.

[0022] While data that is conventionally provided by the ABHs 110(0)-110(X) of the instruction blocks 104(0)-104(X) is useful in processing the instructions contained within the instruction blocks 104(0)-104(X), a greater variety of per-instruction-block metadata could allow the elements of the execution pipeline 116 to further optimize the fetching, decoding, scheduling, execution, and completion of the instruction blocks 104(0)-104(X). However, including such data as part of the ABHs 110(0)-110(X) would further increase the size of the ABHs 110(0)-110(X), and consequently would consume a larger amount of storage. Moreover, larger ABHs 110(0)-110(X) would reduce the capacity of the instruction cache 106, which may already be stressed by the generally lower density of instructions in block architectures.

[0023] Thus, to provide richer data regarding the properties of the instruction blocks 104(0)-104(X), the computer processor device 102 includes a microarchitectural block header (MBH) generation circuit ("MBH GENERATION CIRCUIT") 130. The MBH generation circuit 130 receives data from the one or more instruction decode stages 118 of the execution pipeline 116 after decoding of an instruction block 104(0)-104(X), and generates an MBH 132 for the decoded instruction block 104(0)-104(X). The data included as part of the MBH 132 comprises static or dynamic information about the instructions within the instruction block 104(0)-104(X) that may be useful to the elements of the execution pipeline 116. Such data may include, as non-limiting examples, data relating to register reads and writes within the instruction block 104(0)-104(X), data relating to load and store operations within the instruction block 104(0)-104(X), data relating to branches within the instruction block 104(0)-104(X), data related to predicate information within the instruction block 104(0)-104(X), data related to special instructions within the instruction block 104(0)-104(X), and/or data related to serial execution preferences for the instruction block 104(0)-104(X).

[0024] The use of the MBH 132 may help to improve processing of the instruction blocks 104(0)-104(X), thereby improving the overall performance of the computer processor device 102. However, the MBH 132 for each one of the instruction blocks 104(0)-104(X) would have to be repeatedly generated each time the instruction block 104(0)-104(X) is decoded by the one or more instruction decode stages 118 of the execution pipeline 116.
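By way of illustration only, an MBH carrying the categories of data listed above might be modeled as a simple record, as in the following Python sketch. The field names are hypothetical and are chosen merely to mirror those categories; the disclosure does not prescribe any particular MBH layout or encoding.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MicroarchitecturalBlockHeader:
        """Hypothetical per-instruction-block metadata record (field names assumed)."""
        registers_read: tuple = ()              # register reads within the block
        registers_written: tuple = ()           # register writes within the block
        num_loads: int = 0                      # load operations within the block
        num_stores: int = 0                     # store operations within the block
        num_branches: int = 0                   # branch information for the block
        has_predicates: bool = False            # predicate information present
        has_special_instructions: bool = False  # special instructions present
        prefer_serial_execution: bool = False   # serial execution preference hint

Because the record is a frozen dataclass, two MBHs can be compared for equality directly, which is convenient for the cache-update check described later.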
Moreover, a next instruction block 104(0)-104(X) could not be executed until the MBH 132 for the previous instruction block 104(0)-104(X) has been generated, which requires that all of the instructions of the previous instruction block 104(0)-104(X) have at least been decoded.

[0025] In this regard, the computer processor device 102 provides an instruction block header cache 134, which stores a plurality of instruction block header cache entries 136(0)-136(N), and an instruction block header cache controller 138. The instruction block header cache 134 is a cache structure dedicated to exclusively caching instruction block header data. In some aspects, the instruction block header data cached by the instruction block header cache 134 comprises MBHs 132 generated by the MBH generation circuit 130. Such aspects enable the computer processor device 102 to realize the performance benefits of the instruction block header data provided by the MBH 132 without the cost of relearning the instruction block header data every time the corresponding instruction block 104(0)-104(X) is fetched and decoded. Other aspects may provide that the instruction block header data comprises the ABHs 110(0)-110(X) of the instruction blocks 104(0)-104(X). Because aspects disclosed herein may store the MBHs 132 and/or the ABHs 110(0)-110(X), both may be referred to herein as "instruction block header data."

[0026] In exemplary operation, the instruction block header cache 134 operates in a manner analogous to a conventional cache. The instruction block header cache controller 138 receives an instruction block identifier 108(0)-108(X) of a next instruction block 104(0)-104(X) to be fetched and executed. The instruction block header cache controller 138 then accesses the instruction block header cache 134 to determine whether the instruction block header cache 134 contains an instruction block header cache entry 136(0)-136(N) that corresponds to the instruction block identifier 108(0)-108(X). If so, a cache hit results, and the instruction block header data stored by the instruction block header cache entry 136(0)-136(N) is provided to the execution pipeline 116 to optimize processing of the corresponding instruction block 104(0)-104(X).

[0027] As noted above, some aspects of the instruction block header cache 134 store the MBH 132 as instruction block header data within the instruction block header cache entries 136(0)-136(N). In such aspects, after a cache hit occurs, the instruction block header cache controller 138 compares the MBH 132 generated by the MBH generation circuit 130 after decoding the corresponding instruction block 104(0)-104(X) with the instruction block header data provided from the instruction block header cache 134. If the MBH 132 previously generated does not match the instruction block header data, the instruction block header cache controller 138 updates the instruction block header cache 134 by storing the MBH 132 previously generated in the instruction block header cache entry 136(0)-136(N) corresponding to the instruction block 104(0)-104(X).

[0028] If no instruction block header cache entry 136(0)-136(N) corresponding to the instruction block identifier 108(0)-108(X) exists within the instruction block header cache 134 (i.e., a cache miss), the instruction block header cache controller 138 in some aspects stores instruction block header data for the associated instruction block 104(0)-104(X) as a new instruction block header cache entry 136(0)-136(N).
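The lookup, hit, and miss handling just described can be summarized with a short behavioral model. The Python sketch below is illustrative only: the class and method names are invented for exposition, a direct-mapped organization is assumed purely for simplicity, and an actual instruction block header cache 134 would be realized in hardware.

    class InstructionBlockHeaderCache:
        """Behavioral model of the instruction block header cache: each entry
        holds a valid bit, a tag, and cached header data (an MBH or an ABH).
        The geometry and names here are assumptions, not the disclosure's."""

        def __init__(self, num_entries):
            self.entries = [{"valid": False, "tag": None, "header_data": None}
                            for _ in range(num_entries)]

        def _index(self, block_id):
            # Direct-mapped indexing on the low-order bits of the instruction
            # block identifier (e.g., its virtual address); one possible scheme.
            return block_id % len(self.entries)

        def lookup(self, block_id):
            """Return the cached header data on a hit, or None on a miss."""
            entry = self.entries[self._index(block_id)]
            if entry["valid"] and entry["tag"] == block_id:
                return entry["header_data"]  # hit: forward to the execution pipeline
            return None                      # miss

        def fill(self, block_id, header_data):
            """Store header data as a new (or refreshed) cache entry."""
            entry = self.entries[self._index(block_id)]
            entry["valid"] = True
            entry["tag"] = block_id
            entry["header_data"] = header_data

A set-associative organization, partial-address tags, and separate tag and data arrays, as described below with reference to Figure 2, could be layered onto the same interface.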
In aspects in which the instruction block header data stored by the instruction block header cache entry 136(0)-136(N) comprises the MBH 132, the instruction block header cache controller 138 receives and stores the MBH 132 generated by the MBH generation circuit 130 as the instruction block header data after decoding of the corresponding instruction block 104(0)-104(X) is performed by the one or more instruction decode stages 118 of the execution pipeline 116. Aspects of the instruction block header cache 134 in which the instruction block header data comprises the ABH 110(0)-ABH 110(X) store the ABH 110(0)-ABH 110(X) of the corresponding instruction block 104(0)-104(X). [0029] Figure 2 provides a more detailed illustration of the contents of the instruction block header cache 134 of Figure 1. As seen in the example of Figure 2, the instruction block header cache 134 comprises a tag array 200 that stores a plurality of tag array entries 202(0)-202(N), and further comprises a data array 204 comprising the instruction block header cache entries 136(0)-136(N) of Figure 1. Each of the tag array entries 202(0)-202(N) includes a valid indicator ("VALID") 206(0)-206(N) representing a current validity of the tag array entry 202(0)-202(N). The tag array entries 202(0)-202(N) each also include a tag 208(0)-208(N), which serves as an identifier for the corresponding instruction block header cache entry 136(0)-136(N). In some aspects, the tags 208(0)-208(N) may comprise a virtual address of the instruction block 104(0)-104(X) for which instruction block header data is being cached. Some aspects may further provide that the tags 208(0)-208(N) comprise only a subset of the bits (e.g., only the lower order bits) of the virtual address of the instruction block 104(0)-104(X). [0030] Similar to the tag array entries 202(0)-202(N), each of the instruction block header cache entries 136(0)-136(N) provides a valid indicator ("VALID") 210(0)-210(N) representing a current validity of the instruction block header cache entry 136(0)-136(N). The instruction block header cache entries 136(0)-136(N) also store instruction block header data 212(0)-212(N). As noted above, the instruction block header data 212(0)-212(N) may comprise the MBH 132 generated by the MBH generation circuit 130 for the corresponding instruction block 104(0)-104(X), or may comprise the ABH 110(0)-110(X) of the instruction block 104(0)-104(X). [0031] To illustrate exemplary operations of the instruction block header cache 134 and the instruction block header cache controller 138 of Figure 1 for caching instruction block header data, Figures 3A and 3B are provided. In the example of Figures 3A and 3B, it is assumed that the instruction block header data comprises the MBH 132 generated by the MBH generation circuit 130 of Figure 1. Elements of Figures 1 and 2 are referenced in describing Figures 3A and 3B, for the sake of clarity. Operations in Figure 3A begin with the instruction block header cache controller 138 determining whether an instruction block header cache entry of the plurality of instruction block header cache entries 136(0)-136(N) of the instruction block header cache 134 corresponds to an instruction block identifier 108(0)-108(X) of an instruction block 104(0)-104(X) to be fetched next (block 300).
In this regard, the instruction block header cache controller 138 may be referred to herein as "a means for determining whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next." [0032] If no corresponding instruction block header cache entry 136(0)-136(N) exists (i.e., a cache miss occurs), processing resumes at block 302 of Figure 3B. However, if the instruction block header cache controller 138 determines at decision block 300 that an instruction block header cache entry 136(0)-136(N) corresponds to the instruction block identifier 108(0)-108(X) (i.e., a cache hit), the instruction block header cache controller 138 provides the instruction block header data 212(0)-212(N) (in this example, a cached MBH 132) of the instruction block header cache entry of the plurality of instruction block header cache entries 136(0)-136(N) corresponding to the instruction block 104(0)-104(X) to the execution pipeline 116 (block 304). Accordingly, the instruction block header cache controller 138 may be referred to herein as "a means for providing instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier." [0033] In some aspects, the MBH generation circuit 130 subsequently generates an MBH 132 for the instruction block 104(0)-104(X) based on decoding of the instruction block 104(0)-104(X) (block 306). The MBH generation circuit 130 thus may be referred to herein as "a means for generating an MBH for the instruction block based on decoding of the instruction block." The instruction block header cache controller 138 then determines whether the MBH 132 provided to the execution pipeline 116 corresponds to the MBH 132 previously generated (block 308). In this regard, the instruction block header cache controller 138 may be referred to herein as "a means for determining, prior to the instruction block being committed, whether the MBH provided to the execution pipeline corresponds to the MBH previously generated, further responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier." [0034] If the instruction block header cache controller 138 determines at decision block 308 that the MBH 132 provided to the execution pipeline 116 corresponds to the MBH 132 previously generated, processing continues (block 310). However, if the MBH 132 previously generated does not correspond to the MBH 132 provided to the execution pipeline 116, the instruction block header cache controller 138 stores the MBH 132 previously generated of the instruction block 104(0)-104(X) in an instruction block header cache entry of the plurality of instruction block header cache entries 136(0)-136(N) corresponding to the instruction block 104(0)-104(X) (block 312).
Accordingly, the instruction block header cache controller 138 may be referred to herein as "a means for storing the MBH previously generated of the instruction block in an instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block, responsive to determining that the MBH provided to the execution pipeline does not correspond to the MBH previously generated." Processing then continues at block 310. [0035] Referring now to Figure 3B, if a cache miss occurs at decision block 300 of Figure 3A, the MBH generation circuit 130 generates an MBH 132 for the instruction block 104(0)- 104(X) based on decoding of the instruction block 104(0)- 104(X) (block 302). The MBH generation circuit 130 thus may be referred to herein as "a means for generating an MBH for the instruction block based on decoding of the instruction block." The instruction block header cache controller 138 then stores the MBH 132 of the instruction block 104(0)-104(X) as a new instruction block header cache entry 136(0)- 136(N) (block 314). In this regard, the instruction block header cache controller 138 may be referred to herein as "a means for storing the MBH of the instruction block as a new instruction block header cache entry, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier." Processing then continues at block 316.[0036] Figure 4 is a flowchart illustrating additional exemplary operations of the instruction block header cache 134 and the instruction block header cache controller 138 of Figure 1 for caching instruction block header data comprising an ABH, such as one of the ABHs 110(0)-110(X). For the sake of clarity, elements of Figures 1 and 2 are referenced in describing Figure 4. In Figure 4, operations begin with the instruction block header cache controller 138 determining whether an instruction block header cache entry of a plurality of instruction block header cache entries 136(0)-136(N) of the instruction block header cache 134 corresponds to an instruction block identifier 108(0)- 108(X) of an instruction block 104(0)- 104(X) to be fetched next (block 400). Accordingly, the instruction block header cache controller 138 may be referred to herein as "a means for determining whether an instruction block header cache entry of a plurality of instruction block header cache entries of an instruction block header cache corresponds to an instruction block identifier of an instruction block to be fetched next."[0037] If the instruction block header cache controller 138 determines at decision block 400 that an instruction block header cache entry 136(0)-136(N) corresponds to the instruction block identifier 108(0)-108(X) (i.e., a cache hit), the instruction block header cache controller 138 provides the instruction block header data 212(0)-212(N) (in this example, a cached ABH 110(0)-110(X)) of the instruction block header cache entry of the plurality of instruction block header cache entries 136(0)- 136(N) corresponding to the instruction block 104(0)-104(X) to the execution pipeline 116 (block 402). 
The instruction block header cache controller 138 thus may be referred to herein as "a means for providing instruction block header data of the instruction block header cache entry of the plurality of instruction block header cache entries corresponding to the instruction block to an execution pipeline, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache corresponds to the instruction block identifier." Processing then continues at block 404.[0038] However, if it is determined at decision block 400 that no corresponding instruction block header cache entry 136(0)- 136(N) exists (i.e., a cache miss occurs), the instruction block header cache controller 138 stores the ABH 110(0)-110(X) of the instruction block 104(0)-104(X) as a new instruction block header cache entry 136(0)- 136(N) (block 406). In this regard, the instruction block header cache controller 138 may be referred to herein as "a means for storing the ABH of the instruction block as a new instruction block header cache entry, responsive to determining that an instruction block header cache entry of the plurality of instruction block header cache entries of the instruction block header cache does not correspond to the instruction block identifier." Processing then continues at block 404.[0039] Caching instruction block header data in block architecture processor-based systems according to aspects disclosed herein may be provided in or integrated into any processor-based system. Examples, without limitation, include a set top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a global positioning system (GPS) device, a mobile phone, a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a tablet, a phablet, a server, a computer, a portable computer, a mobile computing device, a wearable computing device (e.g., a smart watch, a health or fitness tracker, eyewear, etc.), a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, an automobile, a vehicle component, avionics systems, a drone, and a multicopter. [0040] In this regard, Figure 5 illustrates an example of a processor-based system 500 that corresponds to the block architecture processor-based system 100 of Figure 1. The processor-based system 500 includes one or more CPUs 502, each including one or more processors 504. The processor(s) 504 may comprise the instruction block header cache controller ("IBHCC") 138 and the MBH generation circuit ("MBHGC") 130 of Figure 1. The CPU(s) 502 may have cache memory 506 that is coupled to the processor(s) 504 for rapid access to temporarily stored data. The cache memory 506 may comprise the instruction block header cache ("IBHC") 134 of Figure 1. The CPU(s) 502 is coupled to a system bus 508 and can intercouple master and slave devices included in the processor-based system 500. As is well known, the CPU(s) 502 communicates with these other devices by exchanging address, control, and data information over the system bus 508. 
For example, the CPU(s) 502 can communicate bus transaction requests to a memory controller 510 as an example of a slave device.[0041] Other master and slave devices can be connected to the system bus 508. As illustrated in Figure 5, these devices can include a memory system 512, one or more input devices 514, one or more output devices 516, one or more network interface devices 518, and one or more display controllers 520, as examples. The input device(s) 514 can include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output device(s) 516 can include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface device(s) 518 can be any devices configured to allow exchange of data to and from a network 522. The network 522 can be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a BLUETOOTH™ network, and the Internet. The network interface device(s) 518 can be configured to support any type of communications protocol desired. The memory system 512 can include one or more memory units 524(0)-524(N).[0042] The CPU(s) 502 may also be configured to access the display controller(s) 520 over the system bus 508 to control information sent to one or more displays 526. The display controller(s) 520 sends information to the display(s) 526 to be displayed via one or more video processors 528, which process the information to be displayed into a format suitable for the display(s) 526. The display(s) 526 can include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, etc.[0043] Those of skill in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device, or combinations of both. The master devices, and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip, as examples. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.[0044] The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. 
A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).[0045] The aspects disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.[0046] It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[0047] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
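To make the structures described above concrete, the following sketch models the MBH 132 of Figure 1 and the tag array 200 and data array 204 of Figure 2 in Python. This is a minimal illustration only: the fields of MicroarchBlockHeader are hypothetical examples of the per-block metadata listed in paragraph [0023], and the disclosure does not prescribe any particular encoding, width, or naming for these fields.

    from dataclasses import dataclass
    from typing import FrozenSet, Optional, Tuple

    @dataclass(frozen=True)
    class MicroarchBlockHeader:
        """Models the MBH 132; the fields are hypothetical examples of the
        per-block metadata described in paragraph [0023]."""
        register_reads: FrozenSet[int]    # registers read within the block
        register_writes: FrozenSet[int]   # registers written within the block
        load_count: int                   # load operations within the block
        store_count: int                  # store operations within the block
        branch_targets: Tuple[int, ...]   # branches within the block
        predicate_info: int               # encoded predicate information
        has_special_instructions: bool    # special instructions present
        prefers_serial_execution: bool    # serial execution preference

    @dataclass
    class TagArrayEntry:
        """One of the tag array entries 202(0)-202(N) of Figure 2."""
        valid: bool = False               # valid indicator 206(0)-206(N)
        tag: int = 0                      # tag 208(0)-208(N), e.g., low-order
                                          # bits of the block's virtual address

    @dataclass
    class HeaderCacheEntry:
        """One of the instruction block header cache entries 136(0)-136(N)."""
        valid: bool = False               # valid indicator 210(0)-210(N)
        header_data: Optional[MicroarchBlockHeader] = None  # data 212(0)-212(N)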
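Building on those structures, the next sketch traces the control flow of Figures 3A and 3B: a lookup at fetch (block 300), providing cached header data on a hit (block 304), and storing a newly generated MBH on a miss (block 314) or when the cached copy no longer matches the freshly decoded one (blocks 306-312). A direct-mapped placement policy and these method names are assumptions made purely for brevity; the disclosure does not mandate an associativity or an indexing function.

    class HeaderCacheController:
        """Sketch of the instruction block header cache controller 138."""

        def __init__(self, num_entries: int, tag_bits: int = 16):
            self.tag_mask = (1 << tag_bits) - 1
            self.tags = [TagArrayEntry() for _ in range(num_entries)]
            self.data = [HeaderCacheEntry() for _ in range(num_entries)]

        def _index_and_tag(self, block_id: int):
            # Hypothetical direct-mapped placement: index by modulo, tag from
            # the low-order bits of the instruction block identifier.
            return block_id % len(self.tags), block_id & self.tag_mask

        def lookup(self, block_id: int):
            """Block 300: return cached header data on a hit, None on a miss."""
            idx, tag = self._index_and_tag(block_id)
            entry = self.data[idx]
            if self.tags[idx].valid and self.tags[idx].tag == tag and entry.valid:
                return entry.header_data  # provided to the pipeline (block 304)
            return None

        def note_decoded_mbh(self, block_id, decoded_mbh, cached_mbh):
            """Blocks 306-314: called after the MBH generation circuit 130
            produces an MBH for the just-decoded instruction block."""
            if cached_mbh is not None and decoded_mbh == cached_mbh:
                return  # cached copy still matches (block 310)
            # Miss (block 314) or stale hit (block 312): store the new MBH.
            idx, tag = self._index_and_tag(block_id)
            self.tags[idx] = TagArrayEntry(valid=True, tag=tag)
            self.data[idx] = HeaderCacheEntry(valid=True, header_data=decoded_mbh)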
A semiconductor fabrication facility architecture which includes a fabrication facility, a middleware component, a real time dispatcher application program interface coupled between the fabrication facility and the middleware component, and a work in progress application program interface coupled between the fabrication facility and the middleware component. The fabrication facility includes a manufacturing execution system and a real time dispatch system. The manufacturing execution system tracks overall processing of semiconductor wafers and the real time dispatch system provides near real time information regarding processing of semiconductor wafers. The real time dispatcher application program interface provides a common interface for providing information to the middleware component. The work in progress application program interface provides a common interface for communicating to the fabrication facility from the middleware component.
1. A semiconductor fabrication facility architecture comprising:a fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a middleware component, a real time dispatcher application program interface coupled between the fabrication facility and the middleware component, the real time dispatcher application program interface providing a common interface for providing information to the middleware component; and, a work in progress application program interface coupled between the fabrication facility and the middleware component, the work in progress application program interface providing a common interface for communicating to the fabrication facility from the middleware component; and wherein the fabrication facility includes a front end portion; the real time dispatcher application program interface includes a front end real time dispatcher application program interface coupled to the fabrication facility front end portion; and, the work in progress application program interface includes a front end work in progress application program interface coupled to the fabrication facility front end portion. 2. The semiconductor fabrication facility architecture of claim 1 further comprising:an inventory management system coupled to the middleware component, the inventory management system providing centralized lot database information for global visibility for product movement. 3. The semiconductor fabrication facility architecture of claim 1 wherein:the inventory management system includes a work in progress management component, the work in progress management component providing a global view of work in progress as well as static inventory. 4. The semiconductor fabrication facility architecture of claim 1 further comprising:the inventory management system includes a lot start component, the lot start component providing lot start information relating to inventory management. 5. The semiconductor fabrication facility architecture of claim 1 further comprising:an enterprise resource planning system coupled to the middleware component. 6. The semiconductor fabrication facility architecture of claim 1 further comprising:a sapphire system coupled to the middleware component, the sapphire system providing a systematic approach to product performance history and reliability engineering. 7. 
A semiconductor fabrication facility architecture comprising:a fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a middleware component, a real time dispatcher application program interface coupled between the fabrication facility and the middleware component, the real time dispatcher application program interface providing a common interface for providing information to the middleware component; and, a work in progress application program interface coupled between the fabrication facility and the middleware component, the work in progress application program interface providing a common interface for communicating to the fabrication facility from the middleware component, and wherein the fabrication facility includes a back end portion; the real time dispatcher application program interface includes a back end real time dispatcher application program interface coupled to the fabrication facility back end portion; and, the work in progress application program interface includes a back end work in progress application program interface coupled to the fabrication facility back end portion. 8. A semiconductor fabrication architecture comprising:a middleware component, a first fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a first real time dispatcher application program interface coupled between the first fabrication facility and the middleware component, the first real time dispatcher application program interface providing a common interface for providing information to the middleware component; a first work in progress application program interface coupled between the fabrication facility and the middleware component, the first work in progress application program interface providing a common interface for communicating to the first fabrication facility from the middleware component; a second fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a second real time dispatcher application program interface coupled between the second fabrication facility and the middleware component, the second real time dispatcher application program interface providing a common interface for providing information to the middleware component; and, a second work in progress application program interface coupled between the fabrication facility and the middleware component, the second work in progress application program interface providing a common interface for communicating to the second fabrication facility from the middleware component; and wherein the first fabrication facility includes a front end portion; the first real time dispatcher application program interface includes a front end real time dispatcher application program interface coupled to the first fabrication facility front end portion; 
and, the first work in progress application program interface includes a front end work in progress application program interface coupled to the first fabrication facility front end portion. 9. The semiconductor fabrication architecture of claim 8 further comprising:an inventory management system coupled to the middleware component, the inventory management system providing centralized lot database information for global visibility for product movement. 10. The semiconductor fabrication architecture of claim 8 wherein:the inventory management system includes a work in progress management component, the work in progress management component providing a global view of work in progress as well as static inventory. 11. The semiconductor fabrication architecture of claim 8 further comprising:the inventory management system includes a lot start component, the lot start component providing lot start information relating to inventory management. 12. The semiconductor fabrication architecture of claim 8 further comprising:an enterprise resource planning system coupled to the middleware component. 13. The semiconductor fabrication architecture of claim 8 further comprising:a sapphire system coupled to the middleware component, the sapphire system providing a systematic approach to product performance history and reliability engineering. 14. A semiconductor fabrication architecture comprising:a middleware component, a first fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a first real time dispatcher application program interface coupled between the first fabrication facility and the middleware component, the first real time dispatcher application program interface providing a common interface for providing information to the middleware component; a first work in progress application program interface coupled between the fabrication facility and the middleware component, the first work in progress application program interface providing a common interface for communicating to the first fabrication facility from the middleware component; a second fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a second real time dispatcher application program interface coupled between the second fabrication facility and the middleware component, the second real time dispatcher application program interface providing a common interface for providing information to the middleware component; and, a second work in progress application program interface coupled between the fabrication facility and the middleware component, the second work in progress application program interface providing a common interface for communicating to the second fabrication facility from the middleware component; and wherein the first fabrication facility includes a back end portion; the first real time dispatcher application program interface includes a back end real time dispatcher application program interface coupled to the first fabrication facility back end portion; and, the first work in progress 
application program interface includes a back end work in progress application program interface coupled to the first fabrication facility back end portion. 15. A semiconductor fabrication architecture comprising:a middleware component, a first fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a first real time dispatcher application program interface coupled between the first fabrication facility and the middleware component, the first real time dispatcher application program interface providing a common interface for providing information to the middleware component; a first work in progress application program interface coupled between the fabrication facility and the middleware component, the first work in progress application program interface providing a common interface for communicating to the first fabrication facility from the middleware component; a second fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a second real time dispatcher application program interface coupled between the second fabrication facility and the middleware component, the second real time dispatcher application program interface providing a common interface for providing information to the middleware component; and, a second work in progress application program interface coupled between the fabrication facility and the middleware component, the second work in progress application program interface providing a common interface for communicating to the second fabrication facility from the middleware component; and wherein the second fabrication facility includes a front end portion; the second real time dispatcher application program interface includes a front end real time dispatcher application program interface coupled to the second fabrication facility front end portion; and, the second work in progress application program interface includes a front end work in progress application program interface coupled to the second fabrication facility front end portion. 16. 
A semiconductor fabrication architecture comprising:a middleware component, a first fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a first real time dispatcher application program interface coupled between the first fabrication facility and the middleware component, the first real time dispatcher application program interface providing a common interface for providing information to the middleware component; a first work in progress application program interface coupled between the fabrication facility and the middleware component, the first work in progress application program interface providing a common interface for communicating to the first fabrication facility from the middleware component; a second fabrication facility, the fabrication facility including a manufacturing execution system and a real time dispatch system, the manufacturing execution system tracking overall processing of semiconductor wafers, the real time dispatch system providing near real time information regarding processing of semiconductor wafers; a second real time dispatcher application program interface coupled between the second fabrication facility and the middleware component, the second real time dispatcher application program interface providing a common interface for providing information to the middleware component; and, a second work in progress application program interface coupled between the fabrication facility and the middleware component, the second work in progress application program interface providing a common interface for communicating to the second fabrication facility from the middleware component; and wherein the second fabrication facility includes a back end portion; the second real time dispatcher application program interface includes a back end real time dispatcher application program interface coupled to the second fabrication facility back end portion; and, the second work in progress application program interface includes a back end work in progress application program interface coupled to the second fabrication facility back end portion.
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to semiconductor manufacturing and more particularly to providing and receiving data related to semiconductor manufacturing in a uniform format. 2. Description of the Related Art Manufacturing semiconductor devices uses a plurality of discrete process steps to create a semiconductor circuit from raw semiconductor material. The discrete process steps, from the initial melt and refinement of the semiconductor material, the slicing of the semiconductor crystal into individual wafers, the fabrication stages (e.g., etching, doping, ion implanting or the like), to the packaging and final testing of the completed device, may be performed in different facilities in remote regions of the globe. One issue which arises in semiconductor manufacturing is that the various processes which may take place at discrete locations may make it difficult to track a semiconductor device through the fabrication process. Such tracking may be desirable for quality control as well as inventory management. In known semiconductor fabrication facilities, individual fabrication machines may provide and receive data regarding operating conditions during the fabrication process in many different data formats. Some of the data that is provided and received by the fabrication machines includes intrinsic data such as, for example, lot numbers, device model number or the like, as well as extrinsic data such as production test data, production conditions or the like. SUMMARY OF THE INVENTION In one embodiment, the invention relates to a semiconductor fabrication facility architecture which includes a fabrication facility, a middleware component, a real time dispatcher application program interface coupled between the fabrication facility and the middleware component, and a work in progress application program interface coupled between the fabrication facility and the middleware component. The fabrication facility includes a manufacturing execution system and a real time dispatch system. The manufacturing execution system tracks overall processing of semiconductor wafers and the real time dispatch system provides near real time information regarding processing of semiconductor wafers. The real time dispatcher application program interface provides a common interface for providing information to the middleware component. The work in progress application program interface provides a common interface for communicating to the fabrication facility from the middleware component. In another embodiment, the invention relates to a semiconductor fabrication architecture which includes a middleware component, a first fabrication facility, a first real time dispatcher application program interface coupled between the first fabrication facility and the middleware component, a first work in progress application program interface coupled between the fabrication facility and the middleware component, a second fabrication facility, a second real time dispatcher application program interface coupled between the second fabrication facility and the middleware component, and a second work in progress application program interface coupled between the fabrication facility and the middleware component. The fabrication facility includes a manufacturing execution system and a real time dispatch system. The manufacturing execution system tracks the overall processing of semiconductor wafers.
The real time dispatch system provides near real time information regarding processing of semiconductor wafers. The first real time dispatcher application program interface provides a common interface for providing information to the middleware component. The first work in progress application program interface provides a common interface for communicating to the first fabrication facility from the middleware component. The fabrication facility includes a manufacturing execution system and a real time dispatch system. The manufacturing execution system tracks the overall processing of semiconductor wafers and the real time dispatch system provides near real time information regarding processing of semiconductor wafers. The second real time dispatcher application program interface provides a common interface for providing information to the middleware component and the second work in progress application program interface provides a common interface for communicating to the second fabrication facility from the middleware component. BRIEF DESCRIPTION OF THE DRAWINGS The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element. FIG. 1 shows a block diagram of a semiconductor fabrication architecture including ERP integration. FIGS. 2A and 2B show a more detailed block diagram of the semiconductor fabrication architecture of FIG. 1. FIG. 3 shows a flow chart of an intrafacility shipping process flow. FIG. 4 shows a flow chart of an interfacility shipping process flow. FIG. 5 shows another interfacility shipping process flow. FIG. 6 shows a process flow for a lot history upload. FIG. 7 shows a process flow for a lot attribute upload. FIG. 8 shows a process flow for a wafer data upload. FIG. 9 shows a process flow for a post shipping lot create in a destination facility. FIG. 10 shows the process flow for creating a product that corresponds to a new ERP material. FIG. 11 shows the process flow for associating an MES route to an ERP material. FIGS. 12A and 12B show an alternate fabrication architecture. DETAILED DESCRIPTION Referring to FIG. 1, a block diagram of a semiconductor fabrication architecture 100 is shown. More specifically, the semiconductor fabrication architecture 100 includes a plurality of individual fabrication locations 110, which may be distributed across various manufacturing facilities. Each individual fabrication facility 110 represents a black box within the architecture 100. That is, data is provided to and received from the fabrication facility 110 in a format which is common to the architecture 100, whether or not this format is understood by an individual fabrication facility 110 or by machines within the individual fabrication facility 110. Each individual fabrication facility 110 includes a manufacturing execution system (MES) 120 for tracking the overall processing of semiconductor wafers as well as a Real Time Dispatch (RTD) system 122 for providing near real time information regarding the processing of the semiconductor wafers. The MES may be, for example, a manufacturing execution system such as Workstream available from Applied Materials. The real time dispatch system 122 executes at the factories at an execution level.
An APF/RTD real time dispatch system available from Brooks Automation is an example of one known RTD system 122. Each individual fabrication facility 110 includes a respective work in progress (WIP) management (WM) application program interface (API) 130 as well as a real time dispatch (RTD) application program interface (API) 132. The WM API 130 provides a common interface with which to communicate with each individual fabrication facility 110. The RTD API 132 provides a common interface from which to receive information from each individual fabrication facility 110. The individual fabrication facilities 110 communicate with each other via a middleware component 140. One example of such a middleware component is the TIBCO Rendezvous System available from TIBCO Software, Inc. A SAPPHiRE (Systematic Approach to Product Performance History and REliability Engineering) system 150 is also coupled to the middleware component 140. The SAPPHiRE system 150 may be located remotely from one or more of the individual fabrication facilities. An enterprise resource planning system 160 is also coupled to the middleware component 140. One example of an enterprise resource planning system is the ERP R/3 system available from SAP. The enterprise resource planning system 160 may be located remotely from one or more of the individual fabrication facilities as well as from the SAPPHiRE system 150. The SAPPHiRE system 150 includes a database that collects engineering data and allows users to perform yield and engineering analysis. The SAPPHiRE system 150 also allows a user to trace product failures. An inventory management system 170 is also coupled to the middleware component 140. The inventory management system 170 includes a WIP management (WM) component 172 as well as a Lot Start component 174. One example of an inventory management system is available from Triniti Corporation. The inventory management system 170 provides a centralized lot database for global visibility into all WIP moves. The inventory management system also provides the following functions: real time integration of items between the ERP system 160, the MES 120, and other systems; real time integration of bills of materials in the ERP system 160, the MES 120, and other systems; real time integration of routes from the ERP system 160 to the MES system 120 for back-end facilities; real time integration of routes from the MES system 120 to the ERP system 160 for front-end facilities; real time access to all relevant MES lot transactions; the ability to make ERP system BOM levels transparent to MES systems 120; real time updates to the ERP system 160 regarding costing data and inventory valuations; and a global real time view of inventory in the ERP system 160 and WIP in MES systems 120. The inventory management system 170 may be located remotely from one or more of the individual fabrication facilities as well as from the SAPPHiRE system 150 and the enterprise resource planning system 160. The WM component 172 provides a global view of WIP as well as static inventory. The WM component 172 also enables fallout fixing by matching WIP and inventory locations to physical lot locations. The WM component 172 is the system of record for all lot history, lot attribute and route/operation transactions performed and provides the capability to trace forward and backward from raw wafer to sort/die out.
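As one illustration of the forward and backward traceability attributed to the WM component 172 above, the following sketch maintains a parent/child relation between lots and walks it in either direction. It is a toy model under stated assumptions: the class and method names are invented here for illustration, and the actual WM database schema is not described in the text.

    from collections import defaultdict

    class LotGenealogy:
        """Minimal sketch of lot traceability: record which lots derive from
        which, then trace forward (toward sort/die out) or backward (toward
        raw wafer). Hypothetical names; not the WM component's actual schema."""

        def __init__(self):
            self._children = defaultdict(set)  # lot ID -> lot IDs derived from it
            self._parents = defaultdict(set)   # lot ID -> lot IDs it derives from

        def record_transaction(self, source_lot: str, result_lot: str) -> None:
            """Record one lot history transaction linking source to result."""
            self._children[source_lot].add(result_lot)
            self._parents[result_lot].add(source_lot)

        def trace(self, lot_id: str, forward: bool = True) -> set:
            """Return every lot reachable from lot_id in the chosen direction."""
            link = self._children if forward else self._parents
            seen, stack = set(), [lot_id]
            while stack:
                for nxt in link[stack.pop()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return seen

    g = LotGenealogy()
    g.record_transaction("RAW-WAFER-01", "FAB-LOT-17")
    g.record_transaction("FAB-LOT-17", "SORT-LOT-99")
    assert g.trace("RAW-WAFER-01") == {"FAB-LOT-17", "SORT-LOT-99"}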
The lot start component 174 provides lot start information relating to all WIP. By providing the WM API 130 and the RTD API 132, each of the components within the architecture 100 may communicate using a common language in near real time. That is, when a transaction is performed, information relating to the performance of the transaction is communicated to the other systems within the semiconductor fabrication architecture 100 as soon as the transaction occurs (there is no need to run another set of programs on a source system to obtain and transmit the data from the source system). Information from the MES 120 is interfaced to the database of the ERP system 160 via the WM component 172. The MES 120 sends all lot history, lot attribute and route operation information to the WM component 172. The WM component 172 filters, formats and forwards the data to the database of the ERP system 160. Thus, the database of the ERP system 160 reflects a subset of all MES activities implemented by the various modules of the individual fabrication facility 110. The WM API 130 subscribes to messages published by the RTD API 132 and by the WM component 172 and publishes messages back to the WM component 172. The RTD API 132 is an RTD client that publishes MES data to the middleware component 140. All lot, lot history, lot attribute, route and operation data is published via the RTD API 132. Referring to FIGS. 2A and 2B, a more detailed block diagram of the semiconductor fabrication architecture 100 is shown. More specifically, each individual fabrication facility 110 may be divided into a front end portion 200 and a back end portion 202. The front end portion 200 may be further divided into a front end MES portion 210 and a front end RTD portion 212. The back end portion 202 may be further divided into a back end MES portion 220 and a back end RTD portion 222. The front end MES portion 210 includes the MES 120 as well as a real time interceptor 230, a primary writer 232, and a front end WM API 234. The front end MES portion 210 also includes a script writer 236 and a transport layer 238. The interceptor 230 intercepts transactions from the MES 120 that occur during and after an extract of information from the MES 120. The primary writer 232 stores extracted and intercepted messages in a message buffer until the messages are transferred to the front end RTD portion 212. The primary writer 232 keeps a message only until the message has been successfully communicated to the front end RTD portion 212. The front end WM API 234 has the capability to execute remote transactions via the MES API functions. The front end RTD portion 212 includes a front end real time dispatcher 240, a secondary writer 242 and a front end RTD API 244, as well as an MDS API 246 and an MDS database 248. The front end real time dispatcher 240 prioritizes lot movement based upon predefined rules. The secondary writer 242 requests messages from the primary writer 232. The secondary writer 242 stores the messages that are received in a secondary writer message buffer. The secondary writer 242 then provides the messages received from the primary writer to the middleware component 140 via the RTD API 244 under the control of the front end real time dispatcher 240. The messages are also provided to the MDS database 248 via the MDS API 246 under the control of the front end real time dispatcher 240. The back end MES portion 220 includes the MES 120.
The back end MES portion 220 also includes a back end API 252, which includes a back end WM API 254 and a back end MES API 256. The back end RTD portion 222 includes a back end real time dispatcher 260 as well as a combination writer 262. The back end RTD portion 222 also includes a back end RTD API 264 as well as a back end MDS API 266 and a back end MDS database 268. The combination writer 262 is a combination of a primary writer and a secondary writer. Thus the combination writer 262 receives messages from the MDS database 268 and provides messages to the RTD API 264 as well as to the MDS API 266 under control of the back end real time dispatcher 260. Referring to FIGS. 3-5, a number of shipping process flows are shown. More specifically, FIG. 3 shows a flow chart of an intrafacility shipping process flow. FIG. 4 shows a flow chart of an interfacility shipping process flow. FIG. 5 shows another interfacility shipping process flow. With an intrafacility shipping process flow, a lot is shipped from one MES facility 110 to another MES facility 110 that shares the same database instance. For example, a lot is shipped from a fabrication portion of the facility 110 to a test portion of the facility, or from a fabrication portion of a facility to a bump portion of the facility. With an interfacility shipping process flow, there are two cases. An interfacility shipping process is invoked when the facilities do not share the same database instance. In one case, a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to the particular MES. In the other case, a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to another MES. Referring to FIG. 3, an intrafacility shipping process flow is shown. More specifically, when a lot is shipped from a first facility (facility 1) to another facility (facility 2), a ship lot (SHLT) transaction is initiated. A lot history WIPLTH record containing the SHLT transaction is received by the real time dispatcher 122 and is then intercepted by the RTD API 132. The RTD API 132 converts the record to XML and publishes the record to the middleware message bus 140. The WM API 130 of facility 2 subscribes to the message and initiates a wafer data upload process. (See FIG. 8). The WIP management system 172 subscribes to the message and updates its database to indicate that the lot has completed in the source facility (facility 1). If attributes associated with the lot were changed, these changed attributes are uploaded to the second facility via the WM API 130 of the second facility. The WIP management system 172 also initiates a goods receipt transaction within the ERP system 160 to place the lot in a storage location for the receiving facility. When the lot is received at the destination facility, a receive lot (RVLT) transaction is initiated by the destination facility. The receive lot transaction is received by the real time dispatcher 122 and is intercepted by the RTD API 132. The RTD API 132 publishes the updated WIPLTH record to the middleware message bus 140. The WIP management system 172 subscribes to the message and initiates a goods issue transaction to update the appropriate fields within the ERP system 160, thus completing the intrafacility shipping process. Referring to FIG. 4, the process flow for an interfacility shipping process is shown.
More specifically, a process flow is shown for when a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to the particular MES. When a lot is shipped from one facility to another facility, a ship lot (SHLT) transaction is initiated. The lot is moved to an ERP specific shipping facility and the lot status is terminated. For example, if the lot is shipped from a fab facility to a bump facility, the lot is removed from the fab facility and placed in a shipping facility, where the status of the lot is terminated. The WIPLTH record containing the SHLT transaction is received by the RTD 122 and intercepted by the RTD API 132. The RTD API 132 converts the record to an XML message and publishes the record to the middleware message bus 140. The message includes the names of the source and destination facilities. The WM API 130 of the other facility subscribes to the message and initiates a wafer data upload process. (See FIG. 8). The WIP management system 172 subscribes to the message and updates the WM database to indicate that the lot has completed in the source facility. If the attributes associated with the lot were changed, then these attributes are also uploaded. The WM system 172 also initiates a goods receipt transaction to place the lot in an ERP storage location for the destination facility. When the lot is physically shipped, a stock transfer order is completed. Then a goods issue transaction updates the ERP storage location of the lot and the WM system database. When the lot is received at the destination, a goods receipt transaction is issued to the ERP system 160. This initiates a post shipping remote create lot process. Completion of the post shipping remote create lot process completes the interfacility shipping process flow. Referring to FIG. 5, the process flow for another interfacility shipping process is shown. More specifically, a process flow is shown for when a lot is shipped from one facility that conforms to a particular MES to another facility that conforms to another MES. When a lot is shipped from one facility to another facility, a ship lot (SHLT) transaction is initiated. The lot is moved to an ERP specific shipping facility and the lot status is terminated. For example, if the lot is shipped from a fab facility to a bump facility, the lot is removed from the fab facility and placed in a shipping facility, where the status of the lot is terminated. The WIPLTH record containing the SHLT transaction is received by the RTD 122 and intercepted by the RTD API 132. The RTD API 132 converts the record to an XML message and publishes the record to the middleware message bus 140. The message includes the names of the source and destination facilities. The WM API 130 of the other facility subscribes to the message and initiates a wafer data upload process. (See FIG. 8). The WIP management system 172 subscribes to the message and updates the WM database to indicate that the lot has completed in the source facility. If the attributes associated with the lot were changed, then these attributes are also uploaded. The WM system 172 also initiates a goods receipt transaction to place the lot in an ERP storage location for the destination facility. When the lot is physically shipped, a stock transfer order is completed. Then a goods issue transaction updates the ERP storage location of the lot and the WM system database.
When the lot is received at the destination, a goods receipt transaction is issued to the ERP system 160. This initiates a post shipping remote create lot process. Completion of the post shipping remote create lot process completes the interfacility shipping process flow. Referring to FIG. 6, a process flow for a lot history upload is shown. More specifically, the WM system 172 receives and stores all lot history transactions in its database. If a transaction is relevant for costing, then the WM system 172 uploads the pertinent lot history data to the ERP system 160. For example, the WM system 172 uploads pertinent data to the ERP system 160 for move out (MVOU) transactions that occur at reporting points. Likewise, the WM system 172 uploads data for ship lot (SHLT) transactions that indicate movement across bill of material (BOM) levels. However, the actual transactions are not necessarily uploaded to the ERP system 160. When uploading lot history, a lot based transaction occurs in the facility MES 120 and is written to a WIP Lot History (WIPLTH) table. The RTD API 132 intercepts the lot based transaction via the RTD 122. The RTD API 132 converts the record to an XML message and publishes the message to the message bus 140. The WM system 172 subscribes to the message and updates its database. If the transaction is relevant for costing, the WM system 172 sends the relevant data to the ERP 160. The lot history upload process then completes. Referring to FIG. 7, a process flow for a lot attribute upload is shown. More specifically, the WM system 172 receives and stores all lot attribute transactions in its database. If a transaction is relevant for the ERP system 160, then the WM system 172 uploads the pertinent lot attribute data to the ERP system 160. However, the actual transactions are not necessarily uploaded to the ERP system 160. When uploading lot attribute data, a lot attribute is set or changed in the MES 120 via a set lot attribute (SLTA) transaction, which in turn updates a WIP lot attribute (WIPLTA) table. The RTD API 132 intercepts the lot attribute transaction via the RTD 122. The RTD API 132 converts the lot attribute transaction record to an XML message and publishes all lot attribute messages to the message bus 140. The WM system 172 subscribes to the message, writes the set lot attribute transaction to its lot history table and updates the value of the attribute in its lot attribute table. If the transaction is relevant to the ERP system 160, then the WM system 172 sends the pertinent data to the ERP system 160. The lot attribute upload process then completes. Referring to FIG. 8, a process flow for a wafer data upload is shown. More specifically, whenever a lot is shipped from an MES facility 120 (e.g., intrafacility shipping, interfacility shipping), the wafer scribes and virtual wafer identifiers (i.e., the original wafer slot positions) are sent to the WM system 172. When uploading wafer data, a lot is shipped from an MES facility 120. A ship lot (SHLT) transaction is received by the RTD 122 and intercepted by the RTD API 132. The RTD API 132 publishes the updated WIPLTH record to the message bus 140. This record includes the names of the source and destination facilities. The WM API 130 subscribes to the WIPLTH record and filters the record to identify the SHLT transaction. The WM API 130 then sends a request for wafer scribes and virtual wafer IDs to the MES 120 via the SR Tester 236 and the TL 238. The TL 238 issues a remote MES transaction to obtain the wafer data.
The WM API 130 then receives and publishes the wafer data to the message bus 140. The WM system 172 subscribes to the wafer data message and updates its database. The wafer data upload process then completes. The wafer data is not actually sent from the WM system 172 to the ERP system 160; rather, the wafer data is stored in the WM system database until the WM system 172 publishes the data in response to a wafer data request for that lot. The ERP system 160 then subscribes to the published data.
Referring to FIG. 9, a process flow for a post shipping lot create in a destination facility is shown. More specifically, whenever a lot is shipped from one MES facility 120 to another MES facility 120 that does not share the same database instance, the lot is created in the receiving facility. The process begins when a goods receipt transaction is initiated within the ERP 160. The process completes when the lot is created in the destination facility 120, the lot attributes are copied and the WM database is updated to reflect the new lot location.
When a lot is shipped from an MES facility 120 to another MES facility 120 that does not share the same database instance (e.g., interfacility shipping of semi-finished goods), a ship lot (SHLT) transaction is initiated. The lot is shipped from the source facility to an ERP specific shipping facility and the lot status is marked as terminated at the shipping facility. When the lot arrives at the destination facility 120, an ERP goods receipt transaction is issued for the lot and published to the message bus 140. The WM system 172 subscribes to the message, updates the lot status information in its database for that lot and publishes the lot ID, the product ID, the operation, the lot quantity, the lot owner, a lot indicator and attributes in a lot create message to the message bus 140. The WM API 130 subscribes to the lot create message and sends a request to the MES 120 via the SR Tester 236 and the TL 238. The TL 238 issues a remote create lot transaction. The lot is then created in the destination facility 120 and the lot record WIPLOT, lot history WIPLTH and lot attribute WIPLTA tables of the destination facility are updated with records for the new lot. The WIPLTA table is also populated with attributes associated with the lot in the source facility. The remote create lot transaction is received by the real time dispatcher 122 and intercepted by the RTD API 132. The RTD API 132 publishes the create lot transaction in the WIPLTH record to the message bus 140. The WM system 172 subscribes to the message and updates the lot status in the WM system 172 to reflect the new facility. The process then completes.
Referring to FIG. 10, the process flow for creating a product that corresponds to a new ERP material is shown. More specifically, a new material is created in an ERP message and published to the message bus 140. The WM system 172 subscribes to the message and publishes a product create message to the message bus 140. The WM API 130 subscribes to the message and instructs the MES 120 via the SR tester 236 and the TL 238 to initiate a remote update product transaction (RUPR). The remote update product transaction inserts an ERP material ID into an MES product table of the MES 120. The ERP material ID is used for associating MES routes with ERP materials. This ERP material ID is not used to manufacture lots within the MES 120.
Referring to FIG. 11, the process flow for associating an MES route to an ERP material is shown.
The monetary value of a lot is based in part upon the position of the lot within a manufacturing route. Thus, for the ERP system 160 to accurately cost materials, each ERP material should be associated with a routing that includes reporting points. ERP routings are automatically uploaded from the facility routes. This flow does not address the MES route update or on-demand routing updates within the ERP system 160.
An MES administrator is notified that a new product has been created (see, e.g., FIG. 10) or that a route has been modified in a way that impacts the ERP costing and inventory. For a route that has been modified in a way that impacts the ERP costing and inventory, the MES administrator initiates an associate route to product (ARTP) transaction to associate the ERP material to an MES route. The ARTP transaction updates the WIP product description WIPPRD table in the MES 120 to map the route to the new material. When a new product is created (or after the ARTP transaction is initiated), the MES administrator initiates an update product (UPRD) transaction. The update product transaction identifies the ERP material. Submitting an update product transaction with an ERP material type triggers a route upload to the WM system 172. When the route upload occurs, the RTD API 132 receives a WIP product WIPPRD record from the real time dispatcher 122, converts the record to XML and publishes the message to the message bus 140. The WM API 130 subscribes to the message and forwards the request to the MES 120 via the SR Tester 236 and the TL 238. The WM API 130 receives the route information, converts the route information to XML and publishes the route information to the message bus 140. The routing information is provided in a plurality of records: a WIPPRD record, which includes a product ID, a facility, and an ERP material ID; a WIP route information (WIPRTE) record, which includes the route name, the route description and a BOM level for the route; the sequenced operation steps of the route (WIPRTO records), which include the route name and operation for every operation on the route; and WIP operation product (WIPOPR) records, which include the operation number, short and long description, work centers and reporting point indicators for each operation on the route. The WM system 172 subscribes to these messages, populates the relevant WM database tables and uploads the route, operation, BOM levels, work centers and reporting points to the ERP system 160. This completes the route upload process.
Referring again to FIG. 2, the various MES facilities 120 are initialized to correspond to a plurality of ERP related functions. These facilities are initialized to include an ERP material create function, a routing upload function, a product validation at lot create function, a product mapping function and a shipping facility function.
The material create function is used when a new material is created within the ERP system 160; that material is then associated with an MES route. This association enables the ERP system 160 to accurately cost a product based upon the location of the product in the processing stream. Relevant points within the route include the BOM level, the reporting points and the work centers. The MES system 120 passes this information to the ERP system 160. For the BOM level, this is accomplished via the route.
For the reporting points and the work centers, this is accomplished via the operation.
The BOM level for a lot does not change as long as the lot remains within the same facility 110. Thus, the BOM level is associated with an MES route. There are at least three BOM levels: a FAB BOM level (which corresponds to a processed wafer), a BUMP BOM level (which corresponds to a bumped wafer) and a SORT BOM level (which corresponds to a sorted die). When setting up routes with BOM levels, an update user defined facility fields (UUFF) transaction specifies a user defined field as the BOM level field for all routes. The update user route field (UURF) transaction defines the value of the BOM level for each route and the BOM level is defined for each route within a facility.
The work centers and reporting points may vary with each MES operation. Thus, two user defined fields are associated with the operations and are used to specify these values. To set up the MES operations with work centers and reporting points, the UUFF transaction is used to specify a user defined field as a work center field and a user defined field as an ERP reporting field for operations. An update user defined operation field (UUOF) transaction is used to define the value of the work center and the reporting point for each operation. If the operation is an ERP reporting point, the value is yes (Y); otherwise the value is no (N). The UUOF transaction is repeated for each operation within a facility.
During lot create, if a product is salable, then a check is performed to confirm that the MES product corresponds to a valid ERP material. To determine whether a product is salable, an update table entry general (UTEG) transaction is used to update a general tables (GTS) owners table to include a salable indicator field. The GTS owners table is updated to specify a value of 1 for owner codes engineering (ENG) and product (PROD) (i.e., salable vs. not salable).
Other Embodiments
The present invention is well adapted to attain the advantages mentioned as well as others inherent therein. While the present invention has been depicted, described, and is defined by reference to particular embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts. The depicted and described embodiments are examples only, and are not exhaustive of the scope of the invention.
Also for example, referring again to FIG. 11, an alternate route upload process may be used. In the alternate route upload process, an ERP material is created in the ERP system 160. The WM system 172 publishes the ERP material to the middleware component 140. The WM API 130 receives the message and initiates a remote transaction (RSAP) to store the ERP material (referred to as an ERP ID) in an MES database table (called the ERP PRODUCTS database). The association between the ERP ID and the MES product is made via a web based product mapping application. With the product mapping application, a user associates the ERP ID with an MES product, routes and other characteristics.
Once the mapping association is completed, the user can select from a list of ERP IDs and trigger a route upload from the MES 120 to the ERP 160 for every selected ID by actuating a Route Upload button on the product mapping application.
Once triggered, the product mapping application publishes a message for a route upload for the selected ERP ID to the middleware component 140. The message contains the ERP ID and its associated product. The WM API 130 receives the message and starts the upload process by initiating a remote transaction. Upon completion of that remote transaction, the WM API 130 initiates another remote transaction (RGOR) to obtain all operations for each route returned by the first remote transaction. Once the WM API 130 has all of the operations for each route, the WM API 130 publishes all routes and operations for an ERP ID to the middleware component 140. The WM system 172 receives the message and performs the appropriate actions to update its database tables and uploads the route and operations, the BOM and the work center information to the ERP system 160.
For example, referring to FIGS. 12A and 12B, an alternate fabrication architecture may include a front end WM API 1210 and a back end WM API 1220 which are coupled directly to the MES 120.
Also for example, the above-discussed embodiments include software modules that perform certain tasks. The software modules discussed herein may include script, batch, or other executable files. The software modules may be stored on a machine-readable or computer-readable storage medium such as a disk drive. Storage devices used for storing software modules in accordance with an embodiment of the invention may be magnetic floppy disks, hard disks, or optical discs such as CD-ROMs or CD-Rs, for example. A storage device used for storing firmware or hardware modules in accordance with an embodiment of the invention may also include a semiconductor-based memory, which may be permanently, removably or remotely coupled to a microprocessor/memory system. Thus, the modules may be stored within a computer system memory to configure the computer system to perform the functions of the module. Other new and various types of computer-readable storage media may be used to store the modules discussed herein. Additionally, those skilled in the art will recognize that the separation of functionality into modules is for illustrative purposes. Alternative embodiments may merge the functionality of multiple modules into a single module or may impose an alternate decomposition of functionality of modules. For example, a software module for calling sub-modules may be decomposed so that each sub-module performs its function and passes control directly to another sub-module.
Consequently, the invention is intended to be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.
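The flows above repeatedly use the same pattern: the RTD API 132 intercepts a lot-based record, converts it to an XML message and publishes it to the middleware message bus 140, where subscribers such as the WM API 130 and the WM system 172 filter the messages by transaction type. The following Python sketch illustrates that pattern in miniature; the MessageBus class, the dictionary-based record layout and all field names are hypothetical stand-ins, since the actual RTD API, WIPLTH schema and middleware product are not specified at this level of the description.

```python
# Minimal sketch of the intercept/convert/publish pattern described above.
# All class and field names are illustrative assumptions.
import xml.etree.ElementTree as ET
from collections import defaultdict


class MessageBus:
    """Toy stand-in for the middleware message bus (publish/subscribe)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)


def wiplth_record_to_xml(record):
    """Convert a WIPLTH-style lot history record (a dict here) to XML."""
    root = ET.Element("WIPLTH")
    for field, value in record.items():
        ET.SubElement(root, field).text = str(value)
    return ET.tostring(root, encoding="unicode")


# Example: an intercepted ship lot (SHLT) transaction is published, and a
# hypothetical subscriber filters for SHLT messages (as the WM API does).
bus = MessageBus()

def on_lot_history(xml_message):
    root = ET.fromstring(xml_message)
    if root.findtext("transaction") == "SHLT":
        print("SHLT seen; source=%s dest=%s" % (
            root.findtext("source_facility"), root.findtext("dest_facility")))

bus.subscribe("WIPLTH", on_lot_history)
bus.publish("WIPLTH", wiplth_record_to_xml({
    "transaction": "SHLT",
    "lot_id": "LOT-0001",
    "source_facility": "FAB1",
    "dest_facility": "BUMP1",
}))
```

In the actual system the bus would be a commercial middleware product and the subscribers would be separate processes, but the topic-based filtering shown here mirrors how the WM API 130 filters WIPLTH records to identify SHLT transactions.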
Technologies for allocating ephemeral data storage among managed nodes include an orchestrator server to receive ephemeral data storage availability information from the managed nodes, receive a request from a first managed node of the managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads, determine, as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage, and allocate, in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads. Other embodiments are also described and claimed.
WHAT IS CLAIMED IS:
1. An orchestrator server to manage the allocation of ephemeral data storage among a plurality of managed nodes, the orchestrator server comprising: a network communicator to receive ephemeral data storage availability information from the plurality of managed nodes, wherein the ephemeral data storage availability information is indicative of at least an amount of ephemeral data storage available for allocation in the corresponding managed node, and receive a request from a first managed node of the plurality of managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads; an ephemeral data storage request manager to determine, as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage and allocate, in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes of the plurality of managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads.
2. The orchestrator server of claim 1, wherein the network communicator is further to send, to the first managed node, a notification indicative of the amount of allocated ephemeral data storage.
3. The orchestrator server of claim 1, wherein to receive a request from a first managed node comprises to receive a request indicative of a type of ephemeral data storage to be allocated, wherein the type is indicative of a performance of a data storage medium to provide the ephemeral data storage.
4. The orchestrator server of claim 1, wherein the network communicator is further to send, in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of a type of the allocated ephemeral data storage, wherein the type is indicative of a performance of a data storage medium to provide the allocated ephemeral data storage.
5. The orchestrator server of claim 1, wherein the network communicator is further to send, in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of one or more addresses of the allocated ephemeral data storage.
6. The orchestrator server of claim 1, wherein to receive the ephemeral data storage availability information comprises to receive a deallocation notification that ephemeral data storage has been deallocated by at least one of the managed nodes.
7. The orchestrator server of claim 1, wherein to receive the ephemeral data storage availability information comprises to receive information indicative of a type of available ephemeral data storage, wherein the type is indicative of a performance of a data storage device associated with the available ephemeral data storage.
8. The orchestrator server of claim 1, wherein to determine the availability of the requested amount of ephemeral data storage comprises to compare the requested amount of ephemeral data storage to the ephemeral data storage availability information.
9. The orchestrator server of claim 8, wherein to determine the availability of the requested amount of ephemeral data storage further comprises to compare a requested type of ephemeral data storage to one or more types of ephemeral data storage indicated in the ephemeral data storage availability information.
10. The orchestrator server of claim 1, wherein to allocate the ephemeral data storage comprises to send a notification to the one or more other managed nodes to allocate at least a portion of the requested amount of ephemeral data storage.
11. The orchestrator server of claim 10, wherein to send a notification comprises to send multiple notifications to each of multiple managed nodes to allocate portions of the requested amount of ephemeral data storage.
12. The orchestrator server of claim 10, wherein to send the notification to the one or more other managed nodes further comprises to send a notification of a requested type of ephemeral data storage to the one or more other managed nodes.
13. A method for managing the allocation of ephemeral data storage among a plurality of managed nodes, the method comprising: receiving, by an orchestrator server, ephemeral data storage availability information from the plurality of managed nodes, wherein the ephemeral data storage availability information is indicative of at least an amount of ephemeral data storage available for allocation in the corresponding managed node; receiving, by the orchestrator server, a request from a first managed node of the plurality of managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads; determining, by the orchestrator server and as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage; and allocating, by the orchestrator server and in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes of the plurality of managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads.
14. The method of claim 13, further comprising sending, by the orchestrator server to the first managed node, a notification indicative of the amount of allocated ephemeral data storage.
15. The method of claim 13, wherein receiving a request from a first managed node comprises receiving a request indicative of a type of ephemeral data storage to be allocated, wherein the type is indicative of a performance of a data storage medium to provide the ephemeral data storage.
16. The method of claim 13, further comprising sending, by the orchestrator server and in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of a type of the allocated ephemeral data storage, wherein the type is indicative of a performance of a data storage medium to provide the allocated ephemeral data storage.
17. The method of claim 13, further comprising sending, by the orchestrator server in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of one or more addresses of the allocated ephemeral data storage.
18. The method of claim 13, wherein receiving the ephemeral data storage availability information comprises receiving a deallocation notification that ephemeral data storage has been deallocated by at least one of the managed nodes.
19. The method of claim 13, wherein receiving the ephemeral data storage availability information comprises receiving information indicative of a type of available ephemeral data storage, wherein the type is indicative of a performance of a data storage medium associated with the available ephemeral data storage.
20. The method of claim 13, wherein determining the availability of the requested amount of ephemeral data storage comprises comparing the requested amount of ephemeral data storage to the ephemeral data storage availability information.
21. The method of claim 20, wherein determining the availability of the requested amount of ephemeral data storage further comprises comparing a requested type of ephemeral data storage to one or more types of ephemeral data storage indicated in the ephemeral data storage availability information.
22. The method of claim 13, wherein allocating the ephemeral data storage comprises sending a notification to the one or more other managed nodes to allocate at least a portion of the requested amount of ephemeral data storage.
23. The method of claim 22, wherein sending a notification comprises sending multiple notifications to each of multiple managed nodes to allocate portions of the requested amount of ephemeral data storage.
24. One or more computer-readable storage media comprising a plurality of instructions that, when executed by an orchestrator server, cause the orchestrator server to perform the method of any of claims 13-23.
25. An orchestrator server comprising means for performing the method of any of claims 13-23.
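Claims 8-9 and 20-21 recite the determination step as a comparison of the requested amount (and, optionally, the requested type) of ephemeral data storage against the reported availability information. The following Python sketch shows one plausible form of that comparison; the function name and the data layout are assumptions for illustration, as the claims do not fix any particular representation.

```python
# Minimal sketch of the availability determination of claims 8-9 / 20-21:
# compare a requested amount (and optionally a requested type) of ephemeral
# data storage against availability reported by the other managed nodes.
# The data layout is an assumption; the claims do not fix a representation.

def find_donor_nodes(availability, requested_amount, requested_type=None):
    """Return (node_id, amount) portions covering the request, or None.

    `availability` maps node_id -> list of (storage_type, free_bytes)
    entries, as reported in ephemeral data storage availability info.
    """
    remaining = requested_amount
    allocations = []
    for node_id, entries in availability.items():
        for storage_type, free_bytes in entries:
            if requested_type is not None and storage_type != requested_type:
                continue  # claims 9/21: the requested type must match
            take = min(free_bytes, remaining)
            if take > 0:
                allocations.append((node_id, take))
                remaining -= take
            if remaining == 0:
                return allocations  # request can be satisfied
    return None  # insufficient availability of the requested type


# Usage: 48 GB of "ssd"-class ephemeral storage spread over two donor nodes.
GB = 1024 ** 3
availability = {
    "node-1252": [("ssd", 32 * GB)],
    "node-1254": [("ssd", 64 * GB), ("hdd", 128 * GB)],
}
print(find_donor_nodes(availability, 48 * GB, requested_type="ssd"))
```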
TECHNOLOGIES FOR ALLOCATING EPHEMERAL DATA STORAGE AMONG MANAGED NODES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to U.S. Utility Patent Application Serial No. 15/395,550, entitled "TECHNOLOGIES FOR ALLOCATING EPHEMERAL DATA STORAGE AMONG MANAGED NODES," which was filed on December 30, 2016, and which claims priority to U.S. Provisional Patent Application No. 62/365,969, filed July 22, 2016; U.S. Provisional Patent Application No. 62/376,859, filed August 18, 2016; and U.S. Provisional Patent Application No. 62/427,268, filed November 29, 2016.
BACKGROUND
[0002] In a typical cloud-based computing environment (e.g., a data center), multiple compute nodes may execute workloads (e.g., processes, applications, services, etc.) on behalf of customers. During the execution of the workloads, the amount of data storage capacity to be used for ephemeral data (e.g., cache or other data temporarily used by an application to perform operations) varies with the number and types of workloads executed by each compute node. Typically, such data is local to each compute node, either in one or more local solid state drives (SSD), hard disk drives (HDD), or other local data storage device and may be addressable in blocks (e.g., sets of bytes). To guard against the possibility of having inadequate local data storage for the ephemeral data storage needs of the workloads, each compute node is typically equipped with a fixed amount of data storage capacity to meet the peak amount that may occasionally be requested by the workloads. However, given the variations in the ephemeral data storage needs of the workloads as they are executed, the capacity of the local data storage devices may go unused for a significant percentage of the time, resulting in wasted resources in the data center.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
[0004] FIG. 1 is a diagram of a conceptual overview of a data center in which one or more techniques described herein may be implemented according to various embodiments;
[0005] FIG. 2 is a diagram of an example embodiment of a logical configuration of a rack of the data center of FIG. 1;
[0006] FIG. 3 is a diagram of an example embodiment of another data center in which one or more techniques described herein may be implemented according to various embodiments;
[0007] FIG. 4 is a diagram of another example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;
[0008] FIG. 5 is a diagram of a connectivity scheme representative of link-layer connectivity that may be established among various sleds of the data centers of FIGS. 1, 3, and 4;
[0009] FIG. 6 is a diagram of a rack architecture that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1-4 according to some embodiments;
[0010] FIG. 7 is a diagram of an example embodiment of a sled that may be used with the rack architecture of FIG. 6;
[0011] FIG. 8 is a diagram of an example embodiment of a rack architecture to provide support for sleds featuring expansion capabilities;
[0012] FIG. 9 is a diagram of an example embodiment of a rack implemented according to the rack architecture of FIG. 8;
[0013] FIG. 10 is a diagram of an example embodiment of a sled designed for use in conjunction with the rack of FIG. 9;
[0014] FIG. 11 is a diagram of an example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;
[0015] FIG. 12 is a simplified block diagram of at least one embodiment of a system for managing the allocation of ephemeral data storage among a set of managed nodes on an as-requested basis;
[0016] FIG. 13 is a simplified block diagram of at least one embodiment of an orchestrator server of the system of FIG. 12;
[0017] FIG. 14 is a simplified block diagram of at least one embodiment of an environment that may be established by the orchestrator server of FIGS. 12 and 13;
[0018] FIG. 15 is a simplified block diagram of at least one embodiment of an environment that may be established by a managed node of FIG. 12;
[0019] FIGS. 16-17 are a simplified flow diagram of at least one embodiment of a method for managing the allocation of ephemeral data storage among a set of managed nodes that may be performed by the orchestrator server of FIGS. 12-14;
[0020] FIGS. 18-19 are a simplified flow diagram of at least one embodiment of a method for requesting the allocation of ephemeral data storage that may be performed by a managed node of FIGS. 12 and 14; and
[0021] FIGS. 20-21 are a simplified flow diagram of at least one embodiment of a method for responding to a request to allocate ephemeral data storage that may be performed by a managed node of FIGS. 12 and 14.
DETAILED DESCRIPTION OF THE DRAWINGS
[0022] While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
[0023] References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
[0024] The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof.
The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
[0025] In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
[0026] FIG. 1 illustrates a conceptual overview of a data center 100 that may generally be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 1, data center 100 may generally contain a plurality of racks, each of which may house computing equipment comprising a respective set of physical resources. In the particular non-limiting example depicted in FIG. 1, data center 100 contains four racks 102A to 102D, which house computing equipment comprising respective sets of physical resources 105A to 105D. According to this example, a collective set of physical resources 106 of data center 100 includes the various sets of physical resources 105A to 105D that are distributed among racks 102A to 102D. Physical resources 106 may include resources of multiple types, such as - for example - processors, co-processors, accelerators, field-programmable gate arrays (FPGAs), memory, and storage. The embodiments are not limited to these examples.
[0027] The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled while near memory, such as dual in-line memory modules (DIMMs), is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance.
Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
[0028] Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture ("fabric") that supports multiple other network architectures including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center 100 may, in use, pool resources, such as memory, accelerators (e.g., graphics accelerators, FPGAs, application specific integrated circuits (ASICs), etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives usage information for the various resources, predicts resource usage for different types of workloads based on past resource usage, and dynamically reallocates the resources based on this information.
[0029] The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically-accessed, and to accept and house robotically-manipulatable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.
[0030] FIG. 2 illustrates an exemplary logical configuration of a rack 202 of the data center 100. As shown in FIG. 2, rack 202 may generally house a plurality of sleds, each of which may comprise a respective set of physical resources. In the particular non-limiting example depicted in FIG. 2, rack 202 houses sleds 204-1 to 204-4 comprising respective sets of physical resources 205-1 to 205-4, each of which constitutes a portion of the collective set of physical resources 206 comprised in rack 202. With respect to FIG. 1, if rack 202 is representative of - for example - rack 102A, then physical resources 206 may correspond to the physical resources 105A comprised in rack 102A. In the context of this example, physical resources 105A may thus be made up of the respective sets of physical resources, including physical storage resources 205-1, physical accelerator resources 205-2, physical memory resources 205-3, and physical compute resources 205-4 comprised in the sleds 204-1 to 204-4 of rack 202. The embodiments are not limited to this example.
Each sled may contain a pool of each of the various types of physical resources (e.g., compute, memory, accelerator, storage). By having robotically accessible and robotically manipulatable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate.
[0031] FIG. 3 illustrates an example of a data center 300 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. In the particular non-limiting example depicted in FIG. 3, data center 300 comprises racks 302-1 to 302-32. In various embodiments, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate various access pathways. For example, as shown in FIG. 3, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate access pathways 311A, 311B, 311C, and 311D. In some embodiments, the presence of such access pathways may generally enable automated maintenance equipment, such as robotic maintenance equipment, to physically access the computing equipment housed in the various racks of data center 300 and perform automated maintenance tasks (e.g., replace a failed sled, upgrade a sled). In various embodiments, the dimensions of access pathways 311A, 311B, 311C, and 311D, the dimensions of racks 302-1 to 302-32, and/or one or more other aspects of the physical layout of data center 300 may be selected to facilitate such automated operations. The embodiments are not limited in this context.
[0032] FIG. 4 illustrates an example of a data center 400 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 4, data center 400 may feature an optical fabric 412. Optical fabric 412 may generally comprise a combination of optical signaling media (such as optical cabling) and optical switching infrastructure via which any particular sled in data center 400 can send signals to (and receive signals from) each of the other sleds in data center 400. The signaling connectivity that optical fabric 412 provides to any given sled may include connectivity both to other sleds in a same rack and sleds in other racks. In the particular non-limiting example depicted in FIG. 4, data center 400 includes four racks 402A to 402D. Racks 402A to 402D house respective pairs of sleds 404A-1 and 404A-2, 404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus, in this example, data center 400 comprises a total of eight sleds. Via optical fabric 412, each such sled may possess signaling connectivity with each of the seven other sleds in data center 400. For example, via optical fabric 412, sled 404A-1 in rack 402A may possess signaling connectivity with sled 404A-2 in rack 402A, as well as the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1, and 404D-2 that are distributed among the other racks 402B, 402C, and 402D of data center 400. The embodiments are not limited to this example.
[0033] FIG. 5 illustrates an overview of a connectivity scheme 500 that may generally be representative of link-layer connectivity that may be established in some embodiments among the various sleds of a data center, such as any of example data centers 100, 300, and 400 of FIGS. 1, 3, and 4. Connectivity scheme 500 may be implemented using an optical fabric that features a dual-mode optical switching infrastructure 514.
Dual-mode optical switching infrastructure 514 may generally comprise a switching infrastructure that is capable of receiving communications according to multiple link-layer protocols via a same unified set of optical signaling media, and properly switching such communications. In various embodiments, dual-mode optical switching infrastructure 514 may be implemented using one or more dual-mode optical switches 515. In various embodiments, dual-mode optical switches 515 may generally comprise high-radix switches. In some embodiments, dual-mode optical switches 515 may comprise multi-ply switches, such as four-ply switches. In various embodiments, dual-mode optical switches 515 may feature integrated silicon photonics that enable them to switch communications with significantly reduced latency in comparison to conventional switching devices. In some embodiments, dual-mode optical switches 515 may constitute leaf switches 530 in a leaf-spine architecture additionally including one or more dual-mode optical spine switches 520.
[0034] In various embodiments, dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, Infiniband) via optical signaling media of an optical fabric. As reflected in FIG. 5, with respect to any particular pair of sleds 504A and 504B possessing optical signaling connectivity to the optical fabric, connectivity scheme 500 may thus provide support for link-layer connectivity via both Ethernet links and HPC links. Thus, both Ethernet and HPC communications can be supported by a single high-bandwidth, low-latency switch fabric. The embodiments are not limited to this example.
[0035] FIG. 6 illustrates a general overview of a rack architecture 600 that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1 to 4 according to some embodiments. As reflected in FIG. 6, rack architecture 600 may generally feature a plurality of sled spaces into which sleds may be inserted, each of which may be robotically-accessible via a rack access region 601. In the particular non-limiting example depicted in FIG. 6, rack architecture 600 features five sled spaces 603-1 to 603-5. Sled spaces 603-1 to 603-5 feature respective multi-purpose connector modules (MPCMs) 616-1 to 616-5.
[0036] FIG. 7 illustrates an example of a sled 704 that may be representative of a sled of such a type. As shown in FIG. 7, sled 704 may comprise a set of physical resources 705, as well as an MPCM 716 designed to couple with a counterpart MPCM when sled 704 is inserted into a sled space such as any of sled spaces 603-1 to 603-5 of FIG. 6. Sled 704 may also feature an expansion connector 717. Expansion connector 717 may generally comprise a socket, slot, or other type of connection element that is capable of accepting one or more types of expansion modules, such as an expansion sled 718. By coupling with a counterpart connector on expansion sled 718, expansion connector 717 may provide physical resources 705 with access to supplemental computing resources 705B residing on expansion sled 718. The embodiments are not limited in this context.
[0037] FIG. 8 illustrates an example of a rack architecture 800 that may be representative of a rack architecture that may be implemented in order to provide support for sleds featuring expansion capabilities, such as sled 704 of FIG. 7.
In the particular non-limiting example depicted in FIG. 8, rack architecture 800 includes seven sled spaces 803-1 to 803-7, which feature respective MPCMs 816-1 to 816-7. Sled spaces 803-1 to 803-7 include respective primary regions 803-1A to 803-7A and respective expansion regions 803-1B to 803-7B. With respect to each such sled space, when the corresponding MPCM is coupled with a counterpart MPCM of an inserted sled, the primary region may generally constitute a region of the sled space that physically accommodates the inserted sled. The expansion region may generally constitute a region of the sled space that can physically accommodate an expansion module, such as expansion sled 718 of FIG. 7, in the event that the inserted sled is configured with such a module.
[0038] FIG. 9 illustrates an example of a rack 902 that may be representative of a rack implemented according to rack architecture 800 of FIG. 8 according to some embodiments. In the particular non-limiting example depicted in FIG. 9, rack 902 features seven sled spaces 903-1 to 903-7, which include respective primary regions 903-1A to 903-7A and respective expansion regions 903-1B to 903-7B. In various embodiments, temperature control in rack 902 may be implemented using an air cooling system. For example, as reflected in FIG. 9, rack 902 may feature a plurality of fans 919 that are generally arranged to provide air cooling within the various sled spaces 903-1 to 903-7. In some embodiments, the height of the sled space is greater than the conventional "1U" server height. In such embodiments, fans 919 may generally comprise relatively slow, large diameter cooling fans as compared to fans used in conventional rack configurations. Running larger diameter cooling fans at lower speeds may increase fan lifetime relative to smaller diameter cooling fans running at higher speeds while still providing the same amount of cooling. The sleds are physically shallower than conventional rack dimensions. Further, components are arranged on each sled to reduce thermal shadowing (i.e., not arranged serially in the direction of air flow). As a result, the wider, shallower sleds allow for an increase in device performance because the devices can be operated at a higher thermal envelope (e.g., 250W) due to improved cooling (i.e., no thermal shadowing, more space between devices, more room for larger heat sinks, etc.).
[0039] MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. The embodiments are not limited to this example.
[0040] MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as - or similar to - dual-mode optical switching infrastructure 514 of FIG. 5.
In various embodiments, optical connectors contained in MPCMs 916-1 to 916-7 may be designed to couple with counterpart optical connectors contained in MPCMs of inserted sleds to provide such sleds with optical signaling connectivity to dual-mode optical switching infrastructure 914 via respective lengths of optical cabling 922-1 to 922-7. In some embodiments, each such length of optical cabling may extend from its corresponding MPCM to an optical interconnect loom 923 that is external to the sled spaces of rack 902. In various embodiments, optical interconnect loom 923 may be arranged to pass through a support post or other type of load-bearing element of rack 902. The embodiments are not limited in this context. Because inserted sleds connect to an optical switching infrastructure via MPCMs, the resources typically spent in manually configuring the rack cabling to accommodate a newly inserted sled can be saved.
[0041] FIG. 10 illustrates an example of a sled 1004 that may be representative of a sled designed for use in conjunction with rack 902 of FIG. 9 according to some embodiments. Sled 1004 may feature an MPCM 1016 that comprises an optical connector 1016A and a power connector 1016B, and that is designed to couple with a counterpart MPCM of a sled space in conjunction with insertion of MPCM 1016 into that sled space. Coupling MPCM 1016 with such a counterpart MPCM may cause power connector 1016B to couple with a power connector comprised in the counterpart MPCM. This may generally enable physical resources 1005 of sled 1004 to source power from an external source, via power connector 1016B and power transmission media 1024 that conductively couples power connector 1016B to physical resources 1005.
[0042] Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry that is capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of FIG. 9. In some embodiments, dual-mode optical network interface circuitry 1026 may be capable both of Ethernet protocol communications and of communications according to a second, high-performance protocol. In various embodiments, dual-mode optical network interface circuitry 1026 may include one or more optical transceiver modules 1027, each of which may be capable of transmitting and receiving optical signals over each of one or more optical channels. The embodiments are not limited in this context.
[0043] Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between optical cabling of the sled and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with the physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250W), as described above with reference to FIG. 9, in some embodiments, a sled may include one or more additional features to facilitate air cooling, such as a heat pipe and/or heat sinks arranged to dissipate heat generated by physical resources 1005.
It is worthy of note that although the example sled 1004 depicted in FIG. 10 does not feature an expansion connector, any given sled that features the design elements of sled 1004 may also feature an expansion connector according to some embodiments. The embodiments are not limited in this context.
[0044] FIG. 11 illustrates an example of a data center 1100 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As reflected in FIG. 11, a physical infrastructure management framework 1150A may be implemented to facilitate management of a physical infrastructure 1100A of data center 1100. In various embodiments, one function of physical infrastructure management framework 1150A may be to manage automated maintenance functions within data center 1100, such as the use of robotic maintenance equipment to service computing equipment within physical infrastructure 1100A. In some embodiments, physical infrastructure 1100A may feature an advanced telemetry system that performs telemetry reporting that is sufficiently robust to support remote automated management of physical infrastructure 1100A. In various embodiments, telemetry information provided by such an advanced telemetry system may support features such as failure prediction/prevention capabilities and capacity planning capabilities. In some embodiments, physical infrastructure management framework 1150A may also be configured to manage authentication of physical infrastructure components using hardware attestation techniques. For example, robots may verify the authenticity of components before installation by analyzing information collected from a radio frequency identification (RFID) tag associated with each component to be installed. The embodiments are not limited in this context.
[0045] As shown in FIG. 11, the physical infrastructure 1100A of data center 1100 may comprise an optical fabric 1112, which may include a dual-mode optical switching infrastructure 1114. Optical fabric 1112 and dual-mode optical switching infrastructure 1114 may be the same as - or similar to - optical fabric 412 of FIG. 4 and dual-mode optical switching infrastructure 514 of FIG. 5, respectively, and may provide high-bandwidth, low-latency, multi-protocol connectivity among sleds of data center 1100. As discussed above with reference to FIG. 1, in various embodiments, the availability of such connectivity may make it feasible to disaggregate and dynamically pool resources such as accelerators, memory, and storage. In some embodiments, for example, one or more pooled accelerator sleds 1130 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of accelerator resources - such as co-processors and/or FPGAs, for example - that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114.
[0046] In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices such as solid-state drives (SSDs).
In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled. In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows for compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located on the same rack or any other rack in the data center. The remote resources can be located one switch jump away or two switch jumps away in the spine-leaf network architecture described above with reference to FIG. 5. The embodiments are not limited in this context.
[0047] In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138. Examples of cloud services 1140 may include - without limitation - software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.
[0048] In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide QoS management capabilities for cloud services 1140. The embodiments are not limited in this context.
[0049] As shown in FIG. 12, an illustrative system 1210 for managing the allocation of ephemeral data storage among a set of managed nodes 1260 on an as-requested basis includes an orchestrator server 1240 in communication with the set of managed nodes 1260.
Each managed node 1260 may be embodied as an assembly of resources (e.g., physical resources 206), such as compute resources (e.g., physical compute resources 205-4), storage resources (e.g., physical storage resources 205-1), accelerator resources (e.g., physical accelerator resources 205-2), or other resources (e.g., physical memory resources 205-3) from the same or different sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.) or racks (e.g., one or more of racks 302-1 through 302-32). Each managed node 1260 may be established, defined, or "spun up" by the orchestrator server 1240 at the time a workload is to be assigned to the managed node 1260 or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node 1260. The system 1210 may be implemented in accordance with the data centers 100, 300, 400, 1100 described above with reference to FIGS. 1, 3, 4, and 11. In the illustrative embodiment, the set of managed nodes 1260 includes managed nodes 1250, 1252, and 1254. While three managed nodes 1260 are shown in the set, it should be understood that in other embodiments, the set may include a different number of managed nodes 1260 (e.g., tens of thousands). The system 1210 may be located in a data center and provide storage and compute services (e.g., cloud services) to a client device 1220 that is in communication with the system 1210 through a network 1230. The orchestrator server 1240 may support a cloud operating environment, such as OpenStack, and the managed nodes 1260 may execute one or more applications or processes (i.e., workloads), such as in virtual machines or containers, on behalf of a user of the client device 1220.
[0050] As discussed in more detail herein, the managed nodes 1260 may request the allocation of ephemeral data storage from other managed nodes 1260 in the system 1210 as the workloads are being performed, and later deallocate all or a portion of the allocated storage, to free up the storage for use by other managed nodes 1260. Due to the architecture described above, the managed nodes 1260 may treat the allocated ephemeral data storage as if it is local, such as by allocating blocks of ephemeral data storage and addressing the blocks with write and read operations as if the blocks of data storage were local (e.g., physically located on the sled of the managed node 1260). Some of the managed nodes 1260 may be equipped with more ephemeral data storage than others, and in some embodiments, one or more of the managed nodes 1260 may be equipped with no local ephemeral data storage and be reliant on the other managed nodes 1260 to provide ephemeral data storage on an as needed basis. For example, the managed node 1250 may be similar to the sled 204-4 of FIG. 2, with physical compute resources and little or no physical storage resources, while the managed nodes 1252 and 1254 may be similar to sled 204-1 of FIG. 2, and be equipped with a relatively large amount of physical data storage resources 205-1. In the illustrative embodiment, the orchestrator server 1240 is configured to track the availability of blocks of ephemeral data storage among the managed nodes 1260, receive requests for the allocation of ephemeral data storage, determine which managed nodes have available ephemeral data storage to allocate in response to the request, and send messages to one or more of the managed nodes 1260 to allocate ephemeral data storage in fulfillment of the request.
In other embodiments, the managed nodes 1260 are configured to communicate directly to coordinate the allocation and deallocation of ephemeral data storage among them, as the workloads are performed.[0051] Referring now to FIG. 13, the orchestrator server 1240 may be embodied as any type of compute device capable of performing the functions described herein, including issuing a request to have cloud services performed, receiving results of the cloud services, assigning workloads to compute devices, and managing the allocation of ephemeral data storage among the managed nodes 1260. For example, the orchestrator server 1240 may be embodied as a computer, a distributed computing system, one or more sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.), a server (e.g., stand-alone, rack-mounted, blade, etc.), a multiprocessor system, a network appliance (e.g., physical or virtual), a desktop computer, a workstation, a laptop computer, a notebook computer, or a processor-based system. As shown in FIG. 13, the illustrative orchestrator server 1240 includes a central processing unit (CPU) 1302, a main memory 1304, an input/output (I/O) subsystem 1306, communication circuitry 1308, and one or more data storage devices 1312. Of course, in other embodiments, the orchestrator server 1240 may include other or additional components, such as those commonly found in a computer (e.g., display, peripheral devices, etc.). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, in some embodiments, the main memory 1304, or portions thereof, may be incorporated in the CPU 1302.[0052] The CPU 1302 may be embodied as any type of processor capable of performing the functions described herein. The CPU 1302 may be embodied as a single or multi-core processor(s), a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the CPU 1302 may be embodied as, include, or be coupled to a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. As discussed above, the managed node 1260 may include resources distributed across multiple sleds, and in such embodiments, the CPU 1302 may include portions thereof located on the same sled or a different sled. Similarly, the main memory 1304 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. In some embodiments, all or a portion of the main memory 1304 may be integrated into the CPU 1302. In operation, the main memory 1304 may store various software and data used during operation, such as a map of the allocation of ephemeral data storage among the managed nodes, ephemeral data, operating systems, applications, programs, libraries, and drivers. As discussed above, the managed node 1260 may include resources distributed across multiple sleds, and in such embodiments, the main memory 1304 may include portions thereof located on the same sled or a different sled.[0053] The I/O subsystem 1306 may be embodied as circuitry and/or components to facilitate input/output operations with the CPU 1302, the main memory 1304, and other components of the orchestrator server 1240.
For example, the I/O subsystem 1306 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1306 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the CPU 1302, the main memory 1304, and other components of the orchestrator server 1240, on a single integrated circuit chip.[0054] The communication circuitry 1308 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications over the network 1230 between the orchestrator server 1240 and another compute device (e.g., the client device 1220, and/or the managed nodes 1260). The communication circuitry 1308 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.[0055] The illustrative communication circuitry 1308 includes a network interface controller (NIC) 1310, which may also be referred to as a host fabric interface (HFI). The NIC 1310 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the orchestrator server 1240 to connect with another compute device (e.g., the client device 1220 and/or the managed nodes 1260). In some embodiments, the NIC 1310 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1310 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1310. In such embodiments, the local processor of the NIC 1310 may be capable of performing one or more of the functions of the CPU 1302 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 1310 may be integrated into one or more components of the orchestrator server 1240 at the board level, socket level, chip level, and/or other levels. As discussed above, the managed node 1260 may include resources distributed across multiple sleds, and in such embodiments, the communication circuitry 1308 may include portions thereof located on the same sled or a different sled. [0056] The one or more illustrative data storage devices 1312 may be embodied as any type of devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device 1312 may include a system partition that stores data and firmware code for the data storage device 1312. Each data storage device 1312 may also include an operating system partition that stores data files and executables for an operating system.[0057] Additionally, the orchestrator server 1240 may include a display 1314. The display 1314 may be embodied as, or otherwise use, any suitable display technology including, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, a plasma display, and/or other display usable in a compute device.
The display 1314 may include a touchscreen sensor that uses any suitable touchscreen input technology to detect the user's tactile selection of information displayed on the display including, but not limited to, resistive touchscreen sensors, capacitive touchscreen sensors, surface acoustic wave (SAW) touchscreen sensors, infrared touchscreen sensors, optical imaging touchscreen sensors, acoustic touchscreen sensors, and/or other types of touchscreen sensors.[0058] Additionally or alternatively, the orchestrator server 1240 may include one or more peripheral devices 1316. Such peripheral devices 1316 may include any type of peripheral device commonly found in a compute device such as speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.[0059] The client device 1220 and the managed nodes 1260 may have components similar to those described in FIG. 13. The description of those components of the orchestrator server 1240 is equally applicable to the description of components of the client device 1220 and the managed nodes 1260 and is not repeated herein for clarity of the description. Further, it should be appreciated that any of the client device 1220 and the managed nodes 1260 may include other components, sub-components, and devices commonly found in a computing device, which are not discussed above in reference to the orchestrator server 1240 and not discussed herein for clarity of the description.[0060] As described above, the client device 1220, the orchestrator server 1240, and the managed nodes 1260 are illustratively in communication via the network 1230, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.[0061] Referring now to FIG. 14, in the illustrative embodiment, the orchestrator server 1240 may establish an environment 1400 during operation. The illustrative environment 1400 includes a network communicator 1420 and an ephemeral data storage request manager 1430. Each of the components of the environment 1400 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 1400 may be embodied as circuitry or a collection of electrical devices (e.g., network communicator circuitry 1420, ephemeral data storage request manager circuitry 1430, etc.). It should be appreciated that, in such embodiments, one or more of the network communicator circuitry 1420 or the ephemeral data storage request manager circuitry 1430 may form a portion of one or more of the CPU 1302, the main memory 1304, the I/O subsystem 1306, the communication circuitry 1308, and/or other components of the orchestrator server 1240.
In the illustrative embodiment, the environment 1400 includes an ephemeral data map 1402 which may be embodied as any data indicative of the availability of ephemeral data storage among the managed nodes 1260, such as amounts of available ephemeral data storage in each managed node 1260, addresses of the blocks of the ephemeral data storage, associations between the managed node 1260 that is using each allocated block of ephemeral data storage and the managed node 1260 that physically includes those blocks of ephemeral data storage, and types of ephemeral data storage. In the illustrative embodiment, the type of the ephemeral data storage may be embodied as any data indicative of the performance (e.g., read time, write time, seek time, bandwidth, input/output instructions per second, etc.) of the underlying data storage device (also referred to herein as data storage medium) that has the ephemeral data storage. The different types of ephemeral data storage may be provided by different types of data storage devices, such as solid state drives (SSDs), hard disk drives (HDDs), dual in-line memory modules, processor caches, and/or other memory devices.[0062] In the illustrative environment 1400, the network communicator 1420, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the orchestrator server 1240, respectively. To do so, the network communicator 1420 is configured to receive and process data packets from one system or computing device (e.g., the client device 1220) and to prepare and send data packets to another computing device or system (e.g., the managed nodes 1260). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1420 may be performed by the communication circuitry 1308, and, in the illustrative embodiment, by the NIC 1310.[0063] The ephemeral data storage request manager 1430, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to monitor the allocation of ephemeral data storage among the managed nodes 1260 and respond to requests from the managed nodes 1260 to allocate ephemeral data storage as the workloads are executed. To do so, in the illustrative embodiment, the ephemeral data storage request manager 1430 includes an availability tracker 1432 and a request servicer 1434. In the illustrative embodiment, the availability tracker 1432 is configured to receive update messages from the managed nodes 1260 as ephemeral data storage is allocated and/or deallocated and update the ephemeral data map 1402 to reflect the changes in the allocation of the ephemeral data across the managed nodes 1260.[0064] The request servicer 1434, in the illustrative embodiment, is configured to receive a request from a managed node 1260 to allocate ephemeral data storage, analyze the ephemeral data map 1402 to identify a set (e.g., one or more) of managed nodes 1260 having the ephemeral data storage to fulfill the request, and send a notification to the set of managed nodes 1260 to allocate the requested ephemeral data storage.
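As a minimal sketch, assuming the ephemeral data map 1402 is kept as an in-memory structure, the data it is described as holding might be organized as follows; the record and field names are hypothetical:

```python
# A sketch of how the ephemeral data map 1402 might be organized in memory.
# The NodeAvailability record and all field names are assumptions; the text
# above only specifies the kinds of data the map holds.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class NodeAvailability:
    free_bytes: int    # ephemeral storage still available on the node
    storage_type: str  # performance class of the underlying medium
    free_blocks: List[int] = field(default_factory=list)  # local block addresses

@dataclass
class EphemeralDataMap:
    # Owning node's unique id -> its availability record.
    availability: Dict[str, NodeAvailability] = field(default_factory=dict)
    # Allocated block (owner node id, local LBA) -> node currently using it.
    allocations: Dict[Tuple[str, int], str] = field(default_factory=dict)

    def record_allocation(self, owner_id: str, local_lba: int, user_id: str) -> None:
        """Associate one block on owner_id with the node user_id that is using it."""
        self.allocations[(owner_id, local_lba)] = user_id
```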
The request servicer 1434, in operation, may then send a message back to the requestor managed node 1260 (i.e., the managed node 1260 that sent the request) with information about the allocated ephemeral data storage, including the amount of allocated storage, the addresses of the blocks of ephemeral data storage (e.g., a combination of a unique address of the managed node 1260 equipped with the ephemeral data storage, such as a media access control (MAC) address, and a data storage address of each block within that managed node 1260), and the type (e.g., performance characteristics) of the allocated ephemeral data storage. It should be appreciated that each of the availability tracker 1432 and the request servicer 1434 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, the availability tracker 1432 may be embodied as a hardware component, while the request servicer 1434 is embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.[0065] Referring now to FIG. 15, in the illustrative embodiment, each managed node 1260 may establish an environment 1500 during operation. The illustrative environment 1500 includes a network communicator 1520 and an ephemeral data storage manager 1530. Each of the components of the environment 1500 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 1500 may be embodied as circuitry or a collection of electrical devices (e.g., network communicator circuitry 1520, ephemeral data storage manager circuitry 1530, etc.). It should be appreciated that, in such embodiments, one or more of the network communicator circuitry 1520 or the ephemeral data storage manager circuitry 1530 may form a portion of one or more of the CPU 1302, the main memory 1304, the I/O subsystem 1306, the communication circuitry 1308, and/or other components of the managed node 1260. In the illustrative embodiment, the environment 1500 includes ephemeral data 1502 which may be embodied as any temporary data (e.g., cache) used by the managed node 1260 during the execution of the workloads. A portion of the ephemeral data 1502 may be physically local to the managed node 1260 while another portion may be remotely located (e.g., allocated on one or more other managed nodes 1260) and mapped to local data storage addresses (e.g., logical block addresses) of the managed node 1260. The environment 1500 also includes, in the illustrative embodiment, address translation data 1504 which may be embodied as any data indicative of a map between local data storage addresses of the managed node 1260 and addresses of remotely located ephemeral data storage. As described with reference to FIG. 14, the addresses of the remotely located ephemeral data storage may be embodied as a combination of a unique address of the remote managed node 1260, such as a MAC address, and the internal (e.g., local) address of the ephemeral data storage (e.g., a logical block address) within that remote managed node 1260.
As such, as read and write requests are issued by workloads executed by the managed node 1260, the managed node 1260 may translate addresses included in the read or write requests to addresses of remotely located ephemeral data storage by looking up the corresponding addresses in the address translation data 1504.[0066] In the illustrative environment 1500, the network communicator 1520, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the managed node 1260, respectively. To do so, the network communicator 1520 is configured to receive and process data packets from one system or computing device (e.g., the orchestrator server 1240 or another managed node 1260) and to prepare and send data packets to another computing device or system (e.g., the orchestrator server 1240 or another managed node 1260). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1520 may be performed by the communication circuitry 1308, and, in the illustrative embodiment, by the NIC 1310. [0067] The ephemeral data storage manager 1530, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to receive requests to allocate, deallocate, read, and/or write to ephemeral data storage, which may be local to the managed node 1260 and/or remote from the managed node 1260, as described above. To do so, in the illustrative embodiment, the ephemeral data storage manager 1530 includes a local ephemeral data servicer 1532 and a remote ephemeral data servicer 1534. In the illustrative embodiment, the local ephemeral data servicer 1532 is configured to write to, read from, allocate, and deallocate ephemeral data storage that is local to the managed node 1260 on behalf of the managed node 1260 itself or another managed node 1260. As such, in the illustrative embodiment, the local ephemeral data servicer 1532 is configured to respond to messages from the orchestrator server 1240 to allocate local ephemeral data storage for another managed node 1260, or from the other managed node 1260 itself and likewise is configured to deallocate local ephemeral data storage in response to a message to do so. The remote ephemeral data servicer 1534 is configured to request the allocation or deallocation of ephemeral data storage on a remote managed node 1260, such as when the local ephemeral data storage, if any, is inadequate for the workloads presently executed by the managed node 1260. After the ephemeral data storage is allocated on a remote managed node 1260, the remote ephemeral data servicer 1534 is configured to redirect read and/or write requests from workloads executed on the managed node 1260 to the remotely-located ephemeral data storage using the address translation data 1504 as described above.
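A minimal sketch of the translation just described, assuming the address translation data 1504 is kept as a simple lookup table keyed by local logical block address; the type alias and function name are hypothetical:

```python
# Sketch of the address translation data 1504: a per-node table mapping a
# workload's local logical block address to the (remote node id, remote LBA)
# pair where the block physically lives.
from typing import Dict, Tuple

# local LBA -> (unique id of remote managed node, LBA within that node)
AddressTranslationTable = Dict[int, Tuple[str, int]]

def translate(table: AddressTranslationTable, local_lba: int) -> Tuple[str, int]:
    """Resolve a local address from a read/write request to its remote location.
    A miss means the block is physically local and needs no redirection."""
    remote = table.get(local_lba)
    if remote is None:
        return ("local", local_lba)
    return remote

# Example: local block 42 was allocated on node aa:bb:cc:dd:ee:02 at LBA 7.
table: AddressTranslationTable = {42: ("aa:bb:cc:dd:ee:02", 7)}
assert translate(table, 42) == ("aa:bb:cc:dd:ee:02", 7)
assert translate(table, 43) == ("local", 43)
```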
It should be appreciated that each of the local ephemeral data servicer 1532 and the remote ephemeral data servicer 1534 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, the local ephemeral data servicer 1532 may be embodied as a hardware component, while the remote ephemeral data servicer 1534 is embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.[0068] Referring now to FIG. 16, in use, the orchestrator server 1240 may execute a method 1600 for managing the allocation of ephemeral data storage among the managed nodes 1260 as the workloads are executed. The method 1600 begins with block 1602, in which the orchestrator server 1240 determines whether to manage ephemeral data. In the illustrative embodiment, the orchestrator server 1240 determines to manage ephemeral data if the orchestrator server 1240 is powered on, in communication with the managed nodes 1260, and has received at least one request from the client device 1220 to provide cloud services (i.e., to perform one or more workloads). In other embodiments, the orchestrator server 1240 may determine whether to manage ephemeral data based on other factors. Regardless, in response to a determination to manage ephemeral data, in the illustrative embodiment, the method 1600 advances to block 1604 in which the orchestrator server 1240 receives ephemeral data storage availability information from the managed nodes 1260. The orchestrator server 1240 may actively poll the managed nodes 1260 for the ephemeral data storage availability information, such as on a periodic basis, or may receive the ephemeral data storage availability information based on events occurring within each managed node (e.g., a change in the availability of ephemeral data storage, the expiration of a periodic timer, etc.). In doing so, the orchestrator server 1240 may receive one or more deallocation notifications indicating that ephemeral data storage has been deallocated from one or more of the managed nodes 1260 and is now available to be re-allocated, as indicated in block 1606. In receiving the ephemeral data storage availability information, the orchestrator server 1240 may additionally or alternatively receive an indication of the amount and/or type of the available ephemeral data storage from the managed nodes 1260, as indicated in block 1608. As described above and as indicated in block 1610, in receiving information about the type of available ephemeral data storage, the orchestrator server 1240 may receive information about the performance of the underlying data storage medium. For example, some solid state drives or other data storage devices with available capacity may have lower latency and/or higher bandwidth than other data storage devices. Further, in the illustrative embodiment and as indicated in block 1612, the orchestrator server 1240 stores the ephemeral data storage availability information in association with the managed nodes 1260, such as in the ephemeral data map 1402 described with reference to FIG. 14.[0069] As indicated in block 1614, the orchestrator server 1240 receives a request from a managed node 1260 to allocate ephemeral data storage. In doing so, the orchestrator server 1240 receives a request that indicates the amount of ephemeral data storage to be allocated (e.g., a number of blocks, a total number of bytes, etc.), as indicated in block 1616. Further, as indicated in block 1618, the orchestrator server 1240 may receive a request that also indicates a requested type of ephemeral data storage.
In doing so, as indicated in block 1620, the orchestrator server 1240 may receive a request that indicates a target (i.e., requested) performance of the storage medium that is to provide the ephemeral data storage. For example, a managed node 1260 that is executing a workload that makes relatively frequent read and write accesses to the ephemeral data, such as a data encryption workload, may request a higher performance than a managed node that is executing a workload that makes less frequent access to ephemeral data. [0070] Subsequent to receiving a request to allocate ephemeral data storage, the method 1600 advances to block 1622, in which the orchestrator server 1240 determines the availability of the requested ephemeral data storage. In doing so, as indicated in block 1624, the orchestrator server 1240 compares the requested amount of ephemeral data storage to the ephemeral data storage availability information received in block 1604. As described above, in the illustrative embodiment, the received ephemeral data storage availability information is stored in the ephemeral data map 1402. In comparing the requested amount to the ephemeral data storage availability information, the orchestrator server 1240, in the illustrative embodiment, determines whether the managed nodes 1260, as a whole, have the requested amount of ephemeral data storage available (e.g., a portion of the requested amount may be available on one managed node 1260 and another portion of the requested amount may be available on another one of the managed nodes 1260). Further, as indicated in block 1626, the orchestrator server 1240 may also compare the requested type of ephemeral data storage to the ephemeral data storage availability information to determine whether the requested type of ephemeral data storage is available on the managed nodes 1260.[0071] In block 1628, the orchestrator server 1240 determines whether the requested storage is available, based on the determinations and comparisons made in block 1622. If not, method 1600 advances to block 1630 in which the orchestrator server 1240 sends a storage unavailability message to the managed node 1260 that originally sent the request (the "requestor managed node"), indicating that the requested ephemeral data storage is unavailable, and the method 1600 subsequently loops back to block 1604 in which the orchestrator server 1240 again receives ephemeral data storage availability information from the managed nodes 1260. Referring back to block 1628, if the orchestrator server 1240 instead determines that the requested ephemeral data storage is available, the method 1600 advances to block 1632 of FIG. 17 to allocate the requested ephemeral data storage.[0072] Referring now to FIG. 17, as indicated in block 1634, in allocating the requested ephemeral data storage, the orchestrator server 1240 sends a notification to one or more of the managed nodes 1260 (e.g., the managed nodes 1260 that, together, have sufficient available data storage to satisfy the request). The notification includes the amount of ephemeral data storage each managed node 1260 is to allocate. Further, in sending the notification to the managed nodes 1260, the orchestrator server 1240 may indicate, in the notification, the requested type of ephemeral data storage to allocate (e.g., the target performance characteristics of the data storage medium), as shown in block 1636.
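The availability determination of blocks 1622 through 1628 can be sketched as a scan over the availability information that accumulates capacity across managed nodes 1260 until the requested amount (of the requested type, if any) is covered. The following Python sketch assumes the data shapes introduced earlier and is illustrative rather than definitive:

```python
# Sketch of blocks 1622-1628: walk the availability information, accumulating
# free capacity from nodes whose storage type matches the request (if a type
# was requested), and report which nodes could contribute.
from typing import Dict, List, Optional, Tuple

def find_available(availability: Dict[str, dict],
                   requested_bytes: int,
                   requested_type: Optional[str] = None
                   ) -> Optional[List[Tuple[str, int]]]:
    """Return a list of (node_id, bytes_to_allocate) covering the request,
    possibly spread over several managed nodes, or None if unavailable."""
    plan: List[Tuple[str, int]] = []
    remaining = requested_bytes
    for node_id, info in availability.items():
        if requested_type is not None and info["storage_type"] != requested_type:
            continue
        take = min(info["free_bytes"], remaining)
        if take > 0:
            plan.append((node_id, take))
            remaining -= take
        if remaining == 0:
            return plan
    return None  # the nodes, as a whole, cannot satisfy the request

# Example: the request can be split across two nodes.
avail = {"node-a": {"free_bytes": 512, "storage_type": "ssd"},
         "node-b": {"free_bytes": 1024, "storage_type": "ssd"}}
assert find_available(avail, 1024, "ssd") == [("node-a", 512), ("node-b", 512)]
```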
Additionally, as indicated in block 1638, in allocating the requested ephemeral data storage, the orchestrator server 1240 receives allocated ephemeral data storage information from each of the managed nodes 1260 that the orchestrator server 1240 sent the notifications to in block 1634. In doing so, as indicated in block 1640, the orchestrator server 1240 receives an indication of the allocated amount of ephemeral data storage and the addresses (e.g., logical block addresses) of the allocated ephemeral data storage in each managed node 1260. Further, the orchestrator server 1240 may receive information indicative of the type of allocated ephemeral data storage allocated by each managed node 1260, as shown in block 1642. In block 1644, the orchestrator server 1240 updates the ephemeral data storage availability information to indicate the allocated ephemeral data storage among the managed nodes 1260. In doing so, the orchestrator server 1240, in the illustrative embodiment, stores the updated information in the ephemeral data map 1402 (FIG. 14), including the addresses (e.g., logical block addresses) of the allocated ephemeral data storage in combination with the unique addresses (e.g., MAC addresses, etc.) of the corresponding managed nodes 1260 where the ephemeral data storage is physically located.[0073] Subsequent to allocating the requested ephemeral data storage, the method 1600 advances to block 1646 in which the orchestrator server 1240 sends a notification of the allocated ephemeral data storage to the requestor managed node 1260 (i.e., the managed node 1260 that sent the request in block 1614). In doing so, the orchestrator server 1240 sends the allocated ephemeral data storage information to the requestor managed node 1260, as indicated in block 1648. In sending the allocated ephemeral data storage information, the orchestrator server 1240 sends an indication of the allocated amount of ephemeral data storage and the addresses (e.g., combinations of the unique identifiers of the managed nodes 1260 and the addresses of the ephemeral data storage blocks within those managed nodes 1260 (e.g., logical block addresses)) to the requestor managed node 1260, as indicated in block 1650. Further, as indicated in block 1652, the orchestrator server 1240 may send an indication of the allocated type or types of ephemeral data storage. The type information may be included in embodiments in which the requested type of ephemeral data storage is optional, such that if the available ephemeral data storage does not have the requested performance, the managed nodes 1260 may allocate the ephemeral data storage on storage media having different performance characteristics, rather than failing to allocate the ephemeral data storage at all. Subsequent to sending the notification of the allocated ephemeral data storage, the method 1600 loops back to block 1604 in which the orchestrator server 1240 again receives ephemeral data storage availability information from the managed nodes 1260.
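The allocation flow of blocks 1632 through 1650 might be sketched as follows, with the network exchange stubbed out and amounts counted in blocks for simplicity; every name here is an illustrative stand-in for the behavior described above:

```python
# Sketch of blocks 1632-1650: notify each contributing node, collect the local
# addresses it allocated, update the availability bookkeeping (block 1644), and
# hand the combined (node id, local address) pairs back to the requestor
# (blocks 1646-1650).
from typing import Dict, List, Tuple

def allocate_on_node(node_id: str, amount_blocks: int) -> List[int]:
    """Stand-in for sending an allocation notification to a managed node and
    receiving back the local block addresses it allocated."""
    return list(range(amount_blocks))  # pretend blocks 0..amount-1 were allocated

def fulfill_request(plan: List[Tuple[str, int]],
                    availability: Dict[str, dict]) -> List[Tuple[str, int]]:
    allocated: List[Tuple[str, int]] = []
    for node_id, amount_blocks in plan:
        for lba in allocate_on_node(node_id, amount_blocks):
            # Combine the node's unique id with the node-local address so the
            # requestor can address the block, as described above.
            allocated.append((node_id, lba))
        availability[node_id]["free_blocks"] -= amount_blocks  # block 1644
    return allocated  # sent to the requestor managed node (block 1646)

avail = {"node-a": {"free_blocks": 4}}
print(fulfill_request([("node-a", 2)], avail))  # [('node-a', 0), ('node-a', 1)]
print(avail)                                    # {'node-a': {'free_blocks': 2}}
```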
[0074] Referring now to FIG. 18, in use, a managed node 1260 may execute a method 1800 for requesting the allocation of ephemeral data storage as the workloads are performed. The method 1800 begins with block 1802, in which the managed node 1260 determines whether to manage ephemeral data. In the illustrative embodiment, the managed node 1260 determines to manage ephemeral data if the managed node 1260 is powered on and has been assigned one or more workloads. In other embodiments, the managed node 1260 may determine whether to manage ephemeral data based on other factors. Regardless, in response to a determination to manage ephemeral data, the method 1800 advances to block 1804 in which the managed node 1260 determines an amount of ephemeral data to request. In doing so, in the illustrative embodiment, the managed node 1260 determines an amount of available (i.e., unallocated) local ephemeral data storage, as indicated in block 1806. As described above, some managed nodes 1260 may have little or no local data storage devices (e.g., physical storage resources 205-1) while others may have a relatively large amount. Additionally, as indicated in block 1808, the managed node 1260 determines the amount of unused remote ephemeral data storage that has already been allocated to the managed node 1260. In block 1810, the managed node 1260 sums (i.e., adds) the amounts of available local ephemeral data storage and the unused remote ephemeral data storage that has already been allocated to the managed node 1260 to arrive at a summed amount. Further, in block 1812, the managed node 1260 compares (e.g., determines the difference between) the summed amount and an amount of ephemeral data storage to be used by the one or more workloads assigned to the present managed node 1260 (e.g., as a result of a write request or an allocation request issued by an executed workload, or based on metadata associated with the workload that indicates the ephemeral data usage patterns of the workload). The managed node 1260 may additionally determine a type of ephemeral data storage to request, as indicated in block 1814. In doing so, the managed node 1260 may determine the type of ephemeral data storage to request based on the types of operations that are performed by one or more of the assigned workloads (e.g., whether one or more of the workloads makes frequent use of the ephemeral data, etc.), as described above with reference to block 1620 of FIG. 16.[0075] In block 1816, the managed node 1260 determines whether to request additional storage, such as by determining whether the amount determined in block 1804 is greater than zero. If not, the method 1800 advances to block 1840 of FIG. 19 in which the managed node 1260 uses the already allocated ephemeral data storage. Otherwise, the method 1800 advances to block 1820, in which the managed node 1260 sends a request for the determined amount of ephemeral data storage. In doing so, the managed node 1260 may send the request to the orchestrator server 1240, as indicated in block 1822. Alternatively, as indicated in block 1824, the managed node 1260 may send the request to one or more other managed nodes 1260, such as managed nodes 1260 that have already been identified to the present managed node 1260 (e.g., in a configuration file) as likely to have available ephemeral data storage. In sending the request, the managed node 1260 may send a request that indicates the type of ephemeral data storage to be allocated, as indicated in block 1826. In doing so, the managed node 1260 may send a request that indicates a target performance of the underlying storage medium that is to provide the requested ephemeral data storage. Subsequently, the method 1800 advances to block 1830 of FIG. 19 in which the managed node 1260 receives a response to the request.
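The determination of blocks 1804 through 1816 reduces to a shortfall computation: the workload's anticipated usage minus the sum of available local storage and unused, already-allocated remote storage. A worked sketch, under the assumption that amounts are expressed in bytes:

```python
# Sketch of blocks 1804-1816: request only the shortfall between what the
# workloads need and what the node can already cover. The function name and
# byte-based units are illustrative assumptions.
def amount_to_request(workload_bytes: int,
                      available_local_bytes: int,
                      unused_remote_bytes: int) -> int:
    covered = available_local_bytes + unused_remote_bytes  # block 1810: the summed amount
    shortfall = workload_bytes - covered                   # block 1812: the comparison
    return max(0, shortfall)  # block 1816: request additional storage only if positive

# Example: 8 GiB needed, 2 GiB free locally, 1 GiB of remote storage unused.
assert amount_to_request(8 << 30, 2 << 30, 1 << 30) == 5 << 30
```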
[0076] Referring now to FIG. 19, in receiving the response to the request, the managed node 1260 may receive an indication of the amount of ephemeral data storage that has been allocated, as shown in block 1832. In some instances, the amount may be zero, meaning that the requested ephemeral data storage is presently unavailable. Additionally, the managed node 1260 may receive the addresses of the remotely allocated ephemeral data storage, as indicated in block 1834. The addresses may be internal addresses used by the managed nodes 1260 that responded to the request and allocated ephemeral data storage, combined with the unique address (e.g., MAC address or other unique identifier) of the managed node 1260 where the ephemeral data storage is physically located. Additionally, as indicated in block 1836, the managed node 1260 may receive an indication of the type or types (e.g., if allocated on multiple different ephemeral data storage media) of ephemeral data storage that was allocated.[0077] In block 1838, the managed node 1260 determines whether the requested ephemeral data storage was allocated (e.g., whether the amount of ephemeral data allocated is greater than zero). If not, the method 1800 returns to block 1804 in which the managed node 1260 re-determines the amount of ephemeral data storage to request. For example, as the workloads are executed, their ephemeral data storage requirements may change. Furthermore, during the repeated determination of the amount of ephemeral data storage to request, the conditions among the other managed nodes 1260 may change such that the next time the present managed node 1260 requests the ephemeral data storage, it may be available. Referring back to block 1838, if the managed node 1260 instead determines that the requested ephemeral data storage was allocated, the method 1800 advances to block 1840 in which the managed node 1260 writes to the allocated remote ephemeral data storage. As indicated in block 1842, in writing to the remote ephemeral data storage, the managed node 1260, in the illustrative embodiment, translates a local address (e.g., a local address specified in a write request issued by a workload) to the corresponding address of the remote ephemeral data storage received in block 1834.[0078] In block 1844, the managed node 1260 reads from the allocated remote ephemeral data storage. In doing so, in the illustrative embodiment and as indicated in block 1846, the managed node 1260 again translates the local address indicated in a read request from a workload to the corresponding address of the remote ephemeral data storage received in block 1834. While one write and one read are shown in the method 1800, it should be understood that the managed node 1260 may perform any number and sequence of reads and writes to the allocated ephemeral data storage. For example, as shown in block 1848, the managed node 1260 determines whether to deallocate the remotely allocated ephemeral data storage. If not, the method 1800 loops back to block 1840 to perform another write and/or read from the remotely allocated ephemeral data storage. Otherwise, the method 1800 advances to block 1850 in which the managed node 1260 sends a message to the managed nodes 1260 that allocated the ephemeral data storage to deallocate all or at least a portion of the remote ephemeral data storage. In doing so, as indicated in block 1852, the managed node 1260 may also send a notification to the orchestrator server 1240 that the remote ephemeral data storage has been deallocated.
Doing so may enable the orchestrator server 1240 to quickly update the ephemeral data map 1402 (i.e., the ephemeral data storage availability information). Subsequently, the method 1800 returns to block 1804 in which the managed node 1260 again determines an amount of ephemeral data storage to request.[0079] Referring now to FIG. 20, in use, a managed node 1260 may execute a method 2000 for responding to a request to allocate ephemeral data storage as the workloads are performed. The method 2000 begins with block 2002, in which the managed node 1260 determines whether to manage ephemeral data. The managed node 1260 may make this determination in a manner similar to that described in reference to block 1802 of FIG. 18. In the illustrative embodiment, the method 2000 may execute concurrently with the method 1800, such as in separate threads or processes. In response to a determination to manage ephemeral data, the method 2000 advances to block 2004 in which the managed node 1260 receives a request from another compute device to allocate ephemeral data storage. In receiving the request, the managed node 1260 may receive the request from the orchestrator server 1240 (e.g., as the notification sent in block 1634 of FIG. 17), as shown in block 2006. Alternatively, the managed node 1260 may receive the request from another managed node 1260, as indicated in block 2008. Further, as indicated in block 2010, the managed node 1260 receives a request that specifies an amount of ephemeral data storage to allocate (e.g., a number of blocks or bytes). Furthermore, the request may specify a type of ephemeral data storage to allocate, as indicated in block 2012. As described above, in the illustrative embodiment, an indication of a type of data storage may be embodied as any data indicative of a requested performance of the underlying data storage medium that is to provide the ephemeral data storage.[0080] After receiving the request, the method 2000 advances to block 2014, in which the managed node 1260 determines an availability of the requested ephemeral data storage. In doing so, the managed node 1260 compares the requested amount to the available amount of local ephemeral data storage, as indicated in block 2016. Further, the managed node 1260 may compare the requested type to the types of available ephemeral data storage (e.g., different SSDs or other ephemeral data storage devices) local to the managed node 1260, as indicated in block 2018. Subsequently, the method 2000 advances to block 2020, in which the managed node 1260 determines whether the requested ephemeral data storage is available (e.g., whether the available amount is at least equal to the requested amount and, in some embodiments, whether the available types satisfy the requested type). In response to a determination that the requested ephemeral data storage is unavailable, the method 2000 advances to block 2022, in which the managed node 1260 sends a response message indicating that the requested ephemeral data storage is unavailable. Otherwise, the method 2000 advances to block 2024 of FIG. 21, in which the managed node 1260 allocates the requested ephemeral data storage.
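The node-side check of blocks 2014 through 2022 might be sketched as follows; the function and its return shape are hypothetical stand-ins for the response messages described above:

```python
# Sketch of blocks 2014-2022: the receiving managed node checks whether its
# local ephemeral storage can satisfy the requested amount (and type, if one
# was specified) and answers with an unavailability message otherwise.
from typing import Optional

def handle_allocation_request(requested_bytes: int,
                              requested_type: Optional[str],
                              local_free_bytes: int,
                              local_type: str) -> dict:
    amount_ok = local_free_bytes >= requested_bytes                    # block 2016
    type_ok = requested_type is None or requested_type == local_type  # block 2018
    if not (amount_ok and type_ok):                                    # block 2020
        return {"status": "unavailable"}                               # block 2022
    return {"status": "allocated",                                     # block 2024 onward
            "amount": requested_bytes,
            "type": local_type}

assert handle_allocation_request(100, "ssd", 50, "ssd")["status"] == "unavailable"
assert handle_allocation_request(100, None, 200, "hdd")["status"] == "allocated"
```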
[0081] Referring now to FIG. 21, in allocating the requested ephemeral data storage, the managed node 1260, in the illustrative embodiment, allocates the amount of ephemeral data storage specified in the request, as indicated in block 2026. Further, as indicated in block 2028, the managed node 1260 may also allocate the type of requested ephemeral data storage, if the request specified a type and the managed node 1260 includes an ephemeral data storage medium that satisfies the type. Subsequently, the method 2000 advances to block 2030 in which the managed node 1260 sends a response message confirming allocation of the requested ephemeral data storage. In doing so, as indicated in block 2032, the managed node 1260 may send a response that indicates the addresses of the allocated ephemeral data storage, as also described with reference to block 1640 of FIG. 17. Further, as indicated in block 2034, the managed node 1260 may send an indication of the type of allocated ephemeral data storage, as also described with reference to block 1642 of FIG. 17.

EXAMPLES

[0082] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.[0083] Example 1 includes an orchestrator server to manage the allocation of ephemeral data storage among a plurality of managed nodes, the orchestrator server comprising a network communicator to receive ephemeral data storage availability information from the plurality of managed nodes, wherein the ephemeral data storage availability information is indicative of at least an amount of ephemeral data storage available for allocation in the corresponding managed node and receive a request from a first managed node of the plurality of managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads; an ephemeral data storage request manager to determine, as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage and allocate, in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes of the plurality of managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads.[0084] Example 2 includes the subject matter of Example 1, and wherein the network communicator is further to send, to the first managed node, a notification indicative of the amount of allocated ephemeral data storage.[0085] Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to receive a request from a first managed node comprises to receive a request indicative of a type of ephemeral data storage to be allocated, wherein the type is indicative of a performance of a data storage medium to provide the ephemeral data storage.[0086] Example 4 includes the subject matter of any of Examples 1-3, and wherein the network communicator is further to send, in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of a type of the allocated ephemeral data storage, wherein the type is indicative of a performance of a data storage medium to provide the allocated ephemeral data storage.[0087] Example 5 includes the subject matter of any of Examples 1-4, and wherein the network communicator is further to send, in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of one or more addresses of the allocated ephemeral data storage.[0088] Example 6 includes the subject matter of any of Examples 1-5, and wherein to receive
the ephemeral data storage availability information comprises to receive a deallocation notification that ephemeral data storage has been deallocated by at least one of the managed nodes.[0089] Example 7 includes the subject matter of any of Examples 1-6, and wherein to receive the ephemeral data storage availability information comprises to receive information indicative of a type of available ephemeral data storage, wherein the type is indicative of a performance of a data storage device associated with the available ephemeral data storage.[0090] Example 8 includes the subject matter of any of Examples 1-7, and wherein to determine the availability of the requested amount of ephemeral data storage comprises to compare the requested amount of ephemeral data storage to the ephemeral data storage availability information. [0091] Example 9 includes the subject matter of any of Examples 1-8, and wherein to determine the availability of the requested amount of ephemeral data storage further comprises to compare a requested type of ephemeral data storage to one or more types of ephemeral data storage indicated in the ephemeral data storage availability information.[0092] Example 10 includes the subject matter of any of Examples 1-9, and wherein to allocate the ephemeral data storage comprises to send a notification to the one or more other managed nodes to allocate at least a portion of the requested amount of ephemeral data storage.[0093] Example 11 includes the subject matter of any of Examples 1-10, and wherein to send a notification comprises to send multiple notifications to each of multiple managed nodes to allocate portions of the requested amount of ephemeral data storage.[0094] Example 12 includes the subject matter of any of Examples 1-11, and wherein to send the notification to the one or more other managed nodes further comprises to send a notification of a requested type of ephemeral data storage to the one or more other managed nodes.[0095] Example 13 includes the subject matter of any of Examples 1-12, and wherein the ephemeral data storage request manager is further to update, in response to allocation of the ephemeral data storage, the ephemeral data storage availability information to indicate the amount of allocated ephemeral data storage.[0096] Example 14 includes the subject matter of any of Examples 1-13, and wherein to update the ephemeral data storage availability information further comprises to update the ephemeral data storage availability information to indicate at least one of an address or a type of the allocated ephemeral data storage.[0097] Example 15 includes a method for managing the allocation of ephemeral data storage among a plurality of managed nodes, the method comprising receiving, by an orchestrator server, ephemeral data storage availability information from the plurality of managed nodes, wherein the ephemeral data storage availability information is indicative of at least an amount of ephemeral data storage available for allocation in the corresponding managed node; receiving, by the orchestrator server, a request from a first managed node of the plurality of managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads; determining, by the orchestrator server and as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage; and allocating, by the orchestrator server and in response to a determination that the
requested amount of ephemeral data storage is available from one or more other managed nodes of the plurality of managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads.[0098] Example 16 includes the subject matter of Example 15, and further including sending, by the orchestrator server to the first managed node, a notification indicative of the amount of allocated ephemeral data storage.[0099] Example 17 includes the subject matter of any of Examples 15 and 16, and wherein receiving a request from a first managed node comprises receiving a request indicative of a type of ephemeral data storage to be allocated, wherein the type is indicative of a performance of a data storage medium to provide the ephemeral data storage.[00100] Example 18 includes the subject matter of any of Examples 15-17, and further including sending, by the orchestrator server and in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of a type of the allocated ephemeral data storage, wherein the type is indicative of a performance of a data storage medium to provide the allocated ephemeral data storage.[00101] Example 19 includes the subject matter of any of Examples 15-18, and further including sending, by the orchestrator server in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of one or more addresses of the allocated ephemeral data storage.[00102] Example 20 includes the subject matter of any of Examples 15-19, and wherein receiving the ephemeral data storage availability information comprises receiving a deallocation notification that ephemeral data storage has been deallocated by at least one of the managed nodes.[00103] Example 21 includes the subject matter of any of Examples 15-20, and wherein receiving the ephemeral data storage availability information comprises receiving information indicative of a type of available ephemeral data storage, wherein the type is indicative of a performance of a data storage medium associated with the available ephemeral data storage.[00104] Example 22 includes the subject matter of any of Examples 15-21, and wherein determining the availability of the requested amount of ephemeral data storage comprises comparing the requested amount of ephemeral data storage to the ephemeral data storage availability information.[00105] Example 23 includes the subject matter of any of Examples 15-22, and wherein determining the availability of the requested amount of ephemeral data storage further comprises comparing a requested type of ephemeral data storage to one or more types of ephemeral data storage indicated in the ephemeral data storage availability information.
[00106] Example 24 includes the subject matter of any of Examples 15-23, and wherein allocating the ephemeral data storage comprises sending a notification to the one or more other managed nodes to allocate at least a portion of the requested amount of ephemeral data storage.[00107] Example 25 includes the subject matter of any of Examples 15-24, and wherein sending a notification comprises sending multiple notifications to each of multiple managed nodes to allocate portions of the requested amount of ephemeral data storage.[00108] Example 26 includes the subject matter of any of Examples 15-25, and wherein sending the notification to the one or more other managed nodes further comprises sending a notification of a requested type of ephemeral data storage to the one or more other managed nodes.[00109] Example 27 includes the subject matter of any of Examples 15-26, and further including updating, by the orchestrator server and in response to allocation of the ephemeral data storage, the ephemeral data storage availability information to indicate the amount of allocated ephemeral data storage.[00110] Example 28 includes the subject matter of any of Examples 15-27, and wherein updating the ephemeral data storage availability information further comprises updating the ephemeral data storage availability information to indicate at least one of an address or a type of the allocated ephemeral data storage.[00111] Example 29 includes one or more computer-readable storage media comprising a plurality of instructions that, when executed by an orchestrator server, cause the orchestrator server to perform the method of any of Examples 15-28.[00112] Example 30 includes an orchestrator server to manage the allocation of ephemeral data storage among a plurality of managed nodes, the orchestrator server comprising means for receiving ephemeral data storage availability information from the plurality of managed nodes, wherein the ephemeral data storage availability information is indicative of at least an amount of ephemeral data storage available for allocation in the corresponding managed node; means for receiving a request from a first managed node of the plurality of managed nodes to allocate an amount of ephemeral data storage as the first managed node executes one or more workloads; means for determining, as a function of the ephemeral data storage availability information, an availability of the requested amount of ephemeral data storage; and means for allocating, in response to a determination that the requested amount of ephemeral data storage is available from one or more other managed nodes of the plurality of managed nodes, the requested amount of ephemeral data storage to the first managed node as the first managed node executes the one or more workloads.
[00113] Example 31 includes the subject matter of Example 30, and further including means for sending, to the first managed node, a notification indicative of the amount of allocated ephemeral data storage.[00114] Example 32 includes the subject matter of any of Examples 30 and 31, and wherein the means for receiving a request from a first managed node comprises means for receiving a request indicative of a type of ephemeral data storage to be allocated, wherein the type is indicative of a performance of a data storage medium to provide the ephemeral data storage.[00115] Example 33 includes the subject matter of any of Examples 30-32, and further including means for sending, in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of a type of the allocated ephemeral data storage, wherein the type is indicative of a performance of a data storage medium to provide the allocated ephemeral data storage.[00116] Example 34 includes the subject matter of any of Examples 30-33, and further including means for sending, in response to allocation of the ephemeral data storage, a notification to the first managed node indicative of one or more addresses of the allocated ephemeral data storage.[00117] Example 35 includes the subject matter of any of Examples 30-34, and wherein the means for receiving the ephemeral data storage availability information comprises means for receiving a deallocation notification that ephemeral data storage has been deallocated by at least one of the managed nodes.[00118] Example 36 includes the subject matter of any of Examples 30-35, and wherein the means for receiving the ephemeral data storage availability information comprises means for receiving information indicative of a type of available ephemeral data storage, wherein the type is indicative of a performance of a data storage medium associated with the available ephemeral data storage.[00119] Example 37 includes the subject matter of any of Examples 30-36, and wherein the means for determining the availability of the requested amount of ephemeral data storage comprises means for comparing the requested amount of ephemeral data storage to the ephemeral data storage availability information.[00120] Example 38 includes the subject matter of any of Examples 30-37, and wherein the means for determining the availability of the requested amount of ephemeral data storage further comprises means for comparing a requested type of ephemeral data storage to one or more types of ephemeral data storage indicated in the ephemeral data storage availability information. 
[00121] Example 39 includes the subject matter of any of Examples 30-38, and wherein the means for allocating the ephemeral data storage comprises means for sending a notification to the one or more other managed nodes to allocate at least a portion of the requested amount of ephemeral data storage.[00122] Example 40 includes the subject matter of any of Examples 30-39, and wherein the means for sending a notification comprises means for sending multiple notifications to each of multiple managed nodes to allocate portions of the requested amount of ephemeral data storage.[00123] Example 41 includes the subject matter of any of Examples 30-40, and wherein the means for sending the notification to the one or more other managed nodes further comprises means for sending a notification of a requested type of ephemeral data storage to the one or more other managed nodes.[00124] Example 42 includes the subject matter of any of Examples 30-41, and further including means for updating, in response to allocation of the ephemeral data storage, the ephemeral data storage availability information to indicate the amount of allocated ephemeral data storage.[00125] Example 43 includes the subject matter of any of Examples 30-42, and wherein the means for updating the ephemeral data storage availability information further comprises means for updating the ephemeral data storage availability information to indicate at least one of an address or a type of the allocated ephemeral data storage.[00126] Example 44 includes a managed node of a set of managed nodes to dynamically allocate ephemeral data storage, the managed node comprising an ephemeral data storage manager to determine, as the managed node executes one or more workloads, an amount of ephemeral data storage to allocate from one or more other managed nodes of the set; and a network communicator to send a request for allocation of the determined amount of ephemeral data storage and receive a response to the request, wherein the response includes addresses of the allocated ephemeral data storage on the one or more other managed nodes of the set.[00127] Example 45 includes the subject matter of Example 44, and wherein the ephemeral data storage manager is further to write to the allocated ephemeral data storage at one or more of the addresses after receipt of the response.[00128] Example 46 includes the subject matter of any of Examples 44 and 45, and wherein to write to the allocated ephemeral data storage comprises to translate a local data storage address to one of the addresses included in the response.[00129] Example 47 includes the subject matter of any of Examples 44-46, and wherein the ephemeral data storage manager is further to read from the allocated ephemeral data storage. 
[00130] Example 48 includes the subject matter of any of Examples 44-47, and wherein to read from the allocated ephemeral data storage comprises to translate a local data storage address to one of the addresses included in the response.[00131] Example 49 includes the subject matter of any of Examples 44-48, and wherein to send the request to allocate comprises to send the request to an orchestrator server in communication with the set of managed nodes.[00132] Example 50 includes the subject matter of any of Examples 44-49, and wherein to send the request to allocate comprises to send the request to one or more other managed nodes in the set.[00133] Example 51 includes the subject matter of any of Examples 44-50, and wherein the ephemeral data storage manager is further to determine whether to deallocate the allocated ephemeral data storage; and the network communicator is further to send, in response to a determination to deallocate the allocated ephemeral data storage, a message to each managed node where the ephemeral data storage was allocated to deallocate the allocated ephemeral data storage.[00134] Example 52 includes the subject matter of any of Examples 44-51, and wherein the network communicator is further to send a notification to an orchestrator node in communication with the set of managed nodes that the ephemeral data storage has been deallocated.[00135] Example 53 includes the subject matter of any of Examples 44-52, and wherein the ephemeral data storage manager is further to determine a type of ephemeral data storage to allocate, wherein the type is indicative of a target performance of a data storage medium associated with the ephemeral data storage.[00136] Example 54 includes the subject matter of any of Examples 44-53, and wherein to send the request to allocate comprises to include an indication of the determined type of ephemeral data storage in the request.[00137] Example 55 includes the subject matter of any of Examples 44-54, and wherein the network communicator is further to receive a request to allocate ephemeral data storage that is local to the managed node for another managed node in the set; and the ephemeral data storage manager is further to allocate, in response to the received request, the local ephemeral data storage.[00138] Example 56 includes the subject matter of any of Examples 44-55, and wherein to receive the request comprises to receive the request from the other managed node or from an orchestrator server in communication with the set of managed nodes. 
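Examples 46 and 48 (and the corresponding method and means examples) describe translating a local data storage address to one of the remote addresses included in the allocation response. A minimal sketch of one way such a translation table could work is below; the names (RemoteExtent, translate) and the extent layout are illustrative assumptions, not part of the examples.

```python
# Hypothetical sketch of the address translation in Examples 46 and 48: the
# allocation response maps contiguous local address ranges onto remote
# (node, address) pairs, and reads/writes translate through that map.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RemoteExtent:
    local_base: int   # first local address covered by this extent
    length: int       # number of addressable units in the extent
    node_id: str      # managed node that allocated the storage
    remote_base: int  # base address on that node (from the response)

def translate(extents: List[RemoteExtent], local_addr: int) -> Tuple[str, int]:
    """Translate a local data storage address to a (node, remote address) pair."""
    for ext in extents:
        if ext.local_base <= local_addr < ext.local_base + ext.length:
            return ext.node_id, ext.remote_base + (local_addr - ext.local_base)
    raise ValueError(f"local address {local_addr:#x} is not backed by an allocation")

# Example: a response that spread the allocation across two donor nodes.
extents = [
    RemoteExtent(local_base=0x0000, length=0x1000, node_id="node-7", remote_base=0x8000),
    RemoteExtent(local_base=0x1000, length=0x1000, node_id="node-9", remote_base=0x2000),
]
print(translate(extents, 0x1800))  # -> ('node-9', 0x2800)
```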
[00139] Example 57 includes a method for dynamically allocating ephemeral data storage, the method comprising determining, by a managed node as the managed node executes one or more workloads, an amount of ephemeral data storage to allocate from one or more other managed nodes of a set of managed nodes; sending, by the managed node, a request for allocation of the determined amount of ephemeral data storage; and receiving, by the managed node, a response to the request, wherein the response includes addresses of the allocated ephemeral data storage on the one or more other managed nodes of the set.[00140] Example 58 includes the subject matter of Example 57, and further including writing, by the managed node, to the allocated ephemeral data storage at one or more of the addresses after receipt of the response.[00141] Example 59 includes the subject matter of any of Examples 57 and 58, and wherein writing to the allocated ephemeral data storage comprises translating a local data storage address to one of the addresses included in the response.[00142] Example 60 includes the subject matter of any of Examples 57-59, and further including reading, by the managed node, from the allocated ephemeral data storage.[00143] Example 61 includes the subject matter of any of Examples 57-60, and wherein reading from the allocated ephemeral data storage comprises translating a local data storage address to one of the addresses included in the response.[00144] Example 62 includes the subject matter of any of Examples 57-61, and wherein sending the request to allocate comprises sending the request to an orchestrator server in communication with the set of managed nodes.[00145] Example 63 includes the subject matter of any of Examples 57-62, and wherein sending the request to allocate comprises sending the request to one or more other managed nodes in the set.[00146] Example 64 includes the subject matter of any of Examples 57-63, and further including determining, by the managed node, whether to deallocate the allocated ephemeral data storage; and sending, by the managed node and in response to a determination to deallocate the allocated ephemeral data storage, a message to each managed node where the ephemeral data storage was allocated to deallocate the allocated ephemeral data storage.[00147] Example 65 includes the subject matter of any of Examples 57-64, and further including sending, by the managed node, a notification to an orchestrator node in communication with the set of managed nodes that the ephemeral data storage has been deallocated.[00148] Example 66 includes the subject matter of any of Examples 57-65, and further including determining, by the managed node, a type of ephemeral data storage to allocate, wherein the type is indicative of a target performance of a data storage medium associated with the ephemeral data storage.[00149] Example 67 includes the subject matter of any of Examples 57-66, and wherein sending the request to allocate comprises including an indication of the determined type of ephemeral data storage in the request.[00150] Example 68 includes the subject matter of any of Examples 57-67, and further including receiving, by the managed node, a request to allocate ephemeral data storage that is local to the managed node for another managed node in the set; and allocating, by the managed node and in response to the received request, the local ephemeral data storage.[00151] Example 69 includes the subject matter of any of Examples 57-68, and wherein receiving the request 
comprises receiving the request from the other managed node or from an orchestrator server in communication with the set of managed nodes.[00152] Example 70 includes one or more computer-readable storage media comprising a plurality of instructions that, when executed by a managed node, cause the managed node to perform the method of any of Examples 57-69.[00153] Example 71 includes a managed node comprising means for determining, as the managed node executes one or more workloads, an amount of ephemeral data storage to allocate from one or more other managed nodes of a set of managed nodes; means for sending a request for allocation of the determined amount of ephemeral data storage; and means for receiving a response to the request, wherein the response includes addresses of the allocated ephemeral data storage on the one or more other managed nodes of the set.[00154] Example 72 includes the subject matter of Example 71, and further including means for writing to the allocated ephemeral data storage at one or more of the addresses after receipt of the response.[00155] Example 73 includes the subject matter of any of Examples 71 and 72, and wherein the means for writing to the allocated ephemeral data storage comprises means for translating a local data storage address to one of the addresses included in the response.[00156] Example 74 includes the subject matter of any of Examples 71-73, and further including means for reading from the allocated ephemeral data storage.[00157] Example 75 includes the subject matter of any of Examples 71-74, and wherein the means for reading from the allocated ephemeral data storage comprises means for translating a local data storage address to one of the addresses included in the response.[00158] Example 76 includes the subject matter of any of Examples 71-75, and wherein the means for sending the request to allocate comprises means for sending the request to an orchestrator server in communication with the set of managed nodes. 
[00159] Example 77 includes the subject matter of any of Examples 71-76, and wherein the means for sending the request to allocate comprises means for sending the request to one or more other managed nodes in the set.[00160] Example 78 includes the subject matter of any of Examples 71-77, and further including means for determining whether to deallocate the allocated ephemeral data storage; and means for sending, in response to a determination to deallocate the allocated ephemeral data storage, a message to each managed node where the ephemeral data storage was allocated to deallocate the allocated ephemeral data storage.[00161] Example 79 includes the subject matter of any of Examples 71-78, and further including means for sending a notification to an orchestrator node in communication with the set of managed nodes that the ephemeral data storage has been deallocated.[00162] Example 80 includes the subject matter of any of Examples 71-79, and further including means for determining a type of ephemeral data storage to allocate, wherein the type is indicative of a target performance of a data storage medium associated with the ephemeral data storage.[00163] Example 81 includes the subject matter of any of Examples 71-80, and wherein the means for sending the request to allocate comprises means for including an indication of the determined type of ephemeral data storage in the request.[00164] Example 82 includes the subject matter of any of Examples 71-81, and further including means for receiving a request to allocate ephemeral data storage that is local to the managed node for another managed node in the set; and means for allocating, in response to the received request, the local ephemeral data storage.[00165] Example 83 includes the subject matter of any of Examples 71-82, and wherein the means for receiving the request comprises means for receiving the request from the other managed node or from an orchestrator server in communication with the set of managed nodes.
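Examples 30-43 describe an orchestrator-side flow: compare a requested amount and type of ephemeral data storage against the availability information, notify one or more other managed nodes to allocate portions of the request, then notify the requester and update the availability records. A minimal sketch under those assumptions follows; the function and type names are hypothetical, and the greedy split across donor nodes is one arbitrary policy the examples do not mandate.

```python
# Hypothetical sketch of the orchestrator-side flow in Examples 30-43.
from typing import Dict, List, Tuple

# availability[node_id][storage_type] -> units of ephemeral storage available
Availability = Dict[str, Dict[str, int]]

def allocate_ephemeral(avail: Availability, requester: str,
                       amount: int, storage_type: str) -> List[Tuple[str, int]]:
    donors = {n: t.get(storage_type, 0)
              for n, t in avail.items() if n != requester}
    if sum(donors.values()) < amount:          # Examples 37-38: availability check
        raise RuntimeError("requested ephemeral storage is not available")
    plan, remaining = [], amount
    for node, free in donors.items():          # Examples 39-40: split across nodes
        take = min(free, remaining)
        if take:
            plan.append((node, take))           # would send an allocate notification
            avail[node][storage_type] -= take   # Examples 42-43: update availability
            remaining -= take
        if remaining == 0:
            break
    # Examples 31 and 33-34: notify the requester of the amount, type, and
    # addresses of the allocated ephemeral data storage here.
    return plan
```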
The present disclosure relates to a suspicious activity monitoring memory system. A duration of time that a memory device operates in excess of an operational parameter may be tracked via intentional degradation to a transistor. One or more signals that result from the intentional degradation to the transistor may be leveraged to generate alarms and/or be otherwise used in a memory device control circuit and/or system.
1.A device including:a first recording circuit configured to track a first duration of time that a circuit experiences a first operating condition via a first performance degradation; anda first detector circuit configured to generate a first alarm at least in part by comparing an output from the first recording circuit with a first alarm reference voltage.2.The apparatus of claim 1, wherein the first detector circuit is configured to be coupled to a control system.3.The apparatus of claim 2, wherein the control system performs preventive operations in response to receiving the first alarm.4.The apparatus of claim 1, comprising a second detector circuit configured to generate a second alarm at least in part by comparing the output from the first recording circuit with a second alarm reference voltage.5.The apparatus of claim 4, wherein the second alarm reference voltage is at least twice as large as the first alarm reference voltage.6.The apparatus of claim 1, wherein the first performance degradation is tracked via the first recording circuit, in response to the circuit operating under the first operating condition, by allowing a stress voltage to be applied to a transistor gate.7.The apparatus of claim 1, comprising a plurality of additional backup transistors to track the first duration in response to the circuit operating under the first operating condition for an operating time period longer than a single transistor allows.8.The apparatus of claim 1, comprising a second recording circuit configured to track a second duration of time that the circuit experiences a second operating condition via a second performance degradation of a transistor.9.The apparatus of claim 8, wherein the first recording circuit and the second recording circuit are configured to output respective voltage values to the first detector circuit.10.A method including:applying a stress voltage to a gate of a transistor for a duration;outputting a drain voltage of the transistor to a sensitivity detector, wherein an amplitude of the drain voltage is adjusted in response to the duration;determining whether the drain voltage is greater than or equal to a reference alarm voltage; andgenerating an alarm signal in response to determining that the drain voltage is greater than or equal to the reference alarm voltage.11.The method of claim 10, which includes performing a preventive operation in response to the alarm signal.12.The method of claim 10, wherein the stress voltage is configured to be applied in response to a switch closing, wherein the switch is configured to close in response to receiving a suspicious activity signal.13.The method of claim 10, wherein the amplitude of the drain voltage increases in proportion to the duration.14.The method of claim 10, comprising generating a higher priority alarm signal in response to determining that the drain voltage is greater than or equal to a second reference alarm voltage that is greater than the reference alarm voltage.15.A system including:a memory device configured to be operated to exceed a first operating parameter;a recording circuit configured to track a duration of time during which the memory device is operated to exceed the first operating parameter; anda sensitivity detector configured to determine whether to generate an alarm based at least in part on a comparison of an output from the recording circuit with a first alarm reference voltage.16.The system of claim 15, including a control system communicatively coupled to the sensitivity detector,
wherein the control system performs preventive operations in response to receiving the alarm.17.The system of claim 16, wherein the control system is configured to change the operation of the memory device in a manner that reduces the possibility that the memory device is operated to exceed the first operating parameter.18.The system of claim 16, comprising:an additional recording circuit configured to track a duration of time during which the memory device is operated to exceed a second operating parameter; andan additional sensitivity detector configured to determine whether to generate an additional alarm based at least in part on a comparison of an additional output from the additional recording circuit.19.The system of claim 18, wherein the control system is configured to perform additional preventive operations in response to receiving the additional alarm, wherein the preventive operations are different from the additional preventive operations.20.The system of claim 15, wherein the first operating parameter includes a hammering parameter, an access parameter, a temperature parameter, a voltage load parameter, a current load parameter, a stress parameter, or any combination thereof.
Suspicious activity monitoring memory system
Technical field
The present disclosure generally relates to semiconductor devices, and in particular, to a memory device having a data recording mechanism.
Background
Semiconductor devices (e.g., processors, memory systems, etc.) may include semiconductor circuits for storing and/or processing information. An example semiconductor device is a memory device. The memory device may include a volatile memory device, a non-volatile memory device, or a combination device. Memory devices such as dynamic random access memory (DRAM) can use electrical energy to store and/or access data. For example, the memory device may include a DDR random access memory (RAM) device that uses a double data rate (DDR) interface connection scheme (e.g., DDR4, DDR5) for high-speed data transfer.
To facilitate the collection of data on the utilization and real-world operating parameters of the semiconductor device, it is helpful to use a data logger in the memory device to monitor and record such data during use for subsequent retrieval. This data can be used for diagnostic operations and to collect demographic data to improve understanding of the conditions and/or environments under which the product is used. However, data recording in volatile memory devices is quite challenging.
Summary of the invention
According to an aspect of the subject application, an apparatus is provided. The apparatus includes: a first recording circuit configured to track a first duration of time that a circuit experiences a first operating condition via a first performance degradation; and a first detector circuit configured to generate a first alarm at least in part by comparing an output from the first recording circuit with a first alarm reference voltage.
According to another aspect of the subject application, a method is provided. The method includes: applying a stress voltage to a gate of a transistor for a duration; outputting a drain voltage of the transistor to a sensitivity detector, wherein an amplitude of the drain voltage is adjusted in response to the duration; determining whether the drain voltage is greater than or equal to a reference alarm voltage; and generating an alarm signal in response to determining that the drain voltage is greater than or equal to the reference alarm voltage.
According to another aspect of the subject application, a system is provided. The system includes: a memory device configured to be operated to exceed a first operating parameter; a recording circuit configured to track a duration of time during which the memory device is operated to exceed the first operating parameter; and a sensitivity detector configured to determine whether to generate an alarm based at least in part on a comparison of an output from the recording circuit with a first alarm reference voltage.
Description of the drawings
Various aspects of the present disclosure can be better understood by reading the following detailed description and referring to the accompanying drawings, in which:
FIG. 1 is a block diagram of a memory device according to an embodiment of the present disclosure;
FIG. 2 is a block diagram of a suspicious activity detection block according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of an example of the suspicious activity detection block of FIG. 2 according to a second embodiment of the present disclosure;
FIG. 4 is a flowchart for operating the suspicious activity detection block of FIG. 3 according to the second embodiment of the present disclosure;
FIG. 5 is a block diagram of a second example of the suspicious activity detection block of FIG. 2 according to a third embodiment of the present disclosure; and
FIG. 6 is a block diagram of a third example of the suspicious activity detection block of FIG. 2 according to a fourth embodiment of the present disclosure.
Detailed description
As an alternative to non-volatile memory-based data loggers, it may be possible to store relevant operating data in a manner that requires less power and circuit space. For example, if the desired type of operating data relates to the duration for which certain operating parameters were experienced (for example, that the device has been operating within a certain operating temperature range for several hours), a data recording circuit that utilizes time-dependent changes in material properties can be used. One such data recording circuit involves a complementary metal oxide semiconductor (CMOS) device (for example, a p-channel CMOS (PMOS) device or an n-channel (NMOS) device) that undergoes material degradation proportional to the time for which a known voltage is applied to its gate electrode. By using this CMOS-degradation-based data recording circuit to measure the duration for which the device experiences different operating parameters, a large amount of valuable operating data can be obtained with little circuit space and power input.
As described in more detail below, the technology disclosed herein relates to electronic systems, including memory devices, systems with memory devices, and related methods for storing their condition and/or usage information. An electronic system (e.g., a dynamic random access memory (DRAM) device) may include a degradation-based storage circuit (for example, a CMOS-degradation data logger) that is configured to collect and store information about the duration of time that the electronic system experiences different operating characteristics (e.g., device modes) and/or environmental conditions (e.g., device operating temperatures).
The degradation-based storage circuit can be used as a low-cost embedded data logger that records various information related to the use of electronic devices/systems by end users. The recorded usage information (for example, the duration of experiencing different temperature ranges, operating modes, asserted signals, utilized addresses, etc.) can be used for diagnostic operations, improving usage models, revising design specifications, etc.
In some embodiments, the degradation-based storage circuits may each include a trigger circuit corresponding to a desired parameter or parameter combination for which the duration is to be measured, the trigger circuit coupling a predetermined voltage to the gate of the CMOS device so that degradation results during the time that the targeted condition or criterion is active.
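As an illustration only of how such a degradation-based logger can encode a duration: the disclosure states that the degradation is proportional to the time a known voltage is applied to the gate, but specifies no particular law. The sketch below assumes a power-law threshold-voltage shift (dVth ~ A * t^n), a common first-order approximation for NBTI; the constants A and n are invented for illustration, not taken from the source.

```python
# Illustrative model only: a power-law NBTI-style threshold-voltage shift is
# assumed purely to show how a logged output could be inverted into a duration.
A, N = 0.012, 0.2   # assumed fitting constants (volts, dimensionless)

def vth_shift(stress_seconds: float) -> float:
    """Threshold-voltage shift after a cumulative stress duration (assumed law)."""
    return A * stress_seconds ** N

def estimate_duration(shift_volts: float) -> float:
    """Invert the assumed law to recover the cumulative stress duration."""
    return (shift_volts / A) ** (1.0 / N)

shift = vth_shift(3600.0)                 # one hour of cumulative stress
print(shift, estimate_duration(shift))    # ~0.062 V, recovers ~3600 s
```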
By degrading the corresponding CMOS device every time one or more target conditions occur during the operation of the electronic device/system, the cumulative degradation of the CMOS device (which can be measured by a circuit that measures the trigger voltage of the CMOS device) can be used to determine the cumulative duration for which the corresponding trigger condition has been active.
Because the potential degradation of a single CMOS device is not unlimited, various embodiments may provide various connection topologies for coupling multiple CMOS devices (e.g., backup transistors) to one or more trigger circuits, such that a depleted or defective CMOS device can be replaced with a new CMOS device that has not yet degraded. This can allow the monitoring duration to continue longer than a single CMOS device can allow (for example, the operating time period of one transistor). Examples of these different embodiments, as well as other information related to degradation-based monitoring techniques and/or descriptions related to the specific sensing circuitry generally described herein (see FIG. 3), are discussed in the co-pending and co-assigned U.S. Patent Application No. 16/138,900, entitled "A SEMICONDUCTOR DEVICE WITH A DATA-RECORDING MECHANISM," filed on September 21, 2018, which is incorporated herein by reference in its entirety.
Because the cumulative degradation of the CMOS device is predictable, the output from the recording circuit (e.g., a signal corresponding to the trigger voltage of the CMOS device and/or to material degradation proportional to the duration for which the stress voltage was applied to the circuit) can be determined to correspond to different alarm levels and can therefore be used in larger control systems and/or monitoring systems. In this way, the recording circuit output can be provided to a sensitivity detector in a detection block to generate an alarm signal based on a reference voltage in the detection block that defines when the alarm signal is generated. It should be noted that, in addition to the specific examples described herein, any number of recording circuits, outputs, sensitivity detectors, detection blocks, and generated alarms can be used in any suitable combination with each other. The described alarm system can benefit any electronic device/system in which it is provided (e.g., by improving the operation of the electronic device/system) and can therefore provide improved monitoring and/or detection circuitry, at least in the manner described above and at least with respect to current monitoring and detection technologies.
In view of the foregoing, FIG. 1 is a block diagram of an electronic device (for example, a semiconductor memory device such as a DRAM device). The memory device 10 may include an array of memory cells, such as the memory array 12. The memory array 12 may include memory banks (e.g., memory banks 0-15 in the example of FIG. 1). Each memory bank may include word lines (WL), bit lines (BL), and memory cells arranged at the intersections of the word lines and the bit lines. The memory cells can include any of many different memory media types, including capacitive, magnetoresistive, ferroelectric, phase change, and so on. The selection of word lines can be performed by the row decoder 14 and the selection of bit lines can be performed by the column decoder 16.
A sense amplifier (SAMP) can be provided for a corresponding bit line and can be connected to at least one corresponding local input/output (I/O) line pair (LIOT/B), and the local I/O line pair can in turn be coupled, via a transmission gate (TG) operated as a switch, to at least one corresponding main I/O line pair (MIOT/B). The memory array 12 may also include plate lines and corresponding circuitry for managing the operation of the plate lines.
The memory device 10 may use external terminals, including a command terminal and an address terminal coupled to a command bus and an address bus, to receive a command signal (CMD) and an address signal (ADDR), respectively. The memory device 10 may further include a chip select terminal for receiving a chip select signal (CS), clock terminals for receiving clock signals (CK and CKF), data clock terminals for receiving data clock signals (WCK and WCKF), data terminals (DQ, RDQS, DBI and DMI), and power terminals (VDD, VSS, VDDQ and VSSQ).
The command terminal and address terminal can be supplied with address signals and bank address signals from the outside. The address signal and bank address signal supplied to the address terminal may be transmitted to the address decoder 18 through the command address input circuit 22. The address decoder 18 may receive an address signal and supply a decoded row address signal (XADD) to the row decoder 14 and a decoded column address signal (YADD) to the column decoder 16. The address decoder 18 may also receive a bank address signal (BADD) and supply the bank address signal to both the row decoder 14 and the column decoder 16.
The command signal (CMD), address signal (ADDR), and chip select signal (CS) can be supplied from a memory controller to the command terminal and the address terminal. The command signal may indicate various memory commands from the memory controller (for example, including an access command, which may include a read command and/or a write command). The chip select signal may be used to select the memory device 10 to respond to commands and addresses provided to the command terminal and address terminal. When a valid chip select signal is provided to the memory device 10, the command and address can be decoded and the memory operation can be performed. The command signal may be supplied to the command decoder 20 through the command address input circuit 22 as an internal command signal (ICMD). The command decoder 20 may include circuitry for decoding internal command signals to generate the various internal signals and commands for memory operations (such as row command signals for selecting word lines and column command signals for selecting bit lines). The internal command signals can also include output and input activation commands, such as a clock control command (CMDCK). The command decoder 20 may further include one or more registers for tracking various counts or values (e.g., counts of refresh commands received by the memory device 10 and/or self-refresh operations performed by the memory device 10).
When a read command is issued and a row address and a column address are supplied with the read command in time, read data is read from the memory cells specified by the row address and column address in the memory array 12. The read command can be received by the command decoder 20, which can provide internal commands to the I/O circuit 26 so that the read data can be output from the data terminals through the read/write amplifier 28 and the I/O circuit 26 according to the clock signal.
The read data may be provided at a time defined by read latency (RL) information that can be programmed in the memory device 10 (such as in a mode register (not shown in FIG. 1)). The read latency information can be defined in terms of clock cycles of the clock signal (CK). For example, the read latency information may be a number of clock cycles of the clock signal (CK), after the read command is received by the memory device 10, at which the associated read data is provided.
When a write command is issued and a row address and a column address are supplied with the write command in time, the data terminals may be supplied with write data according to the clock signals (for example, WCK and WCKF). The write command may be received by the command decoder 20, which may provide an internal command to the I/O circuit 26 so that the write data is received by data receivers in the I/O circuit 26 and supplied to the memory array 12 through the I/O circuit 26 and the read/write amplifier 28. The write data can be written in the memory cells specified by the row address and the column address. The write data can be provided to the data terminals at a time defined by write latency (WL) information. The write latency information can be programmed in the memory device 10, such as in a mode register (not shown in FIG. 1). The write latency information can be defined in terms of clock cycles of the clock signal (CK). For example, the write latency information may be a number of clock cycles of the clock signal (CK), after the write command is received by the memory device 10, at which the associated write data is received.
The power supply potentials (VDD and VSS) can be supplied to the power supply terminals. These power supply potentials (VDD and VSS) can be supplied to the internal voltage generator circuit 30. The internal voltage generator circuit 30 can generate various internal potentials (VPP, VOD, VARY, VPERI, etc.) based on the power supply potentials (VDD and VSS). The internal potential (VPP) can be used in the row decoder 14, the internal potentials (VOD and VARY) can be used in the sense amplifiers included in the memory array 12, and the internal potential (VPERI) can be used in many other circuit blocks.
The power supply potential (VDDQ) can also be supplied to the power supply terminals. The power supply potential (VDDQ) may be supplied to the I/O circuit 26 together with the power supply potential (VSS). In an embodiment of the present technology, the power supply potential (VDDQ) may be the same potential as the power supply potential (VDD). In another embodiment of the present technology, the power supply potential (VDDQ) may be a potential different from the power supply potential (VDD). However, a dedicated power supply potential (VDDQ) can be used for the I/O circuit 26 so that power supply noise generated by the I/O circuit 26 does not propagate to other circuit blocks.
The clock terminals and the data clock terminals can be supplied with external clock signals and complementary external clock signals. The external clock signals (CK, CKF, WCK, and WCKF) can be supplied to the clock input circuit 32. Some clock signals (CK and CKF, WCK and WCKF) can be complementary. Complementary clock signals have opposite clock levels and transition between opposite clock levels at the same time.
For example, when the clock signal is at a low clock level, the complementary clock signal is at a high clock level, and when the clock signal is at a high clock level, the complementary clock signal is at a low clock level. In addition, when the clock signal transitions from a low clock level to a high clock level, the complementary clock signal transitions from a high clock level to a low clock level, and when the clock signal transitions from a high clock level to a low clock level, the complementary clock signal transitions from a low clock level to a high clock level.
An input buffer included in the clock input circuit 32 can receive the external clock signals. For example, when enabled by a signal (CKE) from the command decoder 20, the input buffer may receive the clock signals (CK, CKF, WCK, and WCKF). The clock input circuit 32 may receive the external clock signals to generate an internal clock signal (ICLK). The internal clock signal may be supplied to the internal clock circuit 34. The internal clock circuit 34 may provide various phase- and frequency-controlled internal clock signals based on the received internal clock signal and a clock enable signal (CKE) from the command address input circuit 22. For example, the internal clock circuit 34 may include a clock path (not shown in FIG. 1) that receives the internal clock signal and provides various clock signals to the command decoder 20. The internal clock circuit 34 may further provide input/output (I/O) clock signals. An I/O clock signal used as a timing signal may be supplied to the I/O circuit 26 for determining the output timing of read data and the input timing of write data. The I/O clock signals can be provided at multiple clock frequencies so that data can be output from and/or input to the memory device 10 at different data rates. A higher clock frequency may be desirable when high memory speed is desired, and a lower clock frequency may be desirable when lower power consumption is desired. The internal clock signal can also be supplied to the timing generator 36 and used to generate various internal clock signals.
The memory device 10 may be coupled, as memory for a host device, to any suitable electronic device that uses at least a part of the memory for temporary and/or persistent storage of information. For example, the host device may include a desktop or portable computer, a server, a handheld device (for example, a mobile phone, a tablet computer, a digital reader, a digital media player), or at least a part of a processing circuit system, such as a central processing unit, a coprocessor, a dedicated memory controller, etc. The host device can also be a networked device (for example, a switch or a router), a digital image, audio and/or video recorder, a vehicle, an electrical appliance, a toy, or any of a variety of other products. In one embodiment, the host device may be directly connected to the memory device 10, but in other embodiments, the host device may be indirectly connected to the memory device 10 (e.g., through a network connection or through communication with an intermediate device).
The memory device 10 may include a data recording circuit 38 (data logger) for recording data from one or more sensors 40 and/or from other components of the device (e.g., the command address input circuit 22 and/or one or more of the decoders 14/16/18/20).
The data recording circuit 38 may include complementary metal oxide semiconductor (CMOS) devices (for example, p-channel CMOS (PMOS) devices or n-channel (NMOS) devices) that undergo degradation (for example, negative bias temperature instability (NBTI)-based degradation and/or channel hot carrier (CHC)-based degradation). In this way, the data recording circuit 38 is a network of sensors, measurement circuitry, and recording circuitry. The memory device 10 may further adjust and/or change the amount of degradation that occurs each time to compensate for other factors or conditions (for example, operating temperature) that affect the degradation. In some embodiments, the memory device 10 may adjust the amount of degradation by adjusting the stress voltage used to degrade the CMOS device. In some embodiments, the memory device 10 may adjust the duty cycle of the stress input used to degrade the CMOS device. Although shown as a separate functional block in FIG. 1, the memory device 10 may include the data recording circuit 38 within any of the other components described above (such as the command address input circuit 22, the I/O circuit 26, etc.). Furthermore, the memory device 10 may include other connections for the data recording circuit 38. For example, the data recording circuit 38 may be coupled to other circuits, such as the command address input circuit 22, one or more of the decoders 14/16/18/20, etc., to receive trigger conditions from those circuits.
In view of the foregoing, the data recording circuit 38 may generally be referred to as a recording circuit that receives suspicious activity (SA) signals from one or more sensors 40. Although described in terms of SA detection and general preventive operations, it should be understood that the data recording circuit 38 can record and/or track the output from the sensors 40 for various applications and/or purposes and can record and/or track parameters using any of the aforementioned variations.
This relationship is summarized in FIG. 2, which is a block diagram of the suspicious activity (SA) detection block 50. The example of the SA detection block 50 of FIG. 2 includes a recording circuit 52 and detection circuitry 54. The recording circuit 52 may be any suitable data recording and/or logging circuitry suitable for use with a memory device (e.g., the memory device 10). The detection circuitry 54 can detect whether a specific input or a specific operating condition of the memory device 10 and/or the memory array 12 is at an appropriate level to cause an alarm to be generated. The specific input may include and/or be associated with row hammering parameters, access parameters, temperature parameters, voltage load parameters, current load parameters, stress parameters, etc. In this way, the detection circuitry 54 may include detectors with different sensitivities for detecting different alarm levels or urgencies. For example, the detection circuitry 54 includes a high-sensitivity detector 56A, a medium-sensitivity detector 56B, and a low-sensitivity detector 56C, each of which can output a separate alarm corresponding to the respective urgency of the input signal.
The suspicious activity (SA) signal can be received via the recording circuit 52. In response to receiving the SA signal, the recording circuit 52 may record one or more parameters related to the signal and change its output to the detection circuitry 54 based at least in part on the one or more parameters as recorded over time.
The detection circuitry 54 can receive the output from the recording circuit 52 and determine whether the value of the output is sufficiently high to activate one of its alarms.
An example of the SA detection block 50 is shown in FIG. 3, which is a block diagram of a first example SA detection block 50A. The SA detection block 50A includes an example of a recording circuit 52A that uses at least some of the data recording circuit 38 to record one or more parameters associated with the SA signal. As depicted, the SA signal actuates the switch 70 in the recording circuit 52A. The switch 70 may be any suitable device that is actuated in response to a control signal. Therefore, the parameter of the SA signal recorded by this example of the recording circuit 52A is the duration for which the SA signal is enabled (for example, logic high and/or "1").
When the switch 70 is actuated, a stress voltage 72 is applied to the gate of the transistor 74. Once the SA signal is disabled (e.g., logic low and/or "0"), the switch 70 returns to its default state and the application of the stress voltage 72 to the gate of the transistor 74 stops. Therefore, for the duration that the SA signal is enabled, the gate of the transistor 74 receives the stress voltage 72.
It should be noted that although the transistor 74 is depicted as a single transistor, the transistor 74 may include any suitable components that can be degraded by a predictable amount. For example, the transistor 74 may represent a CMOS degradation-based sensor that includes one or more PMOS and/or NMOS devices. When the transistor 74 is a PMOS device, the transistor 74 can be degraded according to NBTI, and the gate of the PMOS device can be coupled to the stress voltage 72 and/or to intermediate logic (and/or components) between the PMOS device and the stress voltage 72, such as the switch 70. In addition, the drain of the PMOS device may be coupled to the detection circuitry 54, a resistor to ground, a feedback line to intermediate logic and/or components, etc. When the transistor 74 is an NMOS device, the transistor 74 may be degraded according to CHC. In these cases, the gate of the NMOS device may be coupled to the stress voltage 72 and/or to intermediate logic (and/or components) between the NMOS device and the stress voltage 72, the drain may be coupled to the stress voltage 72, and the source may be coupled to the detection circuitry 54, a resistor to ground, a feedback line to intermediate logic and/or components, and so on.
Furthermore, in some embodiments, the transistor 74 may include an NMOS device that is degraded according to channel-initiated secondary electron (CHISEL) generation. In this case, the gate of the NMOS device can be directly coupled to the drain, wherein both the gate and the drain are directly coupled to the stress voltage. The source of the NMOS device can be coupled to a relatively large resistor, thereby allowing a relatively large drain-body voltage (e.g., a high electric field).
However, since the large resistor is coupled to the source, the current used to degrade the NMOS device is kept relatively small, thereby enabling the power consumption to be kept relatively low during the degradation.
In each of these described examples, the transistor 74 may degrade by a predictable amount based at least in part on the amount of time the stress voltage 72 is received via the gate. The output from the transistor 74 (e.g., the voltage output from the drain and/or source of the transistor 74) can generally be based at least in part on the cumulative amount of degradation experienced by the transistor 74 while the drive voltage (V) (e.g., see arrow 76) remains substantially constant. Therefore, a specific sensitivity detector 56D of the detection circuitry 54 may use the output from the transistor 74 to determine whether an alarm is generated in response to the total time for which the SA signal has been activated. For example, the sensitivity detector 56D may include a comparison circuit (e.g., comparator 84) that compares the output from the transistor 74 with a reference value (e.g., reference voltage 86 or a suitable reference current) to determine whether the output from the transistor 74 is large enough to activate an alarm.
When the suspicious activity occurs for longer than a suitable amount of time, which can be defined by the sensitivity detector 56D and/or the reference value, the generated alarm can be used to notify an operator. In this way, the alarm output from the comparator 84 can be used to drive an alarm circuit, can be used as a signal to initiate notification to an operator, can be received by a control system and/or control circuit that changes the operation of the memory device 10 in response to the alarm, and so on.
FIG. 4 is a flowchart of a method 100 for operating the SA detection block 50. Generally, the method 100 includes the SA detection block receiving a suspicious activity (SA) signal (block 102), applying a stress voltage (block 104), outputting an analog voltage to the sensitivity detector based at least in part on the total duration for which the stress voltage was previously applied to the transistor (block 106), comparing the output voltage with at least one reference voltage (block 108), generating an alarm in response to the result of the comparison (block 110), and performing preventive and/or precautionary operations (block 112). It should be understood that although the specific operations of method 100 are described in a specific order, these operations may be performed in any suitable order. In addition, although the method 100 is described as being performed by the SA detection block 50, it should be understood that any suitable system and/or circuitry (for example, a circuit combined with a control system) can perform the described operations.
At block 102, the SA detection block 50 may receive a suspicious activity (SA) signal at the recording circuit 52. In response to receiving the SA signal, the SA detection block 50 may, at block 104, apply a stress voltage 72 to the transistor, such as at least to the gate (e.g., gate contact) of the transistor 74. Application of the stress voltage 72 can degrade the transistor 74 by a predictable amount.
At block 106, the analog output voltage from the recording circuit 52 (e.g., the voltage output from the transistor 74) may be transmitted to the sensitivity detector 56.
The voltage value of the output voltage may be based at least in part on the cumulative duration of the previous application of the stress voltage 72 to the transistor 74.
At block 108, the SA detection block 50 may compare the output voltage with at least one reference voltage 86, using the sensitivity detector 56 (including the comparator 84). The reference voltage 86 may be selected to correspond to the amount of degradation caused by application of the stress voltage 72 to the transistor 74 for a specific duration. In this way, where a short duration of applied stress voltage warrants an alarm or monitoring, the sensitivity detector 56 can provide, generate, and/or trigger an alarm when the output voltage exceeds a relatively low reference voltage 86, and where only a long duration of applied stress voltage warrants an alarm or monitoring, the sensitivity detector 56 can provide, generate, and/or trigger an alarm when the output voltage exceeds a relatively high reference voltage 86.
At block 110, the SA detection block 50 may generate an alert in response to the result of the comparison at block 108. The alarm may be a voltage signal from the comparator 84 indicating that the output voltage from the recording circuit 52 is greater than the reference voltage 86. The alarm may be output from the comparator 84 and received by a control system inside the memory device 10 (e.g., inside the DRAM) and/or outside the memory device 10 (e.g., outside the DRAM). The control system may perform preventive actions in response to receiving the alert. A control system inside the memory device 10 can respond to the alarm by changing one or more operations of the memory device 10, while a control system external to the memory device 10 may respond to the alarm by performing one or more external operations with respect to the memory device 10. For example, the control system can set off additional alarms to be handled by an operator (e.g., audible alarms, visual alarms, etc.). The control system may additionally or alternatively perform additional computing actions, such as generating email notifications, pop-up notifications, or otherwise computer-generated alarms to be presented to the operator through a graphical user interface (GUI). In addition, the control system can respond by tracking alarms over time and using this information to monitor system- or network-wide behavior, so that alarms from different memory devices 10 can be correlated and/or compared to improve network monitoring.
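A behavioral sketch of blocks 106-110 follows. The disclosure specifies only that a degradation-dependent output voltage is compared against one or more reference voltages 86 to generate alarms of different urgencies; the saturating voltage model and the threshold values below are invented for illustration.

```python
# Behavioral sketch of method 100, blocks 106-110. Voltage values and the
# mapping from cumulative stress time to output voltage are assumptions.
from typing import List

def output_voltage(cumulative_stress_s: float) -> float:
    # assumed monotone, saturating relation between degradation and output
    return 1.2 * (cumulative_stress_s / (cumulative_stress_s + 1800.0))

REFERENCES = {"low": 0.3, "medium": 0.6, "high": 0.9}  # assumed thresholds

def check_alarms(cumulative_stress_s: float) -> List[str]:
    v_out = output_voltage(cumulative_stress_s)        # block 106
    return [level for level, v_ref in REFERENCES.items()
            if v_out >= v_ref]                         # blocks 108-110

for t in (600, 3600, 14400):   # SA signal enabled for 10 min, 1 h, 4 h total
    print(t, check_alarms(t))  # progressively more urgent alarms fire
```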
At block 112, the SA detection block 50 may sometimes perform preventive or precautionary actions in response to the alarm. In these cases, the SA detection block 50 may include a response circuit (not shown in the figures of the SA detection block 50). The operation of the response circuit can reduce the possibility that the memory device 10 is operated to exceed the first operating parameter. The SA detection block 50, via the response circuit, can perform different operations in response to different alarms, and any suitable combination of alarms and operations can be used in the systems and methods described herein.
Furthermore, in response to determining that the recording circuit 52 has operated long enough to undergo degradation or is otherwise operating undesirably, some instances of the SA detection block 50 may reset the recording circuit 52. The recording circuit 52 can be reset by reversing the degradation of the transistor 74, replacing the transistor 74 with a fresh and/or new transistor, and the like.
In view of the foregoing, FIG. 5 is a block diagram of a second example of the SA detection block 50, that is, the SA detection block 50C. In some cases, a recording circuit 52 of any suitable structure can be used with the detection circuitry 54, and the detection circuitry 54 may have any suitable number of sensitivity detectors 56. For example, the detection circuitry 54 may have two or more sensitivity detectors 56.
In the depicted example, the SA detection block 50C shares a recording circuit 52 between two sensitivity detectors 56. Each sensitivity detector 56 can monitor a different parameter to be exceeded. For example, the first sensitivity detector 56E may compare the output voltage with a first reference voltage 86 indicating that the stress voltage (for example, the stress voltage 72 or another suitable stress voltage) has been applied for a duration exceeding half of the recording period, and the second sensitivity detector 56F can compare the output voltage with a second reference voltage 86 indicating that the stress voltage 72 has been applied for 75% of the recording period. In this example, the stress voltage 72 is applied in response to a row hammering operation being performed on the memory device 10. Detecting the row hammering operation may include detecting repeated accesses to the same row address of the memory device 10 within a certain duration. In this way, the SA signal can be transmitted in response to the detection of suspicious activity addressing a specific row, which can be used to deliberately degrade the transistor 74 to capture the duration over which the specific row addressing occurs. Further, the reference voltage 86 may correspond to a voltage level indicating the expected output voltage of a transistor 74 that has been intentionally degraded for a certain duration (e.g., a sensing period or 75% of a sensing duration). Therefore, the SA detection block 50C can monitor for specific row hammering behavior that exceeds a specific duration.
Using a similar approach, other parameters corresponding to suspicious activity can be tracked over time through the SA detection block 50. For example, a memory device 10 operating at a temperature greater than the maximum recommended operating temperature for at least half the sensing duration (e.g., greater than or equal to 50% of the sensing duration) may indicate suspicious activity and may be desirable to monitor. It should be noted that the sensing duration can be any suitable finite amount of time, including the total operating time of the memory device 10 or of any suitable component of the memory device 10.
As another example, a memory device 10 operating at a voltage greater than the maximum recommended operating voltage for more than half the sensing duration (e.g., greater than or equal to 50% of the sensing duration) may indicate suspicious activity and may be desirable to monitor.
FIG. 6 is a block diagram of a third example of the SA detection block 50, that is, the SA detection block 50D. The SA detection block 50D may include any number of recording circuits 52 (52A, 52B, ..., 52C) and any number of corresponding sensitivity detectors (56A1, 56B1, 56C1, 56A2, ..., 56C3) in the detection circuitry 54. In this way, various parameters can be used as enabling inputs to the recording circuits 52, so that application of the stress voltage (for example, the stress voltage 72 or another suitable stress voltage) is correlated with the duration of a specific operation or other corresponding operation of the memory device 10.
For example, the SA signal may be generated at least in part through a pre-threshold processing operation and/or other conversion operations (for example, two or more conditions may have to be met before the SA signal is allowed to be generated). In this way, the SA signal can correspond to an over-temperature condition, where the presence of the SA signal can indicate that a threshold temperature has been exceeded for a certain length of time during which the memory device 10 is permitted to operate at that temperature (permitting temperature monitoring even if a directly sensed temperature measurement output may not be suitable for transmission to the recording circuit 52). In the described example, counting circuitry tracks the duration for which the memory device 10 undergoes over-temperature operation while the over-temperature condition is monitored, so that when the memory device 10 operates at the over-temperature for too long (for example, past a threshold amount of the duration), the occurrence of that specific condition can initiate the generation of the SA signal and the transmission of the SA signal to the recording circuit 52.
The sensitivity detectors 56 can output respective alarms. When multiple sensitivity detectors 56 are used, the output alarms may correspond to different alarm intensities (for example, low, medium, high, etc.), and the alarm intensities may correspond to different operations or to different signaling techniques used to indicate the presence of the alarm condition to the operator. For example, a low alarm may at least partially energize a light indicator, while a high alarm may at least partially energize that light indicator and another light indicator and/or issue an audible alarm, etc. In addition, in some embodiments, the alarm intensity can be used to initiate the generation of email notifications, pop-up notifications, or otherwise computer-generated alarms to be presented to the operator through a graphical user interface (GUI). These previously described responses can be alarm responses outside of the DRAM. In some cases, the alarm response may include changing operation inside the DRAM, for example, adjusting a voltage, adjusting a timing delay, etc.
In some embodiments, the sensitivity detectors 56 may use reference voltages 86 defined relative to other voltages and/or other reference voltages 86.
In this way, a first sensitivity detector 56 can use a reference voltage 86 that is at least twice the voltage value used by a second sensitivity detector 56, a reference voltage 86 that is 1.5 times the voltage value used by the second sensitivity detector 56, or any suitable combination of reference voltages. Since the degradation rate of the transistor 74 is not linear with time, the reference voltages 86 can be adjusted to take the non-linearity into account (for example, the reference voltages 86 may not be exact multiples of each other). In addition, in some embodiments, the high-sensitivity detector 56 may output an alarm associated with a higher priority than the alarm generated by the low-sensitivity detector 56. The relative priority between alarms can change the way the control system receiving the alarms adjusts the operation of the memory device 10 in response to the alarms.
Therefore, the technical effects of the present disclosure include facilitating improved monitoring operations of the memory device to prevent undesired operation and/or unauthorized access of the memory device. These techniques describe systems and methods for monitoring the operation of a memory device through a recording circuit that utilizes the cumulative degradation behavior of a semiconductor device. As described, this disclosure discusses the use of CMOS devices for these monitoring operations. However, any suitable semiconductor device that can be subjected to cumulative degradation can be used in these systems and methods for monitoring memory devices. The output from the recording circuit may have a value proportional to, or otherwise related to, the cumulative degradation experienced by the semiconductor device. In this way, the recording circuit output can be provided to the sensitivity detector to monitor the operation of the memory device and generate an alarm signal when warranted. The described alarm system can benefit any electronic device/system in which it is provided (e.g., by improving the operation of the electronic device/system) and can therefore provide improved monitoring and/or detection circuitry, at least in the manner described above and at least with respect to current monitoring and detection technologies.
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
For example, in the above description, the memory device is described in the context of a DRAM device. However, in addition to or instead of DRAM devices, memory devices configured according to other embodiments of the present technology may include other types of suitable storage media, such as devices incorporating NAND-based and/or NOR-based non-volatile storage media (for example, NAND flash memory), magnetic storage media, phase change storage media, ferroelectric storage media, and so on.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical.
Further, if any claims appended to the end of this specification contain one or more elements designated as "means for [perform]ing [a function]..." or "step for [perform]ing [a function]...", it is intended that such elements be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements not be interpreted under 35 U.S.C. 112(f).
A system and method for fabricating metal insulator metal capacitors while managing semiconductor processing yield and increasing capacitance per area are described. A semiconductor device fabrication process places a polysilicon layer on top of an oxide layer which is on top of a metal layer. The process etches trenches into areas of the polysilicon layer where the repeated trenches determine a frequency of an oscillating wave structure to be formed later. The top and bottom corners of the trenches are rounded. The process deposits a bottom metal, a dielectric, and a top metal on the polysilicon layer both on areas with the trenches and on areas without the trenches. A series of a barrier metal and a second polysilicon layer is deposited on the oscillating structure. The process completes the MIM capacitor with metal nodes contacting each of the top metal and the bottom metal of the oscillating structure.
1. A semiconductor device manufacturing process includes: forming an oxide layer on top of a first metal layer; depositing a polysilicon layer on top of the oxide layer; forming a photoresist layer on top of the polysilicon layer; etching the photoresist layer at a plurality of substantially equally spaced positions; etching trenches into the polysilicon layer where it is not protected by the photoresist layer, wherein the trenches occur at the plurality of approximately equally spaced locations; stripping the photoresist layer; and depositing a series of layers on the polysilicon layer to form a metal-insulator-metal capacitor with an oscillating pattern, the series of layers including a bottom metal layer, a dielectric layer, and a top metal layer. 2. The semiconductor device manufacturing process of claim 1, wherein the process further comprises rounding top corners and bottom corners of the trenches before depositing the series of layers. 3. The semiconductor device manufacturing process of claim 2, wherein the rounding includes performing one or more cycles of relatively high temperature oxidation and oxidation removal of the polysilicon layer. 4. The semiconductor device manufacturing process of claim 1, wherein the process further comprises depositing an additional polysilicon layer and a barrier metal layer on the top metal layer. 5. The semiconductor device manufacturing process according to claim 4, wherein the process further comprises etching each layer above the bottom metal layer at a first position without an oscillating pattern, wherein the first position is used for a first through hole in contact with the bottom metal layer. 6. The process for manufacturing a semiconductor device according to claim 4, wherein the process further comprises placing a second through hole at a second position having an oscillating pattern, wherein the second through hole contacts the top metal layer through the barrier metal layer and the additional polysilicon layer. 7. The semiconductor device manufacturing process according to claim 4, wherein the oscillating pattern is located at approximately a middle position between the first metal layer and a second metal layer deposited on via holes used to form nodes of the metal-insulator-metal capacitor. 8. The semiconductor device manufacturing process according to claim 6, wherein the process further comprises: growing an additional oxide layer on the oscillating pattern; and etching trenches for vias in the additional oxide layer. 9. A semiconductor device includes: a first metal layer; an oxide layer on top of the first metal layer; a polysilicon layer on top of the oxide layer; and a series of layers on the polysilicon layer, the series of layers including a bottom metal layer, a dielectric layer, and a top metal layer, wherein the polysilicon layer includes a region having trenches to form an oscillating pattern for the metal-insulator-metal capacitor at a given frequency, the given frequency being equal to the frequency of occurrence of the trenches in the polysilicon layer. 10. The semiconductor device of claim 9, wherein top corners and bottom corners of the trenches in the polysilicon layer are rounded. 11. The semiconductor device of claim 9, further comprising a series of an additional polysilicon layer and a barrier metal layer on the top metal layer. 12. The semiconductor device according to claim 11, wherein the oscillating pattern includes a first position in which the oscillating pattern is interrupted by a non-oscillating pattern.
13. The semiconductor device according to claim 12, wherein the oscillating pattern includes a second position in which the oscillating pattern is interrupted by a non-oscillating pattern. 14. The semiconductor device according to claim 13, further comprising a first via hole and a second via hole, the first via hole being in contact with the bottom metal layer at the first position, and the second via hole making contact with the top metal layer through the barrier metal layer and the additional polysilicon layer at the second position. 15. The semiconductor device according to claim 11, wherein the oscillating pattern is located at approximately a middle position between the first metal layer and a second metal layer deposited on via holes used to form nodes of the metal-insulator-metal capacitor. 16. A non-transitory computer-readable storage medium storing program instructions, wherein the program instructions for performing a semiconductor manufacturing process are executable by a processor to: form an oxide layer on top of a first metal layer; deposit a polysilicon layer on top of the oxide layer; form a photoresist layer on top of the polysilicon layer; etch the photoresist layer at a plurality of substantially equally spaced positions; etch trenches into the polysilicon layer where it is not protected by the photoresist layer, wherein the trenches occur at the plurality of approximately equally spaced locations; strip the photoresist layer; and deposit a series of layers on the polysilicon layer, the series of layers including a bottom metal layer, a dielectric layer, and a top metal layer, to form a metal-insulator-metal capacitor with an oscillating pattern. 17. The non-transitory computer-readable storage medium of claim 16, wherein the program instructions are further executable by a processor to round the top and bottom corners of the trenches before depositing the series of layers. 18. The non-transitory computer-readable storage medium of claim 17, wherein the program instructions are further executable by a processor to deposit an additional polysilicon layer and a barrier metal layer on the top metal layer. 19. The non-transitory computer-readable storage medium of claim 18, wherein the program instructions are further executable by a processor to etch each layer above the bottom metal layer at a first position without an oscillating pattern, wherein the first position is used for a first through hole in contact with the bottom metal layer. 20. The non-transitory computer-readable storage medium of claim 18, wherein the program instructions are further executable by a processor to place a second through hole at a second position having an oscillating pattern, wherein the second through hole contacts the top metal layer through the barrier metal layer and the additional polysilicon layer.
Oscillating capacitor architecture for improving capacitance in polysilicon. BACKGROUND. Description of the Related Art. With the advancement of semiconductor manufacturing processes and the reduction in on-chip geometries, semiconductor chips provide more functionality and performance while consuming less space. Despite many advances, modern processing and integrated circuit design techniques still present design issues that may limit potential benefits. For example, as the number and size of passive components used in a design increase, the area consumed by these components also increases. Impedance matching circuits, harmonic filters, decoupling capacitors, bypass capacitors, etc. are examples of these components. Many manufacturing processes use metal insulator metal (MIM) capacitors to provide capacitance in on-chip integrated circuits and off-chip integrated passive device (IPD) packages. A MIM capacitor is formed with two parallel metal plates separated by a dielectric layer. In general, each of the two metal plates and the dielectric layer is parallel to the surface of the semiconductor substrate. Such MIM capacitors are used as decoupling capacitors in various integrated circuits, including oscillators and phase-shift networks in radio frequency (RF) integrated circuits, to reduce noise in mixed-signal integrated circuits and microprocessors, as bypass capacitors near active devices in microprocessors to limit parasitic inductance, and so on. MIM capacitors can also be used as memory cells in dynamic RAM. Manufacturing MIM capacitors is a challenging process. The choice of materials for the dielectric layer is limited because many materials for the dielectric layer can interdiffuse with the metal layers used for the parallel metal plates. This limited choice also reduces the capacitance per unit area that would otherwise be achievable. In addition, the dielectric layer is typically larger than the gate oxide layer used for active devices such as transistors. As a result, MIM capacitors are relatively large and sometimes larger than the transistors used on the die. When the MIM capacitor size is increased to provide the required capacitance per unit area (density), less space is available for other components on the device. In addition, when etching to form a space for a through hole to connect the parallel metal plates of the MIM capacitor, other through holes must be connected to a lower metal layer below the MIM capacitor; therefore, the chance of an etch stop problem increases. In view of the above, there is a need for effective methods and systems for manufacturing metal insulator metal capacitors while managing semiconductor processing yield and increasing capacitance per unit area. BRIEF DESCRIPTION OF THE DRAWINGS. FIG. 1 is a schematic diagram of a cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 2 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 3 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 4 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 5 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured.
FIG. 6 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 7 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 8 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 9 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 10 is a generalized view of another cross-sectional view of a portion of a semiconductor passive component being manufactured. FIG. 11 is a schematic diagram of a cross-sectional view of a manufactured semiconductor metal-insulator-metal (MIM) capacitor having an oscillating pattern. FIG. 12 is a generalized view of another cross-sectional view of a manufactured semiconductor metal-insulator-metal (MIM) capacitor having an oscillating pattern. FIG. 13 is a generalized view of another cross-sectional view of a manufactured semiconductor metal-insulator-metal (MIM) capacitor having an oscillating pattern. FIG. 14 is a schematic diagram of a method for manufacturing a semiconductor metal-insulator-metal (MIM) capacitor having an oscillating pattern. FIG. 15 is a schematic diagram of a method for manufacturing a semiconductor metal-insulator-metal (MIM) capacitor having an oscillating pattern. While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereof are not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the invention as defined by the appended claims. DETAILED DESCRIPTION. In the following description, numerous specific details are set forth to provide a thorough understanding of the invention. However, one of ordinary skill in the art will recognize that the invention may be practiced without these specific details. In some cases, well-known circuits, structures, and techniques have not been shown in detail so as not to obscure the present invention. In addition, it should be understood that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale; for example, the dimensions of some elements may be exaggerated relative to other elements. Systems and methods for manufacturing metal insulator metal capacitors while managing semiconductor processing yields and increasing capacitance per unit area are contemplated. In various embodiments, the semiconductor device manufacturing process places an oxide layer on top of a metal layer, and then places a polysilicon layer on top of the oxide layer. A photoresist layer is formed on top of the polysilicon layer and etched at repeated intervals that determine the frequency of the oscillating wave structure to be formed later and used as a metal-insulator-metal (MIM) capacitor. In various embodiments, the oscillating wave structure is approximately sinusoidal in nature. In other embodiments, the oscillating wave structure is not sinusoidal; instead, the oscillating wave structure may approximate a square wave, a sawtooth wave, or another oscillating wave shape.
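To see why the oscillating structure raises capacitance per unit footprint, note that a corrugated plate simply has more surface area than a flat one over the same footprint. The sketch below is a first-order estimate under an idealized sinusoidal profile; the pitch and depth values are illustrative assumptions, not figures from the disclosure. It computes the arc-length enhancement of one period, and the parallel-plate capacitance per unit footprint scales by roughly the same factor.

# A minimal sketch (not from the patent) estimating how much surface area --
# and hence parallel-plate capacitance per unit footprint -- the oscillating
# pattern adds relative to a flat MIM capacitor. The sinusoidal profile and
# the pitch and depth values are illustrative assumptions only.
import numpy as np

def area_multiplier(pitch_nm: float, depth_nm: float, samples: int = 10_000) -> float:
    """Arc length of one period of z(x) = (depth/2) * sin(2*pi*x/pitch),
    divided by the flat length 'pitch' (per-axis enhancement factor)."""
    x = np.linspace(0.0, pitch_nm, samples)
    z = 0.5 * depth_nm * np.sin(2.0 * np.pi * x / pitch_nm)
    dz = np.gradient(z, x)
    arc = np.trapz(np.sqrt(1.0 + dz ** 2), x)
    return arc / pitch_nm

# Hypothetical geometry: an 80 nm period (the trench width 110 plus the
# pitch 120 of FIG. 1) and a 100 nm trench depth, corrugated along one axis.
m = area_multiplier(pitch_nm=80.0, depth_nm=100.0)
print(f"area (and capacitance) multiplier ~ {m:.2f}x over a flat plate")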
In various embodiments, the plurality of trenches are formed at approximately equal intervals by etching. One of a number of lithographic techniques is used to reduce the pitch of the trenches (increase the frequency). The process etches trenches into regions of the polysilicon layer that are not protected by the photoresist layer, and then strips the photoresist layer. The top and bottom corners of the trenches are rounded to form the basis of the oscillating wave structure. In some embodiments, the process performs one or more cycles of relatively high temperature oxidation and oxide removal of the polysilicon layer to round the corners. The process deposits a series of layers on the polysilicon layer, including the bottom metal, dielectric, and top metal of a MIM capacitor. The deposition of the series of layers is performed both on regions with trenches and on regions without trenches. A second polysilicon layer is deposited on the top metal layer of the oscillating wave structure, and then a barrier metal layer is deposited on the second polysilicon layer. The locations for the vias are etched through the multiple layers of the barrier metal, the second polysilicon layer, the top metal of the oscillating pattern, and the dielectric layer of the oscillating pattern. An insulating oxide layer is deposited over the barrier metal, and trenches are etched into the insulating oxide at the locations for the vias. One via is in contact with the bottom metal of the oscillating pattern, and the second via is in contact with the top metal of the oscillating pattern through the barrier metal and the polysilicon layer above the top metal. In the following description of FIGS. 1 to 15, manufacturing steps for a metal-insulator-metal (MIM) capacitor, performed while managing semiconductor processing yield and increasing capacitance per unit area, are described. Turning to FIG. 1, a generalized block diagram of a cross-sectional view of a portion of a semiconductor passive component being manufactured is shown. Unlike an active component such as a field effect transistor, a passive component does not have its conduction controlled by another signal such as a voltage signal; passive components have no threshold voltage. A metal insulator metal (MIM) capacitor is being manufactured. Here, the metal layer 102 is deposited on an interlayer dielectric (ILD) (not shown). In various embodiments, the ILD is used to insulate the metal layers used for interconnections. In some embodiments, the ILD is silica. In other embodiments, the ILD is one of a variety of low-k dielectrics containing carbon or fluorine. Low-k dielectrics provide lower capacitance between metal layers, thereby reducing performance loss, power consumption, and crosstalk between interconnects. A chemical mechanical planarization (CMP) step is used to remove unwanted ILD and polish the remaining ILD. The CMP step achieves a nearly completely flat and smooth surface on which other layers are built. For example, the metal layer 102 is deposited next. In one embodiment, the metal layer 102 is copper. In another embodiment, the metal layer 102 is aluminum or a mixture of copper and aluminum. In some implementations, the metal layer 102 is formed by a dual damascene process. In other embodiments, the metal layer 102 is formed by a single damascene process. These and other techniques are possible and are contemplated.
In an embodiment using copper as the metal layer 102, a liner using a tantalum (Ta)-based barrier material is deposited on the dielectric before forming the metal layer 102. The liner prevents copper from diffusing into the dielectric and acts as an adhesion layer for the copper. Next, a thin copper seed layer is deposited by physical vapor deposition (PVD), and then copper is plated. Next, the excess copper of the metal layer 102 is chemically mechanically polished, and a capping layer, usually silicon nitride (SiN), is deposited. Then, an oxide layer 104 of controlled thickness is formed. In various embodiments, the oxide layer 104 is silicon dioxide. In various embodiments, a plasma enhanced chemical vapor deposition (PECVD) process is used to deposit a thin film of silicon dioxide from a gaseous (vapor) state to a solid state on the metal layer 102. The PECVD process introduces a reactive gas between a ground electrode and a parallel radio frequency (RF) excitation electrode. The capacitive coupling between the electrodes excites the reactive gas into a plasma, which causes a chemical reaction and causes the reaction products to be deposited on the metal layer 102. In some embodiments, the oxide layer 104 is deposited using a combination of a gas, such as dichlorosilane or silane, and an oxygen precursor, such as oxygen or nitrous oxide, at a pressure of several millitorr to several torr. The oxide layer 104 is relatively thick. For example, the thickness of the oxide layer 104 is at least an order of magnitude greater than the thickness of a thin gate silicon dioxide layer formed for an active device such as a transistor. In some implementations, no further polishing is performed on the oxide layer 104. In other embodiments, a chemical mechanical planarization (CMP) step is used to remove unwanted silicon dioxide and polish the remaining oxide layer 104. After the oxide layer 104 is deposited, a polysilicon layer 106 is deposited on the oxide layer 104. In some embodiments, low pressure chemical vapor deposition (LPCVD) is used to deposit the polysilicon layer 106. In other embodiments, the polysilicon layer 106 is deposited by thermally decomposing, or pyrolyzing, silane at a relatively high temperature. Next, a photoresist layer 108 is placed on the polysilicon layer 106 and patterned; the removed pattern initially defines the oscillating shape of the metal insulator metal (MIM) capacitor being manufactured. In some implementations, extreme ultraviolet lithography (EUV) technology is used to provide the resolution of each of the width 110 and the pitch 120. EUV technology uses extreme ultraviolet wavelengths to achieve resolutions below 40 nanometers. The extreme ultraviolet wavelength is approximately 13.5 nm. A relatively high temperature, high density plasma is used to provide the EUV beam. In other embodiments, directed self-assembly (DSA) lithography is used to provide the resolution of each of the width 110 and the pitch 120. DSA technology takes advantage of the self-assembling properties of materials to achieve nanoscale dimensions. In yet other embodiments, the resolution of each of the width 110 and the pitch 120 in the photoresist layer 108 is set by an immersion lithography technique. Immersion lithography uses a liquid medium, such as purified water, between the lens of the imaging equipment and the surface of the wafer; previously, the interstitial space was just air. The resolution achieved by this technique is the resolution of the imaging equipment increased by the refractive index of the liquid medium. In some examples, the resulting resolution falls above 80 nanometers.
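This resolution gain can be estimated with the standard Rayleigh criterion, a textbook first-order model rather than anything stated in the disclosure: minimum half-pitch R = k1 * wavelength / NA, where immersion effectively scales the numerical aperture by the refractive index of the liquid (about 1.44 for purified water at 193 nm). The k1 and NA values below are illustrative assumptions.

# Rayleigh-criterion estimate (a standard first-order model, not taken from
# the patent): minimum half-pitch R = k1 * wavelength / NA. Immersion
# lithography effectively multiplies NA by the refractive index of the
# liquid medium (~1.44 for purified water at 193 nm). k1 and NA below are
# illustrative assumptions.
def min_half_pitch_nm(wavelength_nm: float, k1: float, na: float) -> float:
    return k1 * wavelength_nm / na

dry = min_half_pitch_nm(wavelength_nm=193.0, k1=0.4, na=0.7)         # dry 193 nm tool
wet = min_half_pitch_nm(wavelength_nm=193.0, k1=0.4, na=0.7 * 1.44)  # same optics, water gap
print(f"dry: ~{dry:.0f} nm half-pitch, immersion: ~{wet:.0f} nm half-pitch")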
In other embodiments, a double patterning technique is used to provide the resolution of each of the width 110 and the pitch 120. Double patterning technology uses immersion lithography systems to define features with resolutions between 40 nm and 80 nm. Either self-aligned double patterning (SADP) technology or lithography-etch-lithography-etch (LELE) technology is used. Double patterning technology counteracts the diffraction effect in optical lithography, which occurs when the minimum size of the features on the wafer is smaller than the 193 nm wavelength of the light source. Other examples of techniques used to counteract diffraction effects in optical lithography are phase shift masks, optical proximity correction (OPC) techniques, optical equipment improvements, and computational lithography. When choosing among immersion lithography, double patterning, EUV, DSA, and other technologies, cost needs to be considered, because cost continues to increase from immersion lithography up to EUV. Over time, however, the costs of these technologies adjust, and additional, newer technologies are developed to provide relatively high resolution for the width 110 and the pitch 120. Therefore, one of various lithographic techniques is used to provide a relatively high resolution for the width 110 and the pitch 120. As will be described later, the relatively high resolution of the width 110 and the pitch 120 provides a high unit-area-density capacitance for the metal-insulator-metal (MIM) capacitor being manufactured. Referring to FIG. 2, a generalized block diagram of another cross-sectional view of a portion of a semiconductor passive component being manufactured is shown. The materials and layers previously described are numbered the same. As shown, a region of the polysilicon layer 106 is etched. The etched trench 210 is placed in an area of the polysilicon layer 106 that is not protected by the photoresist layer 108. In some embodiments, an anisotropic wet etch is performed to form trapezoidal or vertical walls for the trench 210. Potassium hydroxide (KOH) or tetramethylammonium hydroxide (TMAH) is used for this type of etching. The etch time and etch rate are monitored to determine the etch depth. In addition to the width and pitch of the trench, the depth of the trench determines the per-unit-area capacitance density of the MIM capacitor being manufactured. In still other embodiments, a dry etch process is used to provide the etched trenches 210. A reactive ion etching (RIE) process removes material by generating a plasma through an electromagnetic field at a relatively low pressure. The RIE process is a highly anisotropic etching process suitable for forming the trenches. The portions of the polysilicon layer 106 that are not protected by the photoresist layer 108 are immersed in a plasma of a reactive gas. Unprotected portions of the polysilicon layer 106 are removed by chemical reactions and/or ion bombardment, and the reaction products are carried away in a gas stream. By adjusting the parameters of the etching process, the plasma etching process can be operated in one of a number of modes. Some plasma etching processes operate at pressures from 0.1 Torr to 5 Torr.
In various embodiments, the source gas of the plasma contains chlorine or fluorine. For example, it is known that a mixture containing chlorine and bromine suppresses lateral etching and thus provides vertical walls for the trench 210. Examples of etching mixtures are tetrafluoromethane (CF4), trifluoromethane (CHF3), and octafluorocyclobutane (C4F8). As shown, the etched trench 210 has sharp corners. However, in other embodiments, the parameters of the plasma etching process are adjusted to provide rounded corners for the etched trench 210. Rounded corners help provide consistency in subsequent processing steps, in which metal and dielectric are deposited on the surface of the trench to build a MIM capacitor; these deposition steps are described shortly. In addition, sharp corners cause electric field concentration in the MIM capacitor produced later, so rounded corners help reduce this effect. Turning now to FIG. 3, there is shown a generalized block diagram of another cross-sectional view of a portion of a semiconductor passive component being manufactured. Here, the photoresist layer 108 is removed. An oxygen-containing plasma source gas is used to oxidize ("ash") the photoresist, which facilitates its removal. Referring to FIG. 4, a generalized block diagram of another cross-sectional view of a portion of a semiconductor passive component being manufactured is shown. As shown, the trench 410 has both rounded top corners and rounded bottom corners. As mentioned earlier, the rounded corners of the trench 410 help provide consistency in subsequent processing steps, in which metal and dielectric are deposited on the surface of the trench to build a MIM capacitor. In addition, sharp corners cause electric field concentration in the MIM capacitor produced later, so rounded corners help reduce this effect. In some embodiments, as previously described, the rounding of the trench corners has been formed or partially formed by adjusting parameters of the earlier plasma etching process on the polysilicon layer 106. In other embodiments, relatively high temperature oxidation is also used. For example, a relatively high temperature oxidation step followed by a dry etch to remove the oxide is repeated several times in order to round the corners. In some embodiments, atomic layer etching (ALE) technology, also called digital etching or single-layer etching, is used to round the top and bottom corners of the trench 410. Various other techniques for rounding the top and bottom corners of the trench are possible and are contemplated. Rounding the trench corners (both top and bottom) provides a sine-wave-like waveform in the polysilicon layer 106. In various embodiments, the waveform is not symmetrical. In some embodiments, the width of the top "half" of the wave is different from the width of the bottom "half" of the wave. Similarly, in one embodiment, the height of the top "half" of the wave is different from the height (depth) of the bottom "half" of the wave. In other embodiments, the angle of the left slope of the wave is different from the angle of the right slope of the wave. Although in some embodiments the waveform does not have an exact sinusoidal shape, or is sometimes not even symmetrical, as used herein a waveform with rounded corners is described as a sinusoidal shape or waveform and is also described as an oscillating pattern. The "frequency" of this sinusoidal shape is based on the width 110 and the pitch 120 described previously in FIG. 1.
As mentioned earlier, one of various lithographic techniques (such as immersion lithography, double patterning, EUV, DSA, and so on) is used to define the trench width 110 and the pitch 120. The sinusoidal waveform is used to form an oscillating structure that serves as a MIM capacitor having a relatively high density (capacitance per unit area). Turning now to FIGS. 5 to 6, there are shown block diagrams of cross-sectional views of a portion of a semiconductor passive component being manufactured. Specifically, metal-insulator-metal layers are deposited for a MIM capacitor. As shown in FIG. 5, a bottom metal is formed for the MIM capacitor. In some embodiments, the bottom metal 510 is tantalum nitride (TaN), while in other embodiments, the bottom metal 510 is titanium nitride (TiN). In various embodiments, the bottom metal 510 is placed by atomic layer deposition (ALD). In other embodiments, physical vapor deposition (PVD), such as a sputtering technique, is used. Next, as shown in FIG. 6, a relatively high-k oxide dielectric 610 is formed on the bottom metal 510. Examples of the oxide 610 are hafnium oxide (HfO2) and other rare earth metal oxides. In various embodiments, the dielectric 610 is placed using atomic layer deposition (ALD). The top metal 620 is deposited on the dielectric 610 using the same metal compounds and similar techniques as used for the bottom metal 510. The combination of the bottom metal 510, the dielectric 610, and the top metal 620 provides a metal-insulator-metal (MIM) capacitor. As shown, the sinusoidal structure used to provide the MIM capacitor is located at a significant distance 630 from the metal layer 102. In some embodiments, the distance 630 places the structure approximately midway between the metal layer 102 and a second metal layer that is later deposited on the vias used to form the nodes of the MIM capacitor. In such implementations, the MIM capacitor with an oscillating pattern is not significantly closer to the metal layer 102 than to the second metal layer; therefore, the significant distance 630 reduces the coupling capacitance to the metal layer 102, thereby reducing crosstalk between the MIM capacitor and signal lines on the same metal layer as the metal layer 102. In addition, reducing the coupling capacitance through the significant distance 630 helps reduce power consumption and improve performance.
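A rough parallel-plate estimate, offered here as an illustration rather than as the disclosure's analysis, shows why the distance 630 matters: the unwanted coupling capacitance to the metal layer 102 scales approximately as 1/distance, so doubling the separation roughly halves the coupling. The footprint area, separation values, and ILD permittivity below are hypothetical.

# A minimal parallel-plate estimate (an illustrative assumption, not the
# patent's analysis) of the coupling capacitance from the MIM structure to
# the metal layer below; it scales roughly as 1/distance.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def coupling_cap_fF(area_um2: float, distance_nm: float, k_ild: float = 3.9) -> float:
    """Parallel-plate C = k * eps0 * A / d, returned in femtofarads."""
    area_m2 = area_um2 * 1e-12
    d_m = distance_nm * 1e-9
    return k_ild * EPS0 * area_m2 / d_m * 1e15

# Hypothetical 100 um^2 footprint: moving from 200 nm to 400 nm above the
# metal layer halves the coupling capacitance (~17.3 fF -> ~8.6 fF).
print(coupling_cap_fF(100.0, 200.0), coupling_cap_fF(100.0, 400.0))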
Turning now to FIGS. 7 to 10, there are shown block diagrams of cross-sectional views of a MIM capacitor being manufactured. FIG. 7 illustrates the deposition of an additional polysilicon layer 710 on the top metal layer 620. In various embodiments, the deposition and CMP steps previously used for the polysilicon layer 106 are reused. Next, a barrier metal layer 720 is deposited on the polysilicon layer 710. In various embodiments, titanium nitride (TiN) is used. FIG. 8 illustrates a relatively thin, uniform coating of the photoresist layer 810 formed on the barrier metal 720. As mentioned previously, UV light is transmitted through a photomask, which contains a pattern for placing through holes. In these areas, the photoresist layer 810 is removed. Next, in the regions having the etched photoresist 810, each of the barrier metal 720, the polysilicon 710, the top metal 620, and the dielectric 610 is removed, as shown in FIG. 9. In FIG. 10, the photoresist layer 810 is removed. Each etching step from the photoresist layer 810 down to the dielectric layer 610 in these regions is performed by one of a variety of methods; for example, in some embodiments, at least one method described previously is used. Referring to FIGS. 11 to 12, there is shown a generalized block diagram of a cross-sectional view of a manufactured semiconductor MIM capacitor having an oscillating pattern. As shown in FIG. 11, an oxide layer 1110 is deposited over the barrier metal layer 720 and the exposed bottom metal 510. Examples of the oxide layer 1110 are TEOS, silicon dioxide, or one of various low-k dielectrics containing carbon or fluorine. In an embodiment where aluminum is used for the metal layer, each of the vias 1120 and 1122 is formed by etching a trench into the oxide layer 1110, filling the trench with aluminum, and performing a chemical mechanical planarization (CMP) step to polish the surface. Next, each of the metals 1130 and 1132 is formed on the through holes 1120 and 1122. In embodiments where copper is used for the metal layer, a damascene process is used. The trenches for the metal layers 1130 and 1132 are etched into the oxide layer 1110, photoresist is placed in the trenches, and the patterns for the through holes 1120 and 1122 are etched. The oxide layer 1110 is etched using these patterns so that spaces are formed for the through holes 1120 and 1122; the photoresist is then removed and the formed spaces are filled with copper. With the through hole 1120 in contact with the bottom metal layer 510 and the through hole 1122 in contact with the barrier metal layer 720, the MIM capacitor is formed with metal layers 1130 and 1132 that provide voltage nodes. Through the through hole 1122, the voltage node on the metal layer 1132 passes through each of the conductive barrier metal 720 and the conductive polysilicon layer 710 and contacts the top metal layer 620 of the MIM capacitor. As shown in FIG. 12, although the same layers 1110 to 1132 are formed as described above for FIG. 11, the oscillating pattern of the MIM capacitor continues below the through hole 1122 without being flattened. Depending on the width of the through hole 1122, a significant increase in the capacitance per unit area (density) of the MIM capacitor is achieved using the continuous oscillating pattern. Referring to FIG. 13, there is shown a generalized block diagram of another cross-sectional view of a manufactured semiconductor MIM capacitor having an oscillating pattern. Although the same layers 1110 to 1132 are formed as described above for FIG. 11, the polysilicon layer 710 and the barrier metal layer 720 are not present; in this embodiment, layers 710 and 720 are not formed. Instead, a photoresist layer is formed on the top metal layer 620 without forming the additional polysilicon layer 710. The UV light passes through a photomask containing a pattern for placing through holes. In these areas, the photoresist layer is removed, each of the top metal 620 and the dielectric 610 is etched, and then the oxide layer 1110 is deposited. The oxide layer 1110 is etched to form a space for the through hole 1120, which is in contact with the bottom metal layer 510. In this embodiment, the oscillating structure of the MIM capacitor is placed closer to the metal layer 102 than when the polysilicon layer 710 is used. Thus, in various embodiments, the distance 1330 is less than the distance 630 shown in the previous cross-sectional views. The oscillating structure is moved closer to the metal layer 102 to prevent the etching step for the oxide layer 1110 from additionally etching the top metal layer 620.
When the polysilicon layer 710 is used, as in the previous example, the etch rate and etch depth of the oxide layer 1110 are relatively easier to control; therefore, the oscillating structure can be placed farther away from the metal layer 102 and closer to the top surface of the oxide layer 1110. Turning now to FIG. 14, one embodiment of a method 1400 for manufacturing a semiconductor metal-insulator-metal (MIM) capacitor with an oscillating pattern is shown. For discussion purposes, the steps in this embodiment (and in FIG. 15) are shown in order. However, in other embodiments, some steps occur in a different order than shown, some steps are performed simultaneously, some steps are combined with other steps, and some steps are absent. If multiple polysilicon layers are to be used in the process ("Yes" branch of condition block 1402), an oxide layer is formed on top of the metal layer with a first thickness greater than a threshold (block 1404). In various embodiments, the oxide layer is silicon dioxide grown on top of copper. In some embodiments, a plasma enhanced chemical vapor deposition (PECVD) process is used to place the oxide layer on the copper. In other embodiments, the metal layer is a mixture of copper and aluminum. In various implementations, the threshold is selected based on how easily the layers above the bottom metal layer of the MIM capacitor can later be etched in a controlled manner. When a polysilicon layer is later present during the etching performed to form the vias, the oxide layer is grown with a first thickness greater than the threshold. Although aggressive etching is used at a later time to make space for the vias, the polysilicon layer prevents etching of the top metal layer of the MIM capacitor. Referring again to FIG. 6, this allows the oxide to be grown using a thickness of the distance 630, which is greater than the threshold. If multiple polysilicon layers are not used in the process ("No" branch of condition block 1402), an oxide layer is grown on top of the metal layer with a second thickness that is less than the threshold (block 1406). When no polysilicon layer is present during the etching performed to form the vias, the oxide layer is grown with a second thickness smaller than the threshold, because during the later aggressive etching used to form space for the vias, there is no polysilicon layer to prevent etching of the top metal layer of the MIM capacitor. Referring again to FIG. 13, this allows the oxide to be grown using a thickness of the distance 1330, where the distance 1330 is less than the previous distance 630 and is also less than the threshold. A polysilicon layer is deposited on top of the oxide layer (block 1408). A photoresist layer is then formed on top of the polysilicon layer (block 1410). The photoresist layer is etched (block 1412). Etching occurs at repeated intervals that determine the frequency of the oscillating wave that is formed later and used in the MIM capacitor. One of a number of lithographic techniques is used to reduce the pitch of the trenches (increase the frequency). For example, one of immersion lithography, double patterning, EUV, DSA, and other technologies is used to form the spaces in the photoresist layer. Trenches are etched into the areas of the polysilicon layer that are not protected by the photoresist layer (block 1414). Next, the photoresist layer is stripped (block 1416). The top and bottom corners of the trenches are rounded to form the basis of the oscillating wave (block 1418). In some embodiments, one or more cycles of relatively high temperature oxidation and oxide removal of the polysilicon layer are performed to round the corners. In addition, in some embodiments, atomic layer etching (ALE) technology is used. In yet other embodiments, as previously described, the rounding of the trench corners has been formed or partially formed by adjusting parameters of the earlier plasma etching process on the polysilicon layer. Combinations of these techniques are used in other embodiments. The bottom metal, dielectric, and top metal of the MIM capacitor are deposited on the polysilicon layer, both on areas with rounded trenches and on areas without trenches (block 1420).
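The branch structure of method 1400 can be summarized procedurally. The sketch below is a hypothetical encoding of the flow, not tooling associated with the disclosure; the function name, step strings, and the 300 nm threshold are illustrative only.

# An illustrative encoding (names and the 300 nm threshold are hypothetical)
# of the branch structure of method 1400: the pre-MIM oxide thickness is
# chosen based on whether a polysilicon layer will later protect the top
# metal during the via etch.
def method_1400(use_multiple_polysilicon_layers: bool,
                threshold_nm: float = 300.0) -> list[str]:
    steps = []
    if use_multiple_polysilicon_layers:           # "Yes" branch of block 1402
        steps.append(f"grow oxide thicker than {threshold_nm} nm (block 1404)")
    else:                                         # "No" branch of block 1402
        steps.append(f"grow oxide thinner than {threshold_nm} nm (block 1406)")
    steps += [
        "deposit polysilicon on oxide (block 1408)",
        "form photoresist on polysilicon (block 1410)",
        "etch photoresist at repeated intervals (block 1412)",
        "etch trenches into unprotected polysilicon (block 1414)",
        "strip photoresist (block 1416)",
        "round top and bottom trench corners (block 1418)",
        "deposit bottom metal, dielectric, top metal (block 1420)",
    ]
    return steps

for step in method_1400(use_multiple_polysilicon_layers=True):
    print(step)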
Referring to FIG. 15, one embodiment of a method 1500 for manufacturing a semiconductor metal-insulator-metal (MIM) capacitor with an oscillating pattern is shown. If multiple polysilicon layers are used in the process ("Yes" branch of condition block 1502), a polysilicon layer is deposited on the top metal layer of the oscillating wave (block 1504). Otherwise, a photoresist layer is formed on the top metal layer of the oscillating wave (block 1506), and the control flow of method 1500 then moves from block 1506 to block 1512, which is described shortly. Control flows from block 1504 to block 1508, where a barrier metal layer is deposited on the polysilicon layer. A photoresist layer is formed on the barrier metal layer (block 1510). Next, the photoresist layer is etched at the locations for placing the vias (block 1512). At those locations, each layer above the bottom metal layer of the MIM capacitor is etched (block 1514). When multiple polysilicon layers are used, the layers etched at the via locations are the barrier metal layer, the polysilicon layer, the top metal of the MIM capacitor, and the dielectric of the MIM capacitor. When multiple polysilicon layers are not used, the layers etched at the via locations are the top metal of the MIM capacitor and the dielectric of the MIM capacitor. Then, the photoresist layer is stripped (block 1516). The MIM capacitor is completed with metal nodes contacting each of the top metal and the bottom metal of the oscillating pattern (block 1518). When multiple polysilicon layers are used, an insulating oxide is deposited over the barrier metal; when a single polysilicon layer is used, the insulating oxide is deposited over the oscillating pattern. Trenches are etched into the insulating oxide to place the vias. One via is in contact with the bottom metal of the oscillating pattern, and the second via is in contact with the top metal of the oscillating pattern through the barrier metal and the polysilicon layer above the top metal. Note that one or more of the embodiments described above include software. In such embodiments, the program instructions implementing the methods and/or mechanisms are transmitted or stored on a computer-readable medium. Many types of media configured to store program instructions are available and include hard disks, floppy disks, CD-ROMs, DVDs, flash memory, programmable ROM (PROM), random access memory (RAM), and various other forms of volatile or non-volatile storage. Generally, a computer-accessible storage medium includes any storage medium that a computer can access during use to provide instructions and/or data to the computer.
For example, computer-accessible storage media include storage media such as magnetic or optical media, e.g., magnetic disks (fixed or removable), magnetic tapes, CD-ROM or DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-ray. Storage media further include volatile or non-volatile memory media such as RAM (e.g., synchronous dynamic RAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, low-power DDR (LPDDR2, etc.) SDRAM, Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, flash memory, and non-volatile memory (e.g., flash memory) accessible via a peripheral interface such as the universal serial bus (USB) interface. Storage media also include micro-electro-mechanical systems (MEMS) and storage media accessible via communication media such as networks and/or wireless links. Additionally, in various embodiments, the program instructions include a behavioral-level description or a register-transfer-level (RTL) description of the hardware functionality in a high-level programming language such as C, in a hardware design language (HDL) such as Verilog or VHDL, or in a database format such as the GDS II Stream Format (GDSII). In some cases, the description is read by a synthesis tool that synthesizes the description to generate a netlist including a list of gates from a synthesis library. The netlist includes a set of gates that also represents the functionality of the hardware comprising the system. The netlist is then placed and routed to produce a data set describing the geometric shapes to be applied to the masks. The masks are then used in various semiconductor manufacturing steps to produce one or more semiconductor circuits corresponding to the system. Alternatively, as desired, the instructions on the computer-accessible storage medium are the netlist (with or without the synthesis library) or the data set. In addition, the instructions may be used for the purpose of simulation by a hardware-based simulator from a vendor such as Mentor. Although the above embodiments have been described in considerable detail, various changes and modifications will become apparent to those skilled in the art once the above disclosure is fully understood. It is intended that the appended claims be construed to cover all such changes and modifications.
An integrated circuit including a fabricated die having a cyanate ester buffer coating material thereon. The cyanate ester buffer coating material includes one or more openings for access to the die. A package device may be connected to the die bond pads through such openings. Further, an integrated circuit device is provided that includes a fabricated wafer including a plurality of integrated circuits fabricated thereon. The fabricated wafer has an upper surface with a cyanate ester buffer coating material cured on the upper surface of the fabricated integrated circuit device. Further, a method of producing an integrated circuit device includes providing a fabricated wafer including a plurality of integrated circuits and applying a cyanate ester coating material on a surface of the fabricated wafer. The application of cyanate ester coating material may include spinning the cyanate ester coating material on the surface of the fabricated wafer to form a buffer coat.
What is claimed is: 1. A method of producing an integrated circuit device, comprising: providing a fabricated wafer including a plurality of integrated circuits; applying a cyanate ester resin on a surface of the fabricated wafer; and curing the cyanate ester resin. 2. The method according to claim 1, wherein applying the cyanate ester resin includes spinning the cyanate ester resin on the surface of the fabricated wafer. 3. The method according to claim 1, wherein the surface of the fabricated wafer is a substantially planar surface. 4. The method according to claim 1, wherein the surface of the fabricated wafer is a nonplanar surface. 5. The method according to claim 1, wherein the method further comprises defining one or more openings in the cured cyanate ester resin to provide access to the fabricated wafer. 6. The method according to claim 5, wherein the defining one or more openings includes photo masking the cured cyanate ester resin and etching the cured cyanate ester resin to expose die bond pads. 7. The method according to claim 6, wherein the method further comprises separating at least one integrated circuit die from the fabricated wafer and wherein the at least one integrated circuit die is bonded to a packaging device via the exposed die bond pads and encapsulant is applied to at least a portion of the cured cyanate ester resin. 8. A method of forming an integrated circuit device, the method comprising: providing a fabricated wafer including a plurality of integrated circuit dice, the fabricated wafer having an upper surface; applying a cyanate ester buffer coating material on the upper surface of the fabricated wafer, wherein the cyanate ester buffer coating material includes a photoactive compound and a cyanate ester resin; curing the cyanate ester buffer coating material; defining one or more openings in the cured cyanate ester buffer coating material to provide access to die bond pads of one or more integrated circuit dice of the fabricated wafer; separating at least one integrated circuit die from the fabricated wafer; electrically connecting a die package device to die bond pads of the at least one integrated circuit die; and encapsulating at least a portion of the cyanate ester buffer coating material with an encapsulant, the cyanate ester buffer coating material having a coefficient of thermal expansion selected to match the coefficient of thermal expansion of the encapsulant. 9. The method according to claim 8, wherein applying the cyanate ester buffer coating material includes spinning the cyanate ester buffer coating material on the surface of the fabricated wafer. 10. The method according to claim 8, wherein the surface of the fabricated wafer is a substantially planar surface. 11. The method according to claim 8, wherein the surface of the fabricated wafer is a nonplanar surface. 12. The method according to claim 8, wherein defining the one or more openings includes photo masking the cured cyanate ester buffer coating material and etching the cured cyanate ester buffer coating material to expose die bond pads.
This is a continuation of application Ser. No. 09/257,402, filed Feb. 25, 1999, issued as U.S. Pat. No. 6,060,343, which is a division of application Ser. No. 08/604,219, filed Feb. 20, 1996, issued as U.S. Pat. No. 5,93,046, which are incorporated herein by reference. FIELD OF THE INVENTION. The present invention relates to an integrated circuit device and a method of producing an integrated circuit device. More particularly, the present invention relates to an integrated circuit device having a cyanate ester buffer coat and a method of producing the integrated circuit device. BACKGROUND OF THE INVENTION. Both high density and lower density integrated circuits are fabricated on wafers utilizing numerous fabrication techniques, including, but not limited to, photolithography, masking, diffusion, ion implantation, etc. After the wafers are fabricated, with each wafer including a plurality of integrated circuit dies, a die coat is commonly used to protect the plurality of integrated circuit dies from damage during the remainder of the manufacturing process. It is commonly known to use polyimides as the buffer or die coat when fabricating such devices or wafers. Thermosetting resins, such as cyanate esters, have been used in various applications that span electronic, structural aerospace, and microwave-transparent composites, as well as encapsulants and adhesives. Cyanate esters are described in the paper "AroCy® Cyanate Ester Resins Chemistry, Properties and Applications," by D. A. Shimp, J. R. Christenson, and S. J. Ising (Second Edition-January 1990). Some examples of uses of cyanate esters include spinning cyanate ester onto a wafer for the purpose of making a durable base for building electrically conductive metal features, and also circuit board configurations. Polyimides utilized as a spin-on die coat are somewhat expensive. Many polyimides have a high dielectric constant and do not cure very quickly. Cyanate esters, on the other hand, have a lower dielectric constant than most polyimides and further cure more quickly than polyimides. In addition, polyimide buffer coats do not have extremely consistent photo-imageable characteristics. For example, when using photo-masking or photolithography techniques with polyimides, such techniques are not always highly successful or reliable. Therefore, in view of the above, there is a need for improved buffer coats for the fabrication process and improved integrated circuit devices resulting therefrom. SUMMARY OF THE INVENTION. In accordance with the present invention, an integrated circuit includes a fabricated die having a cyanate ester buffer coating material thereon. The cyanate ester buffer coating material has one or more openings for access to the die. In another embodiment of the integrated circuit, die bond pads are connected to a die package device through such openings. In accordance with another embodiment of the invention, the integrated circuit device includes a fabricated wafer including a plurality of integrated circuits fabricated thereon. The fabricated wafer includes an upper surface having a cyanate ester buffer coating material cured thereon. In a further embodiment, the cyanate ester coating material may be cured on a substantially planar or nonplanar surface of the fabricated die. Further, the upper surface of the fabricated wafer may be a substantially planar or nonplanar surface. In the method of the present invention, integrated circuit devices are produced by providing a fabricated wafer including a plurality of integrated circuits.
The cyanate ester coating material is applied and cured on a surface of the fabricated wafer. In further embodiments of the method, the cyanate ester coating material may be spun on the surface of the fabricated wafer to form a buffer coat, the surface of the fabricated wafer may be a substantially planar or nonplanar surface, and/or the buffer coat may be a photosensitive buffer coat. BRIEF DESCRIPTION OF THE DRAWINGS. FIG. 1 is a top view of an integrated circuit wafer, as singulated, in accordance with the present invention; FIG. 2 is a sectional view of a part of an illustrative integrated circuit in accordance with the present invention; FIG. 3 is a flow diagram showing a process in which a buffer coat is applied to the fabricated integrated circuit wafer of FIG. 1; FIG. 4 is a flow diagram showing a process of interconnection of an individual integrated circuit die having a buffer coat thereon to a packaging device; and FIGS. 5A and 5B (collectively herein "FIG. 5") are illustrations showing connection of an individual integrated circuit die of the wafer of FIG. 1 to a packaging substrate. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT. With reference to FIGS. 1 and 2, the integrated circuit device, or wafer 10, including the individual integrated circuit dice 20 shall be described. Integrated circuit device 10 of FIG. 1 is shown in a separated or singulated state wherein the individual integrated circuits or dice 20 are separated. The integrated circuit device or fabricated wafer 10 includes on its top surface, in an unseparated state (not shown), a buffer or die coat 30 as shown in FIG. 2. The buffer or die coat 30 is a cyanate ester coating material, such as the cyanate ester resins available from Ciba Polymers, a division of Ciba-Geigy, having a place of business in Hawthorne, N.Y. The cyanate ester buffer coat 30 is applied to the fabricated integrated circuit 21 to form the buffer coated integrated circuit device or wafer 10, which includes the plurality of buffer coated individual integrated circuits 20. The uncoated fabricated integrated circuit 21 of FIG. 2 is fabricated in accordance with typical integrated circuit fabrication techniques, as are well known to one skilled in the art. In FIG. 2, for illustration only, the integrated circuit 21 includes a silicon substrate 25, field oxide elements 26 for isolation between transistors, and polysilicon gates 28. Metalization 22 is fabricated over a doped oxide 27 which extends over the silicon substrate 25 and the other elements 26 and 28. Oxide 24 separates the metalization 22. As stated above, the detail with respect to the integrated circuit 21 is shown for illustration purposes only, and the present invention is not limited to such a configuration but only as described in the accompanying claims. The present invention of the integrated circuit device or wafer 10 and the individual circuits 20 including a cyanate ester buffer coat is applicable to any conceivable circuit configuration of a fabricated integrated circuit 21, as would be readily apparent to one skilled in the art. The metalization 22 includes bond pads 23 for connecting the individual integrated circuit dies 20 to a packaging device or substrate, as is well known to one skilled in the art. The cyanate ester buffer coating material 30 is applied to entirely cover the fabricated integrated circuits 20, including the bond pads 23. The cyanate ester buffer coating material forms the buffer coat 30 when dried or cured.
The integrated circuit 21, shown in FIG. 2, has a substantially planar surface to be coated with the cyanate ester coating material 30. However, the cyanate ester coating material may be applied to non-planar fabricated integrated circuits as well as planar surfaces. In non-planar integrated circuits or multi-layer circuits, the cyanate ester coating material will flow into the gaps or valleys between leads and the die. Because of the low dielectric characteristics of a cyanate ester coating material, capacitance between such leads will be reduced. The cured cyanate ester coating material 30, as shown in FIG. 2, is formed from cyanate ester resin, such as that available from Ciba Polymers, a division of Ciba-Geigy Corporation, under the trade designation of AroCy, such as the AroCy M resins. These cyanate ester resins are described in the publication "AroCy® Cyanate Ester Resins Chemistry, Properties, and Applications," by D. A. Shimp, J. R. Christenson, and S. J. Ising (Second Edition-January 1990), herein incorporated by reference thereto. Cyanate esters are a family of thermosetting resins. Examples of cyanate esters are disclosed in U.S. Pat. Nos. 4,330,658, 4,022,755, 4,740,584, 3,994,949, and 3,744,403, which are incorporated herein by reference. Preferably, suitable cyanate esters are those cured cyanate esters that have low dielectric loss characteristics, such as those having dielectric constants in the range of about 2.0 to about 3.0 at 1 MHz. Such suitable resins should have dimensional stability at molten solder temperatures, high purity, and excellent adhesion to conductor metals at temperatures up to about 250° C. Cured cyanate ester coating materials, such as those available from Ciba-Geigy, have dielectric constants in the preferred range. However, suitable cyanate ester coating materials with lower dielectric constants, and thus low dissipation factors, are contemplated in accordance with the present invention as described in the accompanying claims. In addition, a suitable cured cyanate ester coating material should be extremely durable and tough, having, for example, a free-standing tensile elongation of about 2.5 to about 25%. Further, the cured cyanate esters should have a low tensile modulus in the range of about 1 to about 5 GPa and a low H2O absorption characteristic in the range of about 0.5% to about 3.0%. The resins should also be processable under the conditions of standard semiconductor manufacturing processes and stable under the conditions of processing, such as spin coating, photolithography, development, and curing. Moreover, the cured cyanate esters should have a low coefficient of thermal expansion in the range of about 20 to about 70 ppm/°C. A particularly suitable group of cyanate ester resins are bisphenol derivatives containing a ring-forming cyanate functional group (i.e., -O-C≡N) in place of the two -OH groups on the bisphenol derivatives. Generally, this family of thermosetting dicyanate monomers and prepolymer resins are esters of bisphenol and cyanic acid which cyclotrimerize to substituted triazine rings upon heating. Conversion or curing to thermoset plastics forms three-dimensional networks of oxygen-linked triazine rings and bisphenol units. Such networks are termed polycyanurates.
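The suitability criteria listed above amount to a screening checklist. The following sketch, which is not part of the disclosure, encodes those published ranges as a simple check against a candidate resin's data-sheet values; the example property values are hypothetical.

# A small sketch (not part of the patent) encoding the suitability criteria
# quoted above as a screening check for candidate cured cyanate ester
# coating materials. The example property values are hypothetical.
from dataclasses import dataclass

@dataclass
class CuredResin:
    dielectric_constant_1mhz: float
    tensile_elongation_pct: float
    tensile_modulus_gpa: float
    h2o_absorption_pct: float
    cte_ppm_per_c: float

def meets_buffer_coat_criteria(r: CuredResin) -> bool:
    """Ranges taken directly from the description above."""
    return (2.0 <= r.dielectric_constant_1mhz <= 3.0
            and 2.5 <= r.tensile_elongation_pct <= 25.0
            and 1.0 <= r.tensile_modulus_gpa <= 5.0
            and 0.5 <= r.h2o_absorption_pct <= 3.0
            and 20.0 <= r.cte_ppm_per_c <= 70.0)

candidate = CuredResin(2.8, 8.0, 3.1, 1.4, 55.0)  # hypothetical data-sheet values
print(meets_buffer_coat_criteria(candidate))       # True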
A preferred dicyanate monomer can be represented by a structure in which two ring-substituted phenyl cyanate groups are joined through a bisphenol linkage (structural formula not reproduced here), wherein the bisphenol linkage (X) may be any of those commonly incorporated into cyanate esters, such as -O-, -CH2OCH2-, -S-, -C(O)-, -O-C(O)-O-, -SO2-, -S(O)-, as well as (C1-C25)alkyls, (C5-C18)cycloalkyls, (C5-C18)aryls, or -(R<1>)C(R<2>)- wherein R<1> and R<2> independently represent H, a (C1-C4)alkyl group, or a fluorinated (C1-C4)alkyl group. Preferably, X is -S-, a (C1-C4)alkyl group, a (C5-C18)cycloalkyl group (including fused ring systems), or -(R<1>)C(R<2>)- wherein R<1> and R<2> are independently a (C1-C4)alkyl group or a perfluoro (C1-C4)alkyl group. More preferably, X is -(CH3)C(CH3)-, -CH2-, -S-, or -(CH3)CH-, as listed in Table 2 of the publication "AroCy(R) Cyanate Ester Resins: Chemistry, Properties, and Applications" by D. A. Shimp, J. R. Christenson, and S. J. Ising (Second Edition, January 1990). The ring substituents (R), which may be the same or different, may be hydrogen, a (C1-C4)alkyl group, a (C1-C4)alkoxy group, Cl, Br, or any other substituent typically incorporated in cyanate esters. Preferably, R is H or a (C1-C4)alkyl. More preferably, R is H or CH3, wherein the CH3 groups are in the ortho position relative to the cyanate groups.

Cyanate ester coating materials are available as dicyanate monomers and also as partially cyclotrimerized dicyanate monomers or prepolymer resins from a number of sources. Cyanate ester prepolymer resins develop cured state properties which are substantially identical to those of the corresponding cured dicyanate monomers. Thus, the dicyanate monomers as well as the cyclotrimerized prepolymer resins are suitable for use in the present invention. Such materials are available from Ciba Polymers, a division of Ciba-Geigy Corporation; Dow Chemical Company; Mitsubishi Gas Chemical Company; and Hi-Tek Polymers.

The cured cyanate ester buffer coat 30, because of its durability, is particularly useful in protection of the integrated circuits 21 after fabrication. Such a cyanate ester buffer coat protects the integrated circuits 21 even after singulation of the wafer 10 during the manufacturing process. As further described below with reference to FIG. 4, the photoimageable characteristics of cyanate ester coating materials are better than those of other die coats such as polyimides, providing for more consistent photomasking and etching results. Cyanate ester coating materials also cure faster as compared to polyimide die coats, providing for a faster coating process and, as a result, an increase in output of integrated circuit devices.

The buffer coating process for the integrated circuit devices shall be described with reference to FIG. 3, and the process of connecting the individual integrated circuit or individual die to a package device shall be described with reference to FIGS. 4 and 5. As shown in FIG. 3, an uncoated fabricated integrated circuit device, such as a wafer, including circuits 21, is provided to initiate the process as represented by block 50. As indicated previously, any uncoated fabricated integrated circuit device, whether having an upper surface that is planar or nonplanar, can be coated with a cyanate ester coating material as represented in block 52. The cyanate ester coating material is spun onto the integrated circuit device or fabricated wafer as is well known to one skilled in the art. Any other application technique for covering the upper surface of the integrated circuits 21 may be substituted for the spinning technique.
Such alternate techniques of applying the cyanate ester coating material may include, for example, die dispense, extrusion, screen printing, and spray coat.

As is well known to one skilled in the art, when spinning on a coating material such as cyanate ester coating material, the coating material is applied on the wafer surface to be coated and the wafer is then spun such that the coating material is distributed over the wafer by centrifugal force. The final thickness of the layer of coating material on the wafer is based, at least in part, on the spin rate, the viscosity of the coating material, temperature, pressure, etc. The preferred thickness of the cyanate ester coating material applied on the wafer is in the range of about 1 micron to about 24 microns. More preferably, the thickness of the cyanate ester coating material is in the range of about 5 microns to about 15 microns.

The spinning process can be carried out in numerous different steps. For example, the coating material can be dispensed on the wafer while the wafer is standing still, and the wafer is then accelerated to a particular speed for distributing the material over a period of time. Any number of intermediate spinning steps could be utilized, such as going from standstill to an intermediate speed for a particular period of time and then further increasing the spinning speed. It will be readily apparent that a multitude of spinning parameters are contemplated in accordance with the present invention as described in the accompanying claims. The spinning process can be carried out with any number of different spin coating systems.

After the cyanate ester coating material is applied and processed as is known to one skilled in the art, the coating material is cured as represented in block 54. Curing is performed in a furnace or by some other heating unit. The cyanate ester coating material may be cured at a temperature in the range of about 250-290° C. The curing process may vary in temperature or duration, and the curing processes for the cyanate ester resins provided by the various manufacturers may differ greatly. The curing process may also take place in a number of different atmospheres, including air, nitrogen, or other forming gases. Such curing may also be done under pressure or with some sort of curing catalyst. Further, the cured cyanate ester buffer coat may be machined, ground, or milled, if desired, to a specific thickness, such as by chemical mechanical polishing or planarization (CMP).

The connection of buffer coated individual integrated circuits 20 of the wafer 10 to packaging devices, such as package substrates, can be accomplished in accordance with the procedure of FIG. 4. The buffer coated wafer 10 is provided from the process described with reference to FIG. 3 and as represented by block 60. The bond pads 23 of the various individual integrated circuits 20 may be opened to access the bond pads as represented in block 62. One or more openings in the buffer coat are made using photo-masking and etching as is known to one skilled in the art. A photo resist is applied to the wafer, and the desired pattern of the photo resist is polymerized by ultraviolet light through a photo mask. The unpolymerized areas are removed and an etchant is used to etch the buffer coat 30 to form the one or more openings. One opening in the buffer coat may provide access to one or more die bond pads. The photo resist remaining is then removed as is known to one skilled in the art.
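Returning briefly to the spin-coating step described above: a commonly used empirical scaling relation for spun-on polymer films (offered here only as an illustrative sketch, not a statement from the original text) is

\[ t \approx \frac{k}{\sqrt{\omega}} \]

where \(t\) is the final film thickness, \(\omega\) is the spin speed, and \(k\) is a constant that lumps together the coating material's viscosity, solids content, and ambient conditions. In practice, \(k\) would be calibrated experimentally for a given cyanate ester formulation so that the final thickness lands in the stated range of about 1 to about 24 microns.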
The one or more openings may be of any size or shape. Further, the one or more openings may provide access to the die for purposes other than connection to packaging devices; for example, such access to the die may be utilized for repair, test, etc.

Cyanate ester coating material may also be converted to a photosensitive buffer coat by the addition of photosensitive ingredients, e.g., a photoactive compound (PAC). This would reduce the number of process steps for opening the buffer coat to access the fabricated wafer under the buffer coat.

As represented by block 64, the individual integrated circuits 20 are separated or singulated by techniques as known to one skilled in the art, such as etching, sawing, or scribing with a stylus or laser, and then breaking the wafer apart into small squares or rectangles comprising the individual integrated circuits 20. Any of the individual circuits can be connected to a packaging device or substrate, which may include a lead frame or some other device for connecting the bond pads 23 of the integrated circuit device 21 to the packaging device or substrate. The connection is illustrated in FIG. 5 and is represented by block 66 of FIG. 4.

FIGS. 5A and 5B show two configurations of mounting die to a packaging substrate. Such configurations are described for illustrative purposes only, as the invention is limited only by the accompanying claims. Many other connection techniques are known to those skilled in the art and fall within the scope of the accompanying claims.

FIG. 5A illustrates a wire bonding connection. An individual integrated circuit 20 is attached to a substrate or base 34, such as a lead frame, via adhesive 36. The integrated circuit 20 includes the cyanate ester buffer coat 30 with openings in the buffer coat over the bond pads 23 created as per block 62. The bond pads 23 are then connected by leads 38 to metalization on substrate 34.

FIG. 5B illustrates a flip TAB face down connection. An individual integrated circuit 20 is attached to a substrate or base 34 via adhesive 36. The integrated circuit 20 includes the cyanate ester buffer coat 30 with openings in the buffer coat over the bond pads 23 created as per block 62. The bond pads are then connected by tape leads 70 to metalization on substrate 34. FIGS. 5A and 5B illustrate only two types of package interconnection; other types of interconnection methods, such as additional TAB bonding or flip bonding, may be used as alternatives.

The cyanate ester buffer coat 30 acts as a barrier to stress buildup in a package due to mismatched coefficients of thermal expansion. For example, with bond pads interconnected to the substrate using the flip TAB configuration of FIG. 5B with a filler (not shown) utilized next to the buffer coat 30, if the coefficients of thermal expansion between the filler and the buffer coat are not matched to within some predetermined limits, stress on the leads develops in the package containing the integrated circuit 20. The characteristics of the cyanate ester buffer coat 30, because of its compatible coefficients of thermal expansion relative to substrates, relieve substantial stress in such and similar configurations where mismatch of the coefficients would create such stress. Further, the device mounted on a lead frame may be packaged by an encapsulant (not shown) about the buffer coat 30 of FIG. 5A to form the package, such as a DIP package. If the coefficients of expansion between the encapsulant and the buffer coat are mismatched, stress may develop in the package.
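A first-order estimate of the thermally induced stress described above, offered only as an illustrative aside (the actual stress state depends on the package geometry), is the standard biaxial thermal-stress relation:

\[ \sigma \approx \frac{E \, \Delta\alpha \, \Delta T}{1 - \nu} \]

where \(E\) is the elastic modulus of the constrained layer, \(\Delta\alpha\) is the mismatch in coefficients of thermal expansion, \(\Delta T\) is the temperature excursion, and \(\nu\) is Poisson's ratio. For example, with \(E \approx 3\) GPa, \(\Delta\alpha \approx 10\) ppm/° C., \(\Delta T \approx 100\)° C., and \(\nu \approx 0.35\), the stress is on the order of 5 MPa, and it grows in proportion to the mismatch; this is why matching the buffer coat's coefficient of thermal expansion to that of the filler or encapsulant is beneficial.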
The characteristics of the cyanate ester buffer coat 30, because of its compatible coefficients of thermal expansion relative to encapsulants, such as Sumikon 6300, available from Sumitomo Bakelite of Japan, relieve substantial stress in such packages where mismatch of the coefficients would create such stress inside the package.

Although the invention has been described with particular reference to a preferred embodiment thereof, variations and modifications of the present invention can be made within the contemplated scope of the following claims, as is readily known to one skilled in the art.
Various die stacks and methods of creating the same are disclosed. In one aspect, a method of manufacturing is provided that includes mounting a first semiconductor die (40) on a second semiconductor die (35) of a first semiconductor wafer (185). The second semiconductor die is singulated from the first semiconductor wafer to yield a first die stack. The second semiconductor die of the first die stack is mounted on a third semiconductor die (30) of a second semiconductor wafer (205). The third semiconductor die is singulated from the second semiconductor wafer to yield a second die stack. The second die stack is mounted on a fourth semiconductor die (25) of a third semiconductor wafer (225).
CLAIMS

What is claimed is:

1. A method of manufacturing, comprising:
mounting a first semiconductor die (40) on a second semiconductor die (35) of a first semiconductor wafer (185);
singulating the second semiconductor die from the first semiconductor wafer to yield a first die stack;
mounting the second semiconductor die of the first die stack on a third semiconductor die (30) of a second semiconductor wafer (205);
singulating the third semiconductor die from the second semiconductor wafer to yield a second die stack; and
mounting the second die stack on a fourth semiconductor die (25) of a third semiconductor wafer (225).

2. The method of claim 1, comprising mounting a first dummy component (110) on the third semiconductor wafer adjacent a first side of the second die stack and a second dummy component (115) on the third semiconductor wafer adjacent a second side of the second die stack opposite to the first side.

3. The method of claim 1, comprising singulating the fourth semiconductor die from the third semiconductor wafer to yield a third die stack.

4. The method of claim 1, comprising mounting the first semiconductor wafer to a first carrier wafer (200) and revealing plural through-die-vias (155) of the second semiconductor die prior to mounting the first semiconductor die on the second semiconductor die.

5. The method of claim 4, comprising mounting the second semiconductor wafer to a second carrier wafer (220) and revealing plural through-die-vias (150) of the third semiconductor die prior to mounting the second semiconductor die on the third semiconductor die.

6. The method of claim 1, comprising fabricating plural interconnects (85) between the first semiconductor die and the second semiconductor die.

7. The method of claim 1, wherein the mounting the first semiconductor die to the second semiconductor die comprises forming an insulating bonding layer having a first glass layer (175) and a second glass layer (180) between and bonding the first semiconductor die and the second semiconductor die and annealing to bond the first glass layer to the second glass layer and to metallurgically bond conductor structures of the first semiconductor die and conductor structures of the second semiconductor die.

8. The method of claim 1, comprising molding a molding material (130) to at least partially encase the second die stack.

9. The method of claim 1, wherein the fourth semiconductor die has a first side facing the third semiconductor die and another side opposite the first side, comprising fabricating plural I/Os (140) on the another side.

10.
A method of manufacturing, comprising:
mounting a first semiconductor wafer (185) on a first carrier wafer (200);
revealing plural through-die-vias (155) of a first semiconductor die (35) of the first semiconductor wafer;
mounting a second semiconductor die (40) on the first semiconductor die after the revealing of the through-die-vias;
singulating the first semiconductor die from the first semiconductor wafer to yield a first die stack;
mounting a second semiconductor wafer (205) on a second carrier wafer (220);
revealing plural through-die-vias (150) of a third semiconductor die (30) of the second semiconductor wafer;
mounting the first semiconductor die of the first die stack on the third semiconductor die after the revealing of the through-die-vias of the third semiconductor die;
singulating the third semiconductor die from the second semiconductor wafer to yield a second die stack; and
mounting the second die stack on a fourth semiconductor die (25) of a third semiconductor wafer (225).

11. The method of claim 10, comprising mounting a first dummy component (110) on the third semiconductor wafer adjacent a first side of the second die stack and a second dummy component (115) on the third semiconductor wafer adjacent a second side of the second die stack opposite to the first side.

12. The method of claim 10, comprising singulating the fourth semiconductor die from the third semiconductor wafer to yield a third die stack.

13. The method of claim 10, comprising fabricating plural interconnects (75) between each of the dies of the second die stack.

14. The method of claim 10, wherein the mounting the second semiconductor die to the first semiconductor die comprises forming an insulating bonding layer having a first glass layer (175) and a second glass layer (180) between and bonding the first semiconductor die and the second semiconductor die and annealing to bond the first glass layer to the second glass layer.

15. The method of claim 10, comprising molding a molding material (130) to at least partially encase the second die stack.

16. The method of claim 10, wherein the first semiconductor die has a first side facing the second semiconductor die and another side opposite the first side, comprising fabricating plural I/Os (140) on the another side.

17. A semiconductor die device, comprising:
a first semiconductor die (20);
a stack (15) of plural semiconductor dies positioned on the first semiconductor die, each two adjacent semiconductor dies of the stack of plural semiconductor dies being electrically connected by plural interconnects;
a first dummy component (110) positioned opposite a first side of the stack of semiconductor dies and separated from the stack of plural semiconductor dies by a first gap and a second dummy component (115) positioned opposite a second side of the stack of plural semiconductor dies and separated from the stack of plural semiconductor dies by a second gap; and
a molding material (130) positioned in the first and second gaps and at least partially encasing the stack of plural semiconductor dies.

18. The semiconductor die device of claim 17, wherein each two adjacent semiconductor dies of the stack of plural semiconductor dies is physically connected by an insulating bonding layer, the insulating bonding layer including a first insulating layer and a second insulating layer bonded to the first insulating layer.

19. The semiconductor die device of claim 18, wherein the interconnects comprise bumpless interconnects.

20.
The semiconductor die device of claim 17, wherein the first semiconductor die has a first side facing a lowermost semiconductor die of the stack of plural semiconductor dies and another side opposite the first side, and plural I/Os (140) on the another side.
DIE STACKING FOR MULTI-TIER 3D INTEGRATION

BACKGROUND OF THE INVENTION

[0001] Many current integrated circuits are formed as multiple dies on a common wafer. After the basic process steps to form the circuits on the dies are complete, the individual dies are singulated from the wafer. The singulated dies are then usually mounted to structures, such as circuit boards, or packaged in some form of enclosure.

[0002] One frequently-used package consists of a substrate upon which a die is mounted. The upper surface of the substrate includes electrical interconnects. The die is manufactured with a plurality of bond pads. A collection of solder joints are provided between the bond pads of the die and the substrate interconnects to establish ohmic contact. After the die is mounted to the substrate, a lid is attached to the substrate to cover the die. Some conventional integrated circuits, such as microprocessors, generate sizeable quantities of heat that must be transferred away to avoid device shutdown or damage. The lid serves as both a protective cover and a heat transfer pathway.

[0003] Stacked die arrangements involve placing or stacking one or more semiconductor dies on a base semiconductor chip. In some conventional variants, the base semiconductor die is a high heat dissipating device, such as a microprocessor. The stacked dies are sometimes memory devices. In a typical conventional manufacturing process, the dies are stacked one at a time on the base die. Die-to-die electrical connections are by way of bumps and through-chip-vias.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The foregoing and other advantages of the invention will become apparent upon reading the following detailed description and upon reference to the drawings in which:
[0005] FIG. 1 is a sectional view of an exemplary arrangement of a semiconductor die device with die stacking;
[0006] FIG. 2 is a portion of FIG. 1 shown at greater magnification;
[0007] FIG. 3 is a sectional view of a portion of an exemplary semiconductor wafer;
[0008] FIG. 4 is a sectional view depicting exemplary mounting of the semiconductor wafer on a carrier wafer;
[0009] FIG. 5 is a sectional view depicting exemplary wafer thinning;
[0010] FIG. 6 is a sectional view depicting exemplary mounting of a semiconductor die on a semiconductor die of the semiconductor wafer;
[0011] FIG. 7 is a sectional view of a singulated die stack;
[0012] FIG. 8 is a sectional view of a portion of another exemplary semiconductor wafer;
[0013] FIG. 9 is a sectional view depicting exemplary mounting of the semiconductor wafer on another carrier wafer;
[0014] FIG. 10 is a sectional view depicting exemplary wafer thinning;
[0015] FIG. 11 is a sectional view depicting exemplary mounting of the die stack on a semiconductor die of the semiconductor wafer;
[0016] FIG. 12 is a sectional view of another singulated die stack;
[0017] FIG. 13 is a sectional view depicting exemplary mounting of the die stack on a semiconductor die of the semiconductor wafer;
[0018] FIG. 14 is a sectional view of a singulated die stack;
[0019] FIG. 15 is a sectional view of a portion of another exemplary semiconductor wafer;
[0020] FIG. 16 is a sectional view depicting exemplary mounting of the semiconductor wafer on another carrier wafer;
[0021] FIG. 17 is a sectional view depicting exemplary wafer thinning;
[0022] FIG. 18 is a sectional view depicting exemplary mounting of the die stack on a semiconductor die of the semiconductor wafer;
[0023] FIG.
19 is a sectional view depicting the mounted die stack on a die of the semiconductor wafer;
[0024] FIG. 20 is a sectional view depicting exemplary dummy component mounting;
[0025] FIG. 21 is a sectional view depicting exemplary molding material molding; and
[0026] FIG. 22 is a sectional view depicting exemplary I/O mounting.

DETAILED DESCRIPTION

[0027] A conventional die stacking technique stacks dies sequentially, one die on top of the first die and so on up to the top die of the stack. Where through-die-vias (TDVs) are used for die-to-die electrical connections, a reveal process is necessary to reveal the TDVs of one die before the next die is mounted. This is typically done in one conventional process by creating a reconstituted wafer of previously singulated dies and then performing the reveal process on the reconstituted wafer. Oftentimes a gap filling process is necessary to avoid adversely affecting the lower dies in the stack during reveal of the TDVs of the current topmost die in the stack. However, the techniques disclosed herein enable the creation of die stacks where TDV reveals can always be performed at the wafer level without the need to resort to reconstitution. Gap filling processes during stack creation are not necessary.

[0028] In accordance with one aspect of the present invention, a method of manufacturing is provided that includes mounting a first semiconductor die on a second semiconductor die of a first semiconductor wafer. The second semiconductor die is singulated from the first semiconductor wafer to yield a first die stack. The second semiconductor die of the first die stack is mounted on a third semiconductor die of a second semiconductor wafer. The third semiconductor die is singulated from the second semiconductor wafer to yield a second die stack.
The second die stack is mounted on a fourth semiconductor die of a third semiconductor wafer.

[0029] The method including mounting a first dummy component on the third semiconductor wafer adjacent a first side of the second die stack and a second dummy component on the third semiconductor wafer adjacent a second side of the second die stack opposite to the first side.

[0030] The method including singulating the fourth semiconductor die from the third semiconductor wafer to yield a third die stack.

[0031] The method including mounting the first semiconductor wafer to a first carrier wafer and revealing plural through-die-vias of the second semiconductor die prior to mounting the first semiconductor die on the second semiconductor die.

[0032] The method including mounting the second semiconductor wafer to a second carrier wafer and revealing plural through-die-vias of the third semiconductor die prior to mounting the second semiconductor die on the third semiconductor die.

[0033] The method including fabricating plural interconnects between the first semiconductor die and the second semiconductor die.

[0034] The method wherein the mounting the first semiconductor die to the second semiconductor die includes forming an insulating bonding layer having a first glass layer and a second glass layer between and bonding the first semiconductor die and the second semiconductor die and annealing to bond the first glass layer to the second glass layer and to metallurgically bond conductor structures of the first semiconductor die and conductor structures of the second semiconductor die.

[0035] The method including molding a molding material to at least partially encase the second die stack.

[0036] The method wherein the fourth semiconductor die has a first side facing the third semiconductor die and another side opposite the first side, and including fabricating plural I/Os on the another side.

[0037] In accordance with another aspect of the present invention, a method of manufacturing is provided that includes mounting a first semiconductor wafer on a first carrier wafer, revealing plural through-die-vias of a first semiconductor die of the first semiconductor wafer, mounting a second semiconductor die on the first semiconductor die after the revealing of the through-die-vias, singulating the first semiconductor die from the first semiconductor wafer to yield a first die stack, mounting a second semiconductor wafer on a second carrier wafer, revealing plural through-die-vias of a third semiconductor die of the second semiconductor wafer, mounting the first semiconductor die of the first die stack on the third semiconductor die after the revealing of the through-die-vias of the third semiconductor die, singulating the third semiconductor die from the second semiconductor wafer to yield a second die stack, and mounting the second die stack on a fourth semiconductor die of a third semiconductor wafer.

[0038] The method including mounting a first dummy component on the third semiconductor wafer adjacent a first side of the second die stack and a second dummy component on the third semiconductor wafer adjacent a second side of the second die stack opposite to the first side.

[0039] The method including singulating the fourth semiconductor die from the third semiconductor wafer to yield a third die stack.

[0040] The method including fabricating plural interconnects between each of the dies of the second die stack.

[0041] The method wherein the mounting the second semiconductor die to the first semiconductor die includes
forming an insulating bonding layer having a first glass layer and a second glass layer between and bonding the first semiconductor die and the second semiconductor die and annealing to bond the first glass layer to the second glass layer.

[0042] The method including molding a molding material to at least partially encase the second die stack.

[0043] The method wherein the first semiconductor die has a first side facing the second semiconductor die and another side opposite the first side, including fabricating plural I/Os on the another side.

[0044] In accordance with another aspect of the present invention, a semiconductor die device is provided that includes a first semiconductor die, a stack of plural semiconductor dies positioned on the first semiconductor die, where each two adjacent semiconductor dies of the stack of plural semiconductor dies are electrically connected by plural interconnects, a first dummy component positioned opposite a first side of the stack of semiconductor dies and separated from the stack of plural semiconductor dies by a first gap and a second dummy component positioned opposite a second side of the stack of plural semiconductor dies and separated from the stack of plural semiconductor dies by a second gap, and a molding material positioned in the first and second gaps and at least partially encasing the stack of plural semiconductor dies.

[0045] The semiconductor die device wherein each two adjacent semiconductor dies of the stack of plural semiconductor dies are physically connected by an insulating bonding layer, the insulating bonding layer including a first insulating layer and a second insulating layer bonded to the first insulating layer.

[0046] The semiconductor die device wherein the interconnects comprise bumpless interconnects.

[0047] The semiconductor die device wherein the first semiconductor die has a first side facing a lowermost semiconductor die of the stack of plural semiconductor dies and another side opposite the first side, and plural I/Os on the another side.

[0048] In the drawings described below, reference numerals are generally repeated where identical elements appear in more than one figure. Turning now to the drawings, and in particular to FIG. 1, which is a sectional view of an exemplary semiconductor die device 10 that includes a stack 15 of multiple semiconductor dies mounted on another semiconductor die 20. The semiconductor die device 10 can be mounted on a circuit board (not shown), such as a package substrate, a system board, a daughter board, a circuit card, or the like. The stack 15 in this illustrative arrangement consists of four semiconductor dies 25, 30, 35 and 40, but of course, other numbers are possible. The semiconductor dies 20, 25, 30, 35 and 40 include respective back end of line (BEOL) structures 45, 50, 55, 60 and 65. The BEOLs 45, 50, 55, 60 and 65 consist of strata of logic and other devices that make up the functionalities of the semiconductor dies 20, 25, 30, 35 and 40, as well as plural metallization and interlevel dielectric layers. The semiconductor dies 25, 30, 35 and 40 of the semiconductor die stack 15 can have different footprints or approximately the same footprint.
In the illustrated arrangement, the semiconductor dies 25, 30, 35 and 40 of the semiconductor die stack 15 can have successively smaller footprints, that is, the semiconductor die 40 is smaller than the semiconductor die 35, which in turn is smaller than the semiconductor die 30, and so on.

[0049] Electrical connections between the semiconductor die 25 and the semiconductor die 20 are by way of plural interconnects 70. The semiconductor die 30 is electrically connected to the semiconductor die 25 by way of plural interconnects 75. In addition, sets of interconnects 80 and 85 establish electrical conductivity between the semiconductor dies 35 and 30 and 40 and 35, respectively. Insulating layers 90, 95, 100 and 105 are positioned between the semiconductor die 25 and the semiconductor die 20, the semiconductor die 30 and the semiconductor die 25, the semiconductor die 35 and the semiconductor die 30, and the semiconductor die 40 and the semiconductor die 35, respectively. The insulating layers 90, 95, 100 and 105 can be unitary or multiple layer structures as described in more detail below. The interconnects 70, 75, 80 and 85 can be hybrid bonds, conductive pillars, solder bumps, solder micro bumps or other types of interconnects.

[0050] The semiconductor dies 20, 25, 30, 35 and 40 can be any of a variety of integrated circuits. A non-exhaustive list of examples includes processors, such as microprocessors, graphics processing units, accelerated processing units that combine aspects of both, memory devices, an application specific integrated circuit, or others. In one arrangement, the semiconductor die 20 can be a processor and the semiconductor dies 25, 30, 35 and 40 can be memory dies, such as DRAM, SRAM or other.

[0051] To facilitate heat transfer from the semiconductor die 20, dummy components 110 and 115 can be mounted on the semiconductor die 20 and secured thereto by way of adhesive layers 120 and 125, respectively. The dummy components 110 and 115 can be composed of silicon, germanium, or other types of semiconductor, or even a dielectric material, and serve as a heat transfer avenue for conducting heat away from the semiconductor die 20 and other components of the semiconductor die device 10. The adhesive layers 120 and 125 can be various types of organic adhesives, inorganic bonding layers, glass-based adhesives, or even solder materials in other arrangements. A non-exhaustive list includes epoxies and organic TIMs, such as silicone rubber mixed with aluminum particles and zinc oxide. Compliant base materials other than silicone rubber and thermally conductive particles other than aluminum may be used. Thermal greases and gold, platinum and silver represent a few examples. In other arrangements the adhesive layers 120 and 125 can be nanofoils composed of layers of aluminum and nickel.

[0052] A molding material 130 at least laterally encases the semiconductor die stack 15 and is positioned between the semiconductor die stack 15 and the dummy components 110 and 115. In an exemplary arrangement the materials for the molding material 130 can have a molding temperature of about 165 °C. Two commercial variants are Sumitomo EME-G750 and G760. Well-known compression molding techniques can be used to mold the molding material 130.

[0053] Through-die electrical conductivity is provided by plural through-die-vias (TDVs). For example, the semiconductor die 20 includes plural TDVs 135 that are connected to the interconnects 70 and to I/Os 140.
The TDVs 135 (and any related disclosed conductors, such as pillars and pads) can be composed of various conductor materials, such as copper, aluminum, silver, gold, platinum, palladium or others. Typically, each TDV 135 is surrounded laterally by a liner layer (not shown) of SiOx or other insulator and a barrier layer of TiN or other barrier materials. The semiconductor die 25 similarly includes TDVs 145 that are connected between the interconnects 70 and 75. The semiconductor die 30 includes TDVs 150 that connect between the interconnects 75 and 80, and the semiconductor die 35 includes TDVs 155 that connect between the interconnects 80 and 85. Finally, the semiconductor die 40 includes plural TDVs 160, which in this illustrative arrangement are not revealed, but of course could be revealed using the thinning/reveal processes disclosed herein to facilitate interconnection with yet another die stacked on top of the stack 15 if desired. The I/Os 140 enable the semiconductor die device 10 to interface electrically with another component such as a circuit board or other device, and can be solder bumps, balls or other types of interconnect structures. Well-known lead free solders, such as Sn-Ag, Sn-Ag-Cu or others, can be used for the I/Os 140 and other solder structures disclosed herein.

[0054] Additional details of an exemplary arrangement of the interconnects 75 and insulating layer 95 will be described now in conjunction with FIG. 2. Note that FIG. 2 is the portion of FIG. 1 circumscribed by the small dashed rectangle 165, shown at greater magnification. The following description will be illustrative of the other interconnects 70, 80 and 85 and the other insulating layers 90, 100 and 105 as well. As shown in FIG. 2, each of the interconnects 75 consists of a bumpless oxide hybrid bond. In this regard, the interconnect 75 between the semiconductor die 25 and the BEOL 55 of the semiconductor die 30 is made up of a metallurgical bond between a bond pad 170 of the BEOL 55 and a bond pad 172 of the semiconductor die 25. The bond pad 170 is connected to the TDV 150 and the bond pad 172 is connected to the TDV 145. In addition, the insulating structure 95 joins the semiconductor die 25 to the semiconductor die 30 and consists of a glass layer 175, such as SiOx, of the semiconductor die 30 and another glass layer 180, such as silicon oxynitride, of the semiconductor die 25. The glass layers 175 and 180 are preferably deposited on the semiconductor dies 30 and 25, respectively, by plasma enhanced chemical vapor deposition (PECVD). The bond pad 170 is positioned in the glass layer 175 and the bond pad 172 is positioned in the glass layer 180. The bond pad 170 and the bond pad 172 are metallurgically bonded by way of an anneal process. In this regard, the semiconductor die 30 is brought down or otherwise positioned on the semiconductor die 25 so that the glass layer 175 is on or in very close proximity to the glass layer 180 and the bond pad 170 is on or in very close proximity to the bond pad 172. Thereafter, an anneal process is performed, which produces a transitory thermal expansion of the bond pads 170 and 172, bringing those structures into physical contact and causing them to form a metallurgical bond that persists even after the semiconductor dies 25 and 30 are cooled and the bond pads 170 and 172 contract thermally. Copper performs well in this metal bonding process, but other conductors could be used. There is also formed an oxide/oxynitride bond between the glass layer 175 and the glass layer 180.
An exemplary anneal is performed at about 300° C for about 30 to 60 minutes to form the requisite oxynitride-oxide bonds and metal-metal bonds. In another alternative, conductive pillars on each of two adjacent stacked dies can be thermal compression bonded. In another alternative arrangement, a direct oxide bond with a TSV-last connection can be used. In this technique, facing sides of each two adjacent stacked dies each receive an oxide film. The oxide films are subsequently planarized using chemical mechanical polishing and then plasma treated to become hydrophilic. The oxide surfaces are next placed together and annealed to form a bond.

[0055] An exemplary process flow for fabricating the semiconductor die device 10 depicted in FIG. 1 will now be described in conjunction with FIGS. 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 and 22. Attention is initially turned to FIG. 3, which is a sectional view of a portion of a semiconductor wafer 185. The semiconductor wafer 185 can include scores or hundreds of individual semiconductor dies in addition to the semiconductor die 35. Here, the semiconductor die 35 is demarcated by dicing streets 190 and 195 where eventual singulation from the semiconductor wafer 185 will occur. Of course there are additional dicing streets that are not associated with the semiconductor die 35, which are not visible in FIG. 3. The wafer 185 has been processed to the point where the BEOL 60 of the semiconductor die 35 is complete along with the TDVs 155. However, the wafer 185 has yet to undergo a thinning process to reveal the TDVs 155.

[0056] Next and as shown in FIG. 4, the wafer 185 is flipped over from the orientation depicted in FIG. 3 and mounted on, and with the BEOL 60 facing towards, a carrier wafer 200. The carrier wafer 200 can be composed of silicon, various glasses, or other types of semiconductor materials. The wafer 185 can be secured to the carrier wafer 200 by way of an adhesive 202 applied to the carrier wafer 200. The adhesive 202 is preferably a well-known reversible adhesive, such as a light activated or thermally activated adhesive, that can be reversed so that later the carrier wafer 200 can be removed. Optionally, bonding agents that require chemical and/or mechanical removal techniques could be used.

[0057] Next and as shown in FIG. 5, the wafer 185 undergoes a thinning process to reveal the TDVs 155. Various thinning/reveal processes can be used. In one arrangement, the reveal process is preferably a soft reveal wherein the wafer 185 and the semiconductor die 35 are subjected to a grinding process to just above the tops of the through-die-vias 155, followed by an etch back to reveal the tops of the through-die-vias 155. Next, a deposition process is used to establish a thin glass layer (not visible, but like the glass layer 180 depicted in FIG. 2 and described above). The thin glass layer is preferably deposited using PECVD and then subjected to CMP. The carrier wafer 200 facilitates these various grinding, etching, deposition and CMP processes. In one so-called "hard reveal" technique, a grinding process is used to expose the TDVs 155, followed by an etch back of a small amount of the wafer 185 (silicon or otherwise), followed by a thin oxide growth or deposition or a thin silicon nitride deposition by CVD, and again followed by a chemical mechanical planarization in order to finalize the through-die-via reveal.
The first technique avoids exposing the substrate semiconductor wafer 185 to loose copper or other metal particles that can be liberated during a hard reveal.

[0058] Next and as shown in FIG. 6, the semiconductor die 40 is mounted on the semiconductor die 35 of the wafer 185. The semiconductor die 40 is a singulated device that was formerly part of another semiconductor wafer (not shown) that was processed to establish the BEOL 65 of the semiconductor die 40 as well as the unexposed TDVs 160 thereof. The interconnects 85 and the insulating layer 105 are fabricated at this point using the techniques disclosed elsewhere herein in conjunction with FIG. 2 and for the interconnects 75 and the insulating layer 95. Of course, if the aforementioned bumpless hybrid bond process described in conjunction with FIG. 2 is used, then the mounting process will be preceded by application of a glass layer (not visible, but like the glass layer 175 depicted in FIG. 2 and described elsewhere herein) on the semiconductor die 40 (or the wafer of which it was formerly a part). Optionally, if the interconnects 85 are solder bumps, solder micro bumps or other types of interconnects, then an appropriate mounting and reflow process will be performed at this stage to establish the interconnects 85.

[0059] Next and as shown in FIG. 7, the semiconductor die 35 is singulated from the wafer 185, following removal of the carrier wafer 200 shown in FIG. 6, to yield the combination of the semiconductor dies 35 and 40. The removal process for the carrier wafer 200 will depend on the type of the adhesive 202. Examples include thermal release, chemical release, mechanical peel off or laser induced removal. This combination of semiconductor dies 35 and 40 is now a stackable element that will be placed on the semiconductor die 30 as described more fully below.

[0060] The fabrication of the semiconductor die 30 will now be described in conjunction with FIGS. 8, 9 and 10. Another semiconductor wafer 205 includes multitudes of semiconductor dies, including the semiconductor die 30, which has been processed using well known techniques to establish the BEOL 55 and the TDVs 150 thereof. Like the semiconductor wafer 185, the wafer 205 has not undergone a thinning process at this point to reveal the TDVs 150. The semiconductor die 30 is demarcated by dicing streets 210 and 215 and at least two others that are not visible in FIG. 8. Next and as shown in FIG. 9, the semiconductor wafer 205 is flipped over from the orientation shown in FIG. 8 and mounted on another carrier wafer 220 such that the BEOL 55 is facing towards the carrier wafer 220. The carrier wafer 220 can be composed of silicon, various glasses, or other types of semiconductor materials. The semiconductor wafer 205 can be secured to the carrier wafer 220 by way of an adhesive applied to the carrier wafer 220. The adhesive can be like the adhesive 202 described above, and is not shown for simplicity of illustration. Next and as shown in FIG. 10, the semiconductor wafer 205 undergoes a thinning process to reveal the TDVs 150 of the semiconductor die 30. The reveal can be by way of the thinning/reveal processes disclosed above in conjunction with FIG. 5. The wafer 205 is now ready to have the combination of the semiconductor dies 35 and 40 mounted onto the semiconductor die 30 thereof. Next and as shown in FIG. 11, the combination of the semiconductor dies 35 and 40 is mounted on the semiconductor die 30 of the wafer 205.
The mounting process can be like the mounting process described above in conjunction with mounting the semiconductor die 40 on the semiconductor die 35. In this regard, the interconnects 80 and the insulating layer 100 are established at this point using the techniques described above in conjunction with the interconnects 75 and the insulating structure 95 depicted in FIGS. 1 and 2. The carrier wafer 220 is removed using a process suitable for the adhesive (not visible) that bonded it to the semiconductor wafer 205, such as the types disclosed elsewhere herein. The semiconductor die 30 is then singulated from the wafer 205 to yield the singulated combination of the semiconductor dies 30, 35 and 40 as shown in FIG. 12. Singulation can be by mechanical sawing, laser cutting or other techniques.

[0061] Next and as shown in FIG. 13, the combination of the semiconductor dies 30, 35 and 40 is mounted on the semiconductor die 25, which at this stage is still part of a semiconductor wafer 225 that has been processed, like the semiconductor wafers 185 and 205 described above, on a carrier wafer 230, such that the wafer 225 has undergone a thinning process to reveal the TDVs 145 of the semiconductor die 25 and the BEOL 50 is facing towards the carrier wafer 230. The mounting process for the semiconductor dies 30, 35 and 40 to the semiconductor die 25 is like the process to mount the combination of the semiconductor dies 35 and 40 to the semiconductor die 30 just described. Following the mounting process, the carrier wafer 230 is removed and the semiconductor die 25 is singulated from the semiconductor wafer 225 to yield the completed semiconductor die stack 15 shown in FIG. 14. The semiconductor die stack 15, consisting of the semiconductor dies 25, 30, 35 and 40, is now ready to be mounted on the semiconductor die 20 shown in FIG. 1.

[0062] Referring now to FIG. 15, the semiconductor die 20 is initially part of a semiconductor wafer 235 and is demarcated by dicing streets 240 and 245 as well as two other such streets (not visible). The wafer 235 has been processed such that the BEOL 45 of the semiconductor die 20 and the TDVs 135 have been fabricated. However, the wafer 235 is yet to undergo a thinning process to reveal the TDVs 135. Next and as shown in FIG. 16, the semiconductor wafer 235 is flipped over from the orientation in FIG. 15 and mounted on a carrier wafer 250 with the BEOL 45 facing towards the carrier wafer 250. The semiconductor wafer 235 can be secured to the carrier wafer 250 using an adhesive applied to the carrier wafer 250. The adhesive can be like the adhesive 202 described above, and is not shown for simplicity of illustration. Next and as shown in FIG. 17, with the carrier wafer 250 in place, the wafer 235 undergoes a thinning process to reveal the TDVs 135 of the semiconductor die 20. The reveal can be by way of the thinning/reveal processes disclosed above in conjunction with FIG. 5. Next and as shown in FIG. 18, the semiconductor die stack 15 is mounted on the semiconductor die 20 of the wafer 235. This mounting process establishes the interconnects 70 and the insulating structure 90 and can be by way of the aforementioned hybrid bonding process or another process if the interconnects 70 are not hybrid bonds. The mounted stack 15 is depicted on the semiconductor die 20 in FIG. 19. Next and as shown in FIG. 20, with the carrier wafer 250 still in place, the dummy components 110 and 115 are mounted on the semiconductor wafer 235 on either side of the semiconductor die stack 15.
The dummy components 110 and 115 could be preformed to be mounted and dedicated specifically to the semiconductor die stack 15. However, efficiencies can be achieved if the dummy components 110 and 115 are large enough to be sub-divided into dummy components that are set aside for the semiconductor die stack 15 and other dummy components (not visible) that will be used by adjacent semiconductor die stacks (not visible) on the semiconductor wafer 235. Indeed, note that during a subsequent singulation process, the dicing streets 240 and 245 will demarcate the post-singulation lateral edges of the dummy components 110 and 115.

[0063] Next and as shown in FIG. 21, with the carrier wafer 250 in place, the semiconductor wafer 235 undergoes a molding process to establish the molding material 130. This molding process can establish the molding material 130 with an upper surface that is planar with the dummy components 110 and 115. Optionally, the molding material 130 can cover the dummy components 110 and 115 and even the topmost semiconductor die 40 of the semiconductor die stack 15, and then a subsequent grinding process can be used to planarize the molding material 130 and the dummy components 110 and 115. Next and as shown in FIG. 22, the I/O structures 140 are fabricated or otherwise attached to the semiconductor die 20. This can entail a pick and place and reflow, a solder stencil, or another process to establish the I/O structures 140. Prior to attaching or otherwise fabricating the I/O structures 140, the carrier wafer 250 depicted in FIG. 21 is removed using the carrier wafer removal techniques disclosed elsewhere herein. The semiconductor die 20 is next singulated from the semiconductor wafer 235, using the techniques disclosed elsewhere herein, at the dicing streets 240 and 245 to yield the completed semiconductor die device 10 shown in FIG. 1.

[0064] While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the invention as defined by the following appended claims.
Techniques are disclosed for incorporating high mobility strained channels into fin-based transistors (e.g., FinFETs such as double-gate, trigate, etc.), wherein a stress material is cladded onto the channel area of the fin. In one example embodiment, silicon germanium (SiGe) is cladded onto silicon fins to provide a desired stress, although other fin and cladding materials can be used. The techniques are compatible with typical process flows, and the cladding deposition can occur at a plurality of locations within the process flow. In some cases, the built-in stress from the cladding layer may be enhanced with a source/drain stressor that compresses both the fin and cladding layers in the channel. In some cases, an optional capping layer can be provided to improve the gate dielectric / semiconductor interface. In one such embodiment, silicon is provided over a SiGe cladding layer to improve the gate dielectric / semiconductor interface.
CLAIMS

1. A semiconductor device, comprising: a fin on a substrate, the fin comprising a semiconductor material and having a channel region and corresponding source/drain regions adjacent thereto; a cladding layer of germanium or silicon germanium (SiGe) on one or more surfaces of the channel region of the fin; a gate dielectric layer over the cladding layer; a gate electrode on the gate dielectric layer; and source/drain material in each of the source/drain regions.

2. The semiconductor device of claim 1 further comprising a capping layer between the cladding layer and the gate dielectric layer.

3. The semiconductor device of claim 2 wherein the capping layer comprises silicon.

4. The semiconductor device of claim 1 wherein the source/drain material is SiGe.

5. The semiconductor device of claim 1 wherein the fin is silicon or SiGe.

6. The semiconductor device of claim 1 wherein at least one of the cladding layer and the fin comprises 10% to 90% germanium.

7. The semiconductor device of claim 1 wherein the substrate comprises a first material and the fin comprises a second material different from the first material.

8. The semiconductor device of claim 1 wherein the substrate comprises a silicon layer and the fin is SiGe and the cladding layer is germanium.

9. The semiconductor device of claim 1 wherein the cladding layer covers side portions and a top portion of the fin.

10. A mobile computing device comprising the semiconductor device of any of claims 1 through 9.

11. A semiconductor device, comprising: a fin on a substrate, the fin comprising a semiconductor material and having a channel region and corresponding source/drain regions adjacent thereto, wherein the fin is silicon or silicon germanium (SiGe); a cladding layer of germanium or SiGe on one or more surfaces of the channel region of the fin; a capping layer on the cladding layer, wherein the capping layer comprises silicon; a gate dielectric layer on the capping layer; a gate electrode on the gate dielectric layer; and source/drain material in each of the source/drain regions, wherein the source/drain material is SiGe.

12. The semiconductor device of claim 11 wherein at least one of the cladding layer and the fin comprises 10% to 90% germanium.

13. The semiconductor device of claim 11 wherein the substrate comprises a first material and the fin comprises a second material different from the first material.

14. The semiconductor device of claim 11 wherein the substrate comprises a silicon layer and the fin is SiGe and the cladding layer is germanium.

15. The semiconductor device of claim 11 wherein the fin is silicon and the cladding layer is SiGe.

16. The semiconductor device of claim 11 wherein the cladding layer covers side portions and a top portion of the fin so as to provide a tri-gate transistor.

17. A communication device comprising the semiconductor device of any of claims 11 through 16.

18.
A mobile computing system, comprising: a printed circuit board; a processor operatively coupled to the printed circuit board; a memory operatively coupled to the printed circuit board and in communication with the processor; and a wireless communication chip operatively coupled to the printed circuit board and in communication with the processor; wherein at least one of the processor, wireless communication chip, and/or the memory comprises a semiconductor device including: a fin on a substrate, the fin comprising a semiconductor material and having a channel region and corresponding source/drain regions adjacent thereto; a cladding layer of germanium or silicon germanium (SiGe) on one or more surfaces of the channel region of the fin; a gate dielectric layer over the cladding layer; a gate electrode on the gate dielectric layer; and source/drain material in each of the source/drain regions.

19. The system of claim 18 wherein the semiconductor device further includes a capping layer between the cladding layer and the gate dielectric layer, and the capping layer comprises silicon.

20. The system of claim 18 wherein the fin is silicon, the cladding layer is SiGe, and the source/drain material is SiGe.

21. The system of claim 20 wherein the cladding layer SiGe is different from the fin SiGe.

22. The system of claim 18 wherein the substrate comprises a first material and the fin comprises a second material different from the first material.

23. The system of claim 18 wherein the substrate comprises a silicon layer and the fin is SiGe and the cladding layer is germanium.

24. The system of claim 18 wherein the cladding layer covers side portions and a top portion of the fin.
HIGH MOBILITY STRAINED CHANNELS FOR FIN-BASED TRANSISTORS

BACKGROUND

A FinFET is a transistor built around a thin strip of semiconductor material (generally referred to as the fin). The transistor includes the standard field effect transistor (FET) nodes, including a gate, a gate dielectric, a source region, and a drain region. The conductive channel of the device resides on the outer sides of the fin beneath the gate dielectric. Specifically, current runs along/within both sidewalls of the fin (sides perpendicular to the substrate surface) as well as along the top of the fin (side parallel to the substrate surface). Because the conductive channel of such configurations essentially resides along the three different outer, planar regions of the fin, such a FinFET design is sometimes referred to as a trigate FinFET. Other types of FinFET configurations are also available, such as so-called double-gate FinFETs, in which the conductive channel principally resides only along the two sidewalls of the fin (and not along the top of the fin). There are a number of non-trivial issues associated with fabricating such fin-based transistors.

BRIEF DESCRIPTION OF THE DRAWINGS

Figures 1 through 7 and 9 through 12 illustrate a method for forming a fin-based transistor structure, in accordance with an embodiment of the present invention. Figures 8a-8d illustrate a portion of the method shown in Figures 1 through 7 and 9 through 12, in accordance with another embodiment of the present invention. Figures 13a-13b illustrate a portion of the method shown in Figures 1 through 7 and 9 through 12, in accordance with another embodiment of the present invention. Figures 14a-14b each illustrates a resulting fin-based transistor structure, in accordance with other embodiments of the present invention. Figure 15 illustrates a computing system implemented with one or more integrated circuit structures configured in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION

Techniques are disclosed for incorporating high mobility strained channels into fin-based transistors (e.g., FinFETs such as double-gate, trigate, etc.), wherein a stress material is cladded onto the channel area of the fin. In one example embodiment, silicon germanium (SiGe) is cladded onto silicon fins to provide a desired stress, although other fin and cladding materials can be used. The techniques are compatible with typical process flows, and the cladding deposition can occur at a plurality of locations within the process flow. In some cases, the built-in stress from the cladding layer may be enhanced with a source/drain stressor that compresses both the fin and cladding layers in the channel. In some cases, an optional capping layer can be provided to improve the gate dielectric / semiconductor interface. In one such embodiment, silicon is provided over a SiGe cladding layer to improve the gate dielectric / semiconductor interface. Numerous variations and embodiments will be apparent in light of this disclosure.

General Overview

As previously stated, there are a number of non-trivial issues associated with fabricating FinFETs. For instance, high mobility PMOS channels have been engineered using source/drain SiGe stressors for many generations now. However, the source/drain SiGe stressors are dependent on pitch, so for smaller gate pitches the stress decreases for the same germanium concentration in the source/drain stressors.
Such a reduction in stress effectively limits the ability to further improve channel mobility and further limits continued scaling to smaller pitches. Thus, and in accordance with an embodiment of the present invention, stress is built into a silicon channel by depositing a SiGe cladding layer thereon. The SiGe cladding process can occur at various times in the flow, including after trench etch during fin formation, after shallow trench isolation (STI) material recess to expose the fins, and after removal of the sacrificial gate stack (assuming a replacement metal gate flow). In this sense, the cladding deposition process and the overall process flow are highly compatible. Both selective and non-selective process routes can be used in forming the cladding layer. In some embodiments, the built-in stress from a deposited SiGe cladding layer on a silicon fin can be enhanced with a SiGe source/drain stressor that compresses both the silicon fin and SiGe cladding layers in the channel area. In some such embodiments, the SiGe cladding layer can have a germanium concentration ranging from, for example, 10-70%. In some such embodiments, an optional cap of, for instance, either selective or non-selective silicon can be provided over the SiGe cladding layer to improve the interface between the semiconductor channel and the gate dielectric layer (which may be, for instance, a high-k dielectric). Once the fins are formed and the SiGe cladding layer has been provided in the channel area (which may occur at one or more times during the process), a FinFET transistor process flow can be executed to fabricate, for instance, high-k metal gate transistors. Any number of transistor types and/or formation process flows may benefit from the channel strain techniques provided herein, such as n-channel metal oxide semiconductor (NMOS) transistors, p-channel MOS (PMOS) transistors, or both PMOS and NMOS transistors within the same flow, whether configured with thin or thick gates, and with any number of geometries. As will be appreciated, compressively strained SiGe is particularly attractive for PMOS devices, whether alone or in conjunction with NMOS devices such as silicon NMOS devices. For instance, the techniques provided herein can be used in fabricating SiGe PMOS fins and silicon NMOS fins together. Likewise, numerous material systems can benefit from the techniques described herein, as will be apparent in light of this disclosure, and the claimed invention is not intended to be limited to any particular one or set. Rather, the techniques can be employed wherever built-in channel strain is helpful. The techniques can be embodied, for example, in any number of integrated circuits, such as memories and processors and other such devices that are fabricated with transistors and other active junction semiconductor devices, as well as in methodologies suitable for practice at fabs where integrated circuits are made. Use of the techniques described herein manifests in a structural way. For instance, a cross-section image of transistors formed in accordance with an embodiment, such as an image provided with a transmission electron microscope (TEM), demonstrates a cladding layer on the channel portion of the fin, as compared to conventional fin-based transistors. Variations on incorporating high mobility strained SiGe channels onto silicon fins will be apparent in light of this disclosure.
For instance, another embodiment may incorporate high mobility strained germanium channels onto silicon fins, and another embodiment may incorporate high mobility strained germanium channels onto SiGe fins. Further note that the fins may be native to the substrate (and therefore the same material as the substrate) or may be formed on the substrate. One such example embodiment incorporates high mobility strained germanium channels onto SiGe fins formed on a silicon substrate. In further embodiments, note that the cladding may be on the top and two sides of the fin (tri-gate FinFET) or only on the two sides of the fin (double-gate FinFET). Fin Structure Figures 1 through 7 and 9 through 12 illustrate a method for forming a fin-based transistor structure in accordance with an embodiment of the present invention. As will be appreciated, each of the views shown in Figures 1 through 7 is a cross-sectional side view taken across the channel region and perpendicular to the fins, and each of the views shown in Figures 9 through 12 is a cross-sectional side view taken across the channel region and parallel to the fins. Figures 8a-d demonstrate an alternative methodology in accordance with another embodiment, and will be discussed in turn. As can be seen in Figure 1, a substrate is provided. Any number of suitable substrates can be used here, including bulk substrates, semiconductor-on-insulator substrates (XOI, where X is a semiconductor material such as Si, Ge, or Ge-enriched Si), and multi-layered structures, and particularly those substrates upon which fins are formed prior to a subsequent gate patterning process. In one specific example case, the substrate is a bulk silicon substrate. In another example case, the substrate is a silicon on insulator (SOI) substrate. In another example case, the substrate is a bulk SiGe substrate. In another example case, the substrate is a multilayered substrate having a SiGe layer on a silicon layer. In another example case, the substrate is a SiGe on insulator (SiGeOI) substrate. Any number of configurations can be used, as will be apparent. Figure 1 further illustrates a patterned hardmask on the substrate, which can be carried out using standard photolithography, including deposition of hardmask materials (e.g., silicon dioxide, silicon nitride, and/or other suitable hardmask materials), patterning resist on a portion of the hardmask that will remain temporarily to protect an underlying region of the substrate that will become the fins, etching to remove the unmasked (no resist) portions of the hardmask (e.g., using a dry etch, or other suitable hardmask removal process), and then stripping the patterned resist material, thereby leaving the patterned hardmask as shown. Alternatively, the hardmask can be selectively deposited in an additive process that does not require etching. In one example embodiment, the resulting hardmask is a standard two-layer hardmask configured with a bottom layer of oxide and a top layer of silicon nitride, and includes three locations, but in other embodiments, the hardmask may be configured differently, depending on the particular active device being fabricated and the number of fins to be formed. In one specific example embodiment having a silicon substrate, the hardmask is implemented with a bottom layer of native oxide (oxidation of the silicon substrate) and a top layer of silicon nitride (SiN). Any number of hardmask configurations can be used, as will be apparent.
As can be seen in Figure 2, shallow trenches are etched into the substrate to form a plurality of fins. The shallow trench etch can be accomplished with standard photolithography, including wet or dry etching, or a combination of etches if so desired. The geometry of the trenches (width, depth, shape, etc.) can vary from one embodiment to the next as will be appreciated, and the claimed invention is not intended to be limited to any particular trench geometry. In one specific example embodiment having a silicon substrate and a two-layer hardmask implemented with a bottom oxide layer and a top SiN layer, a dry etch is used to form the trenches that are about 100 Å to 5000 Å below the top surface of the substrate. Any number of trench configurations can be used, as will be apparent. After the fins are formed, the hardmask can be removed, as shown in the example embodiment of Figure 3. Such complete removal of the hardmask allows for the top of the fin to be cladded so as to form tri-gate structures. In other embodiments, however, note that some of the hardmask may be left behind, so that only sides of the fin are cladded (and not the top) so as to provide a double-gate structure. While the illustrated embodiment shows fins as having a width that does not vary with distance from the substrate, the fin may be narrower at the top than the bottom in another embodiment, wider at the top than the bottom in another embodiment, or may have any other width variations and degrees of uniformity (or non-uniformity). Further note that the width variation may, in some embodiments, be symmetrical or asymmetrical. Also, while the fins are illustrated as all having the same width, some fins may be wider and/or otherwise shaped differently than others. For example, in an embodiment, fins to be used in the creation of NMOS transistors may be narrower than fins to be used in the creation of PMOS transistors. Other arrangements are possible, as will be appreciated. As can be seen in the example embodiment of Figure 4, a cladding layer can then be deposited. In this example case, the cladding deposition is non-selective, in that the entire fin surface area is cladded. In some such non-selective cases where there are both PMOS and NMOS fin-based devices, note that it may be desirable, for instance, to etch off any cladding material from NMOS regions. In some embodiments, the cladding layer can be an epitaxial growth of, for example, a silicon germanium (SiGe) alloy of arbitrary composition, suitable for a given application or otherwise desired. In another example embodiment, the cladding layer can be an epitaxial growth of germanium. Any suitable epitaxial deposition techniques, such as chemical vapor deposition (CVD), rapid thermal CVD (RT-CVD), gas-source molecular beam epitaxy (GS-MBE), etc., can be used to provide the cladding material, as will be appreciated in light of this disclosure. Note that in some embodiments, the cladding layer is free of crystalline defects such as stacking faults and dislocations. While such stacking faults and dislocations may be present at some acceptably low level, their presence above such a threshold may adversely impact the desired channel strain. In this sense, there is a trade-off between the germanium percentage and the thickness of the cladding layer. This is because the overall dislocation-free (strained) thickness is generally a product of composition and layer thickness.
For example, given a SiGe cladding layer of 50% germanium, a cladding layer thickness of about 100 angstroms (Å) or less would be fully strained, but a SiGe cladding layer at 75% germanium might be limited to a cladding layer thickness of only about 50 Å or less before onset of defective deposition. Thus, in one specific embodiment, the cladding layer is a SiGe alloy free of crystalline defects such as stacking faults and dislocations. As used herein, and in accordance with some such embodiments, 'free of crystalline defects' means that the defects in the cladding layer are less than 0.05% by volume or otherwise do not lead to unacceptable shorting/open (yield loss) and performance loss, as measured by a given standard. Further note that the cladding layer critical thickness can vary greatly, and these examples are not intended to limit the claimed invention to a particular range of layer thicknesses. As can be further seen in Figure 4, an optional capping layer can be deposited to protect the cladding layer and/or to improve the gate dielectric / semiconductor interface. In one such embodiment, a silicon capping layer is deposited over a SiGe cladding layer. The deposition techniques for providing the optional capping layer can be, for example, the same as those used in provisioning the cladding layer (e.g., CVD, RT-CVD, GS-MBE, etc.). The thickness of the capping layer can also vary from one embodiment to the next. In some cases, the capping layer has a thickness in the range of 10 to 50 Å. In still other cases, the capping layer has a thickness that is about 10% to 50% of the cladding layer thickness. After provisioning of the cladding layer and optional capping layer, the flow may continue in a conventional manner, in some embodiments, or in a custom or proprietary manner in still other embodiments. As can be seen, Figures 5 through 12 assume that the optional capping layer was not provided. However, configurations that include the capping layer will be readily apparent in light of this disclosure. As can be seen in the example embodiment of Figure 5, the trenches are subsequently filled with an oxide material (or other suitable insulator material), using any number of standard deposition processes. In one specific example embodiment having a silicon substrate and a SiGe cladding layer, the deposited insulator material is silicon dioxide (SiO2), but any number of suitable isolation oxides / insulator materials can be used to form the shallow trench isolation (STI) structures here. In general, the deposited or otherwise grown insulator material for filling the trenches can be selected, for example, based on compatibility with the native oxide of the cladding and/or optional capping material. Note that the gate trench may be circular or polygonal in nature, and any reference to trench 'sides' is intended to refer to any such configurations, and should not be interpreted to imply a particular geometric shaped structure. For instance, trench sides may refer to different locations on a circular-shaped trench or discrete sides of a polygonal-shaped trench or even different locations on one discrete side of a polygonal-shaped trench. In a more general sense, trench 'surfaces' refers to all such trench sides as well as the base (bottom) of the trench. Figure 6 demonstrates how the isolation oxide (or other suitable insulation material) is planarized using, for example, chemical mechanical planarization (CMP) or other suitable process capable of planarizing the structure.
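To give a feel for the composition/thickness trade-off discussed above, the following minimal sketch interpolates between the two example points quoted earlier (roughly 100 Å of fully strained cladding at 50% germanium, and roughly 50 Å at 75% germanium). The function name and the linear interpolation are illustrative assumptions only; real critical-thickness behavior is nonlinear and process-dependent, so this is a rough intuition aid, not a design rule.

```python
# Illustrative sketch only: a rough estimate of the maximum fully strained
# (defect-free) SiGe cladding thickness versus germanium fraction, linearly
# interpolated between the two example points quoted in the text above.
# Real critical-thickness behavior is nonlinear; do not use as a design rule.

def max_strained_thickness_angstroms(ge_fraction: float) -> float:
    """Rough upper bound on defect-free cladding thickness (in angstroms)."""
    if not 0.10 <= ge_fraction <= 0.90:
        raise ValueError("sketch calibrated only for ~10-90% Ge alloys")
    # Calibration points from the text: (0.50, ~100 A) and (0.75, ~50 A).
    slope = (50.0 - 100.0) / (0.75 - 0.50)  # angstroms per unit Ge fraction
    return 100.0 + slope * (ge_fraction - 0.50)

if __name__ == "__main__":
    for ge in (0.30, 0.50, 0.75):
        limit = max_strained_thickness_angstroms(ge)
        print(f"{ge:.0%} Ge -> fully strained up to ~{limit:.0f} A")
```

At 50% and 75% germanium the sketch reproduces the quoted figures by construction; values elsewhere are pure interpolation.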
In the specific example embodiment shown, the planarization leaves at least a portion of the cladding layer. In this sense, the cladding layer can be used as an etch stop. In still other embodiments where hardmask material is left on top of the fins (for a double-gate configuration), a first layer of the hardmask (e.g., pad oxide) can be used as the etch stop, which can also be used as a gate oxide if so desired. In still other such embodiments, the pad oxide can be completely removed, and a dummy oxide can be deposited before putting down the sacrificial gate material. In other embodiments, a high-k dielectric material can be deposited for the gate oxide at this time (or later in the process), as is sometimes done. Figure 7 demonstrates the resulting structure after the STI is recessed to below the top portion of the fin structures. Any suitable etch process (e.g., wet and/or dry etch) can be used to recess the STI. These recessed regions provide isolation for the source/drain regions of the transistor. The depth of the recess can vary from embodiment to embodiment, depending on factors such as desired gate size and height of the overall fin. In some example embodiments, the STI recess depth is such that 35% to 85% of the overall fin height is exposed, although other embodiments may remove more or less of the STI material, depending on what is suitable for the intended application. In one specific example embodiment having a silicon substrate and a SiGe cladding layer and a silicon capping layer, the planarized and etched STI material is SiO2. In another specific example embodiment having a silicon substrate and a germanium cladding layer and a silicon capping layer, the planarized and etched STI material is SiO2 or germanium oxide (GeO2). In another specific example embodiment having SiGe fins and a germanium cladding layer and a silicon capping layer, the planarized and etched STI material is SiO2 or GeO2. In another specific example embodiment having SiGe fins formed on a silicon substrate and a germanium cladding layer and a silicon capping layer, the planarized and etched STI material is SiO2 or GeO2. As will be appreciated, each of these example embodiments can also be made without the capping layer, or with another suitable capping material that may include silicon or not. In some embodiments, the STI recess etching process may alter the thickness of the cladding layer that becomes exposed, such that the exposed portions of the cladding layer may be different (e.g., thinner) than the unexposed portions of the cladding layer. In some embodiments, the initial cladding layer thickness accounts for anticipated thinning due to subsequent processing. Further note that, in still other embodiments, the cladding layer may be provisioned with a non-uniform thickness, in an effort to account for anticipated thinning in certain locations due to subsequent processing. Thus, the initial thickness in those certain locations may be, for instance, thicker than the initial thickness in areas that will not be exposed to subsequent processing. Partial Cladding Layer Figures 8a-8d illustrate a portion of the method shown in Figures 1 through 7 and 9 through 12, in accordance with another embodiment of the present invention. As can be seen in this example case, the cladding layer is not provisioned onto the fins until after the STI recess, thereby effectively providing a partial cladding.
Such a selective deposition process may be suitable, for example, when there is a desire to conserve cladding material and therefore reduce material expense and/or to decrease integration complexity. In this example embodiment, the fins are formed as shown in Figure 8a, and the previous relevant description with reference to Figures 1 through 3 is equally applicable here. Then, rather than applying the cladding layer, the flow continues with filling the trenches with a suitable insulator material (as shown in Figure 8b) and planarizing to remove any excess insulator material (as shown in Figure 8c). To this end, the previous relevant description with reference to Figures 5 and 6 is equally applicable here. The process then continues with recessing the STI, as previously discussed with reference to Figure 7 (as shown in Figure 8d). Once the fins are exposed after the desired STI recess, the cladding layer can then be provisioned as further shown in Figure 8d. The previous relevant description with reference to Figure 4 is equally applicable here. As will be appreciated in light of this disclosure, an optional capping layer (e.g., silicon) may also be provisioned over the cladding layer as previously explained, if so desired. The resulting structure can include any number of fins (one or more), isolated or otherwise surrounded by any suitable isolation material. As previously explained, the fins can be fabricated from the substrate material using photolithography. In other embodiments, the fins can be, for example, epitaxially grown such as described in U.S. Patent No. 8,017,463, titled "Epitaxial Fabrication of Fins for FinFET Devices." In such cases, a fin is effectively formed as a layer in the manufacturing process. By forming a fin layer, fin thickness is determined through control of the process parameters used to form the fin layer rather than photolithographic processes. For instance, if the fin is grown with an epitaxial process, the fin's thickness will be determined by the growth dynamics of the epitaxy. FinFETs whose fin widths are determined through layer formation rather than photolithography may offer improved minimum feature sizes and packing densities. In other embodiments, the fins can be fabricated by removal of material by cutting or ablation, for example, using a laser, or other suitable tools capable of fine-cutting semiconductor materials. Resulting fin geometries will generally vary depending on the formation techniques employed. Sacrificial Gate Stack As previously explained, each of the views shown in Figures 9 through 12 is a cross-sectional side view taken across the channel region and parallel to the fins. This portion of the process effectively forms the gate stack using a replacement metal gate (RMG) process, in accordance with some embodiments. The RMG process can be carried out in a conventional manner, in some such cases, or in a custom or proprietary manner in still other cases. In general, and in accordance with some such embodiments, once the cladded fins are formed, a sacrificial gate material can be deposited on the cladded fins. In some cases, a sacrificial gate dielectric material may be deposited on the cladded fins, and then the sacrificial gate material is deposited on the sacrificial gate dielectric material. The deposited sacrificial gate material can then be planarized to remove any undesired topology and/or excess sacrificial gate material.
A hardmask can then be provisioned and patterned on the sacrificial gate material layer, as typically done, followed by an etch process that results in the formation of sacrificial gate stacks such as the one generally shown in Figure 9. Figure 9 illustrates patterning of the sacrificial gate material, in accordance with one specific example embodiment of the present invention. In some cases, this patterning can be carried out, for example, from a single depth of focus due to pre-patterning planarization of the sacrificial material layer, and using standard photolithography including deposition of hardmask materials (e.g., SiO2, SiN, and/or other suitable hardmask materials) on the sacrificial gate material, patterning resist on a portion of the hardmask that will remain temporarily to protect the underlying gate region of the device, etching to remove the unmasked (no resist) portions of the hardmask (e.g., using a dry etch, or other suitable hardmask removal process), and then stripping the patterned resist, thereby leaving the patterned gate mask. In one specific example embodiment having a silicon substrate, the hardmask is implemented with SiN (e.g., 100 Å to 500 Å thick). Any number of suitable hardmask configurations can be used, as will be apparent in light of this disclosure. Once the gate pattern hardmask is complete, etching can be carried out to remove the non-masked sacrificial gate material (and any remaining dummy gate dielectric material and/or pad oxide) down to the substrate and slightly into the substrate to form the source/drain regions, in accordance with some example embodiments. The etching can be accomplished with standard photolithography including, for example, dry etching or any suitable etch process or combination of etches. Note that the source/drain regions may be formed using the gate structure as a mask. In some embodiments, ion implantation may be used to dope the source/drain regions as conventionally done. The geometry of the resulting gate structure (e.g., width, depth, shape), as well as the shape and depth of the source/drain regions, can vary from one embodiment to the next as will be appreciated, and the claimed invention is not intended to be limited to any particular device geometries. This gate patterning can be used to simultaneously produce a plurality of such structures where, for example, all the transistors to be formed will be the same, or some transistors are one type/configuration (e.g., PMOS) and the remainder are another type/configuration (e.g., NMOS). The deposition of gate stack materials can be carried out, for example, using CVD or other suitable process. In one specific example embodiment, the substrate is a bulk silicon substrate, the recessed STI material is SiO2, the fins are silicon (formed in the substrate), the cladding is SiGe, and the sacrificial gate material is polysilicon. Note, however, that the sacrificial gate material can be any suitable sacrificial material (e.g., polysilicon, silicon nitride, silicon carbide, etc.). In some embodiments that include a sacrificial gate dielectric material, the sacrificial gate dielectric material can be, for instance, SiO2 or any other suitable dummy gate insulator material. Once the sacrificial gate stacks are formed, an RMG process and transistor formation can take place, as will now be described, in accordance with some example embodiments of the present invention.
RMG Process and Transistor Formation Figures 9 through 12 further illustrate an RMG process flow and transistor formation, in accordance with an embodiment of the present invention. As can be seen, one transistor is shown, but any number of transistors can be formed using the same processes, as will be appreciated. In addition, the transistors formed may be implemented in a number of configurations (e.g., PMOS, NMOS, or both, such as the case in complementary pair formation). In short, the techniques provided herein can be used with any type of transistor technology or configuration, and the claimed invention is not intended to be limited to any particular transistor type or configuration. Figure 10 illustrates a cross-sectional side view (perpendicular to the gates and parallel to the fins) of an example transistor structure formed with the patterned gate structure of Figure 9, in accordance with one embodiment of the present invention. As can be seen, a spacer material is deposited and anisotropically etched to form sidewall spacers about the gate structure walls. The spacers may be, for example, a nitride that is deposited on the order of 50 Å to 500 Å thick, in some embodiments. With respect to forming a P+ doped source/drain region for PMOS (as shown), a trench is etched into the substrate (e.g., by reactive ion etching). In this example configuration, the etching is constrained on one side by the previously formed STI neighboring each source/drain region and does not substantially isotropically undercut the gate structure on the other side. As such, an isotropic etch profile may be achieved on the inward edges of the trench, while leaving a small portion of the lightly doped source/drain region (under the spacer material, as shown). Then, an epitaxial source/drain can be grown which fills the trench and extends thereabove as indicated in Figure 10. The trench may be filled, for example, using a growth of silicon germanium having 10-40 atomic percent germanium, in some embodiments. The source/drain doping may be done, for instance, by in-situ doping using a diborane source. The epitaxial source/drain only grows in the trench because all other material is masked or covered. The source/drain is raised and continues to grow until the facets meet. Note that if fabricating a complementary device having both PMOS and NMOS, the NMOS side can be covered by an oxide mask during PMOS doping region formation, in some embodiments. A source/drain implant may be used in some embodiments. Other embodiments may employ only NMOS source/drain formation, which may involve N+ doped regions that are not grown above the surface. Any number of suitable source/drain materials, as well as formation and doping techniques, can be used. After source/drain formation and doping, an etch stop layer can be deposited (to protect doped source/drain regions during subsequent etching), if necessary. An inter-layer dielectric (ILD) is then deposited over the structure. The ILD can be, for example, any suitable low dielectric constant material such as an oxide (e.g., SiO2), and the etch stop layer can be, for instance, a nitride (e.g., SiN). In some cases, the ILD may be doped with phosphorus, boron, or other materials and may be formed by high density plasma deposition. The ILD may then be planarized down to the upper surface of the sacrificial gate material, thereby removing the hardmask and the etch stop (if applicable) to open the gate, as shown in Figure 10.
As will be appreciated, the optional etch stop can be helpful in fabricating NMOS devices by acting as a tensile layer, but may degrade PMOS devices by producing undesired strain. As shown in Figure 11, the sacrificial gate material can be removed from between the spacers, thereby forming a gate trench over the previously provisioned cladding layer, in some embodiments (tri-gate configuration). In other embodiments, the sacrificial gate material can be removed from between the spacers, thereby forming a gate trench over the remaining pad oxide or other hardmask material left in place on the fin top (double-gate configuration). Removal of the sacrificial gate material may be done, for example, by any of a variety of suitable dry and/or wet etch techniques. In some applications having both PMOS and NMOS transistors, note that the sacrificial gate material for the NMOS and PMOS devices can be removed at the same time, or at different times using selective etching. Any number of suitable etch schemes can be used here, as will be apparent. As shown in Figure 12, a high-k gate dielectric layer and then gate metal are deposited (e.g., via CVD or other suitable process) directly on the cladding layer (or the optional capping layer if present, as shown in Figures 13a-b) and exposed gate trench surfaces, and any excess gate metal may be planarized to form a metal gate electrode as shown. The gate metal can be, for example, titanium, platinum, cobalt, nickel, titanium nickel, palladium, or other suitable gate metal or combination of such metals. In double-gate configurations where some of the hardmask is left on top of the fin (such as a pad oxide), after removing the sacrificial gate material, that pad oxide or other hardmask material can also be removed. Then, a high-k gate dielectric can be deposited directly on the cladding layer (or the optional capping layer if present, as shown in Figures 13a-b) and exposed gate trench surfaces, and planarized or otherwise shaped as desired. The high-k gate dielectric may comprise any suitable material (e.g., hafnium oxide, zirconium oxide, and aluminum oxide). Any number of suitable high-k gate dielectrics and treatments can be used, as is sometimes done, depending on factors such as desired isolation. Other embodiments may employ gate dielectrics that have a dielectric constant on par with SiO2, or lower if so desired. Cladding After Sacrificial Gate Stack Removal Numerous variations on the techniques provided herein will be apparent. For instance, in another embodiment, the cladding layer can be added after the removal of the sacrificial gate stack material. In Figure 11, for example, assume that the cladding layer is applied to the bottom of the gate trench after the removal process. In one such embodiment, the cladding layer can be a SiGe cladding layer formed on a silicon fin top after removal of a sacrificial polysilicon gate and gate oxide. In such cases, the strained SiGe cladding layer can be selectively grown on the exposed silicon fin areas in the gate trench. Again, in some such embodiments, the cladding layer can be capped with silicon, and then high-k / metal gate processing may continue as described herein or as otherwise desired. Note that both the SiGe cladding and silicon capping layer depositions could be either selective or non-selective.
Another variation on this option for adding the cladding layer after removal of the sacrificial gate stack material includes adding a fin recess etch to effectively thin the fin before adding the cladding film. Any suitable etch processes can be used to carry out this thinning (e.g., isotropic etch). Such an option would allow thin fin widths in the channel, and also allows additional surfaces of the fin to be cladded. The resulting thin cladded fin could again be capped as described herein. In one such example case having a silicon fin with SiGe cladding and a silicon capping layer, note that both the SiGe and silicon depositions can be either selective or non-selective. As can be further seen in the example embodiments shown in Figures 10-13b, an STI is provisioned and the source/drain regions have a raised faceted pointy shape. Other embodiments may not include such features, as will be appreciated. For example, Figures 14a-14b each illustrates a resulting fin-based transistor structure, in accordance with other embodiments of the present invention. The example embodiment shown in Figure 14a includes source/drain regions that are raised and relatively flat, and include tip regions that undercut both the spacer and gate dielectric region, while the example embodiment shown in Figure 14b includes source/drain regions that are relatively flush with the fin top and that only undercut the spacer region of the gate stack. Numerous variations and features may be integrated into the structure, depending on factors such as desired performance and fab capability. As a further example, the width of the spacers can vary from one case to the next, and in one specific example case is one-half the gate length, although any other suitable spacer width can be used as well. The source/drain (S/D) metal may be implemented, for example, with a contact metal (or series of metals) that can then be deposited, and a subsequent reaction (annealing) can be carried out to form, for example, metal silicide and/or metal germanide source and drain contacts. As will be further appreciated, the contact may be implemented as a stack including one or more of a silicide/germanide layer, an adhesion layer, and/or a metal pad layer. Example contact metals include titanium, platinum, cobalt, nickel, titanium nickel, palladium, or any suitably conductive contact metal or alloys thereof. The insulator material can be, for instance, SiO2, but in other embodiments may be a low-k or high-k dielectric material that provides the desired insulation and may further provide structural integrity. As will be further appreciated in light of this disclosure, any number of other transistor features may be implemented with an embodiment of the present invention. For instance, the source/drain regions may or may not include tip regions formed in the area between the corresponding source/drain region and the channel region. Likewise, the source/drain regions may be strained or not strained. In this sense, whether a transistor structure has strained or unstrained S/D regions, or S/D tip regions or no S/D tip regions, is not particularly relevant to various embodiments of the present invention, and such embodiments are not intended to be limited to any particular such structural features. Rather, any number of fin-based transistor structures and types can benefit from employing a SiGe or germanium cladding layer in the channel region as described herein.
The example embodiments shown in Figures 14a-b each include the optional capping layer as well, but other such embodiments may not include the capping layer. Likewise, other such embodiments may include some transistors that have the channel cladding layer, while other transistors on the same die can be configured without the cladding layer. Thus, Figures 1-14b illustrate various example transistor structures and fabrication processes, wherein a cladding material such as strained SiGe or germanium is provisioned on the channel area of a silicon or SiGe fin. The strained cladding may be, for example, on both sides and the top of the fin (such as in a tri-gate configuration) or only on the sides of the fin (such as in a double-gate configuration) or only on the top of the fin. Numerous variations and modifications will be apparent in light of this disclosure. The various layers and features may be implemented with any suitable dimensions and other desired layer parameters, using established semiconductor processes (e.g., CVD, MBE, photolithography, and/or other such suitable processes). In general, the specific layers and dimensions of the structure will depend on factors such as the desired device performance, fab capability, and semiconductor materials used. Specific device materials, features, and characteristics are provided for example only, and are not intended to limit the claimed invention, which may be used with any number of device configurations and material systems. Simulation shows the expected stress state in the fin and cladding layers and the hole mobility due to that stress state. For instance, in one example embodiment, simulated stress for SiGe cladding on silicon fin structures was determined. In particular, for a SiGe cladding layer having 50% silicon and 50% germanium (Si50Ge50), a large compressive stress state occurs in the SiGe along the current flow (e.g., SiGe ~ -3.6 GPa and Si ~ 0.65 GPa). In addition, a significant vertical stress occurs in the SiGe cladding (e.g., SiGe ~ -1.8 GPa and Si ~ 1.8 GPa). In this example case, the stress state is in between uniaxial and biaxial on the sidewalls. In some cases, the expected mobility response can be determined as a function of the germanium fraction in the cladding layers. For instance, the expected mobility is less than that for pure uniaxial stress but higher than that for biaxially stressed SiGe. For germanium percentages greater than about 30%, the expected hole mobility is large. Note that the stress along the current flow direction and the vertical stress versus gate length can vary from one embodiment to the next. For instance, for one embodiment, assume a silicon fin is thinned at the replacement metal gate location and then a strained SiGe cladding layer is provisioned. In another embodiment, assume a silicon fin is non-selectively cladded with SiGe (upfront in the process). The strain for the first embodiment (with the thinned fin) is not as high as for the second embodiment (with the upfront cladding process), but it is still sufficiently high and may make integration easier since a cladding layer such as germanium or SiGe is added later in the process flow. Further note the additive nature of germanium or SiGe cladding in the channel area as described herein in addition to SiGe in the source/drain regions. For instance, assume a silicon fin is non-selectively cladded with a Si50Ge50 film, and further assume that the source/drain regions are also provisioned with Si50Ge50.
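The short sketch below simply tabulates the simulated stress values for this Si50Ge50 example: the baseline cladding-only numbers quoted above, plus the enhanced numbers reported in the paragraph that follows. All values are copied from the text (negative meaning compressive, in GPa); nothing is computed from a physics model, and the stage labels are informal.

```python
# Bookkeeping only: simulated SiGe-cladding channel stresses (GPa, negative =
# compressive) quoted in the surrounding text for a Si50Ge50 cladding on a
# silicon fin. Values are transcribed, not computed from any physics model.

stages = [
    ("cladding only",              -3.6, -1.8),
    ("+ SiGe source/drain",        -4.9, -2.6),
    ("+ sacrificial poly removed", -5.1, -1.8),
]

print(f"{'process stage':<30}{'along flow':>12}{'vertical':>10}")
previous = None
for name, along_flow, vertical in stages:
    note = "" if previous is None else f"   (change along flow: {along_flow - previous:+.1f})"
    print(f"{name:<30}{along_flow:>12.1f}{vertical:>10.1f}{note}")
    previous = along_flow
```

The stage-to-stage changes make the additive nature of the channel cladding and source/drain stressors easy to see at a glance.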
As previously indicated, simulations indicate that a large compressive stress state occurs in the SiGe cladding along the current flow (e.g., SiGe ~ -3.6 GPa) and a significant vertical stress occurs in the SiGe cladding (e.g., SiGe ~ -1.8 GPa). Addition of the SiGe source/drain regions further enhances the stress according to simulations, which indicate a larger compressive stress state occurs in the SiGe cladding along the current flow (e.g., SiGe ~ -4.9 GPa) and a vertical stress occurs in the SiGe cladding (e.g., SiGe ~ -2.6 GPa). The strain scheme may further change after removal of sacrificial gate stack materials. For instance, after polysilicon removal, simulations indicate a larger compressive stress state occurs in the SiGe cladding along the current flow (e.g., SiGe ~ -5.1 GPa) and a slight decrease in vertical stress occurs in the SiGe cladding (e.g., SiGe ~ -1.8 GPa). Example System Figure 15 illustrates a computing system implemented with one or more integrated circuit structures configured in accordance with an embodiment of the present invention. As can be seen, the computing system 1000 houses a motherboard 1002. The motherboard 1002 may include a number of components, including but not limited to a processor 1004 and at least one communication chip 1006 (two are shown in this example), each of which can be physically and electrically coupled to the motherboard 1002, or otherwise integrated therein. As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board, whether a main board, a daughterboard mounted on a main board, or the only board of system 1000, etc. Depending on its applications, computing system 1000 may include one or more other components that may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the components included in computing system 1000 may include one or more integrated circuit structures configured with transistors having cladded channels as described herein. In some embodiments, multiple functions can be integrated into one or more chips (for instance, note that the communication chip 1006 can be part of or otherwise integrated into the processor 1004). The communication chip 1006 enables wireless communications for the transfer of data to and from the computing system 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
The communication chip 1006 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing system 1000 may include a plurality of communication chips 1006. For instance, a first communication chip 1006 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1006 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 1004 of the computing system 1000 includes an integrated circuit die packaged within the processor 1004. In some embodiments of the present invention, the integrated circuit die of the processor 1004 includes one or more transistors having SiGe or germanium cladded channels as described herein. The term "processor" may refer to any device or portion of a device that processes, for instance, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 1006 may also include an integrated circuit die packaged within the communication chip 1006. In accordance with some such example embodiments, the integrated circuit die of the communication chip 1006 includes one or more transistors having SiGe or germanium cladded channels as described herein. As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into processor 1004, rather than having separate communication chips). Further note that processor 1004 may be a chip set having such wireless capability. In short, any number of processors 1004 and/or communication chips 1006 can be used. Likewise, any one chip or chip set can have multiple functions integrated therein. In various implementations, the computing system 1000 may be a laptop, a netbook, a notebook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the system 1000 may be any other electronic device that processes data or employs transistor devices having cladded channels as described herein (e.g., PMOS transistors configured with SiGe or germanium cladded channels). As will be appreciated in light of this disclosure, various embodiments of the present invention can be used to improve performance on products fabricated at any process node (e.g., in the micron range, or sub-micron and beyond) by allowing for the use of transistors having a stress-enhanced channel and increased mobility. Numerous embodiments will be apparent, and features described herein can be combined in any number of configurations. One example embodiment of the present invention provides a semiconductor device. The device includes a fin on a substrate, the fin comprising a semiconductor material and having a channel region and corresponding source/drain regions adjacent thereto.
The device further includes a cladding layer of germanium or silicon germanium (SiGe) on one or more surfaces of the channel region of the fin. The device further includes a gate dielectric layer over the cladding layer, a gate electrode on the gate dielectric layer, and source/drain material in each of the source/drain regions. In some cases, the device further includes a capping layer between the cladding layer and the gate dielectric layer. In one such case, the capping layer is or otherwise comprises silicon. In some cases, the source/drain material is SiGe. In some cases, the fin is silicon or SiGe. In some cases, at least one of the cladding layer and the fin comprises 10% to 90% germanium. In some cases, the substrate comprises a first material and the fin comprises a second material different from the first material. In some cases, the substrate comprises a silicon layer and the fin is SiGe and the cladding layer is germanium. In some cases, the cladding layer covers side portions and a top portion of the fin. Numerous variations will be apparent. For instance, another embodiment provides a mobile computing device that includes the semiconductor device as variously defined in this paragraph. Another embodiment of the present invention provides a semiconductor device. In this example case, the device includes a fin on a substrate, the fin comprising a semiconductor material and having a channel region and corresponding source/drain regions adjacent thereto, wherein the fin is silicon or silicon germanium (SiGe). The device further includes a cladding layer of germanium or SiGe on one or more surfaces of the channel region of the fin. The device further includes a capping layer on the cladding layer, wherein the capping layer is or otherwise comprises silicon. The device further includes a gate dielectric layer on the capping layer, a gate electrode on the gate dielectric layer, and source/drain material in each of the source/drain regions, wherein the source/drain material is SiGe. In some cases, at least one of the cladding layer and the fin comprises 10% to 90% germanium. In some cases, the substrate comprises a first material and the fin comprises a second material different from the first material. In some cases, the substrate comprises a silicon layer and the fin is SiGe and the cladding layer is germanium. In some cases, the fin is silicon and the cladding layer is SiGe. In some cases, the cladding layer covers side portions and a top portion of the fin so as to provide a tri-gate transistor. Another embodiment provides a communication device comprising the semiconductor device as variously defined in this paragraph. Another embodiment of the present invention provides a mobile computing system. The system includes a printed circuit board, a processor operatively coupled to the printed circuit board, a memory operatively coupled to the printed circuit board and in communication with the processor, and a wireless communication chip operatively coupled to the printed circuit board and in communication with the processor. At least one of the processor, wireless communication chip, and/or the memory comprises a semiconductor device. The semiconductor device includes a fin on a substrate, the fin comprising a semiconductor material and having a channel region and corresponding source/drain regions adjacent thereto. The semiconductor device further includes a cladding layer of germanium or SiGe on one or more surfaces of the channel region of the fin.
The semiconductor device further includes a gate dielectric layer over the cladding layer, a gate electrode on the gate dielectric layer, and source/drain material in each of the source/drain regions. In some cases, the semiconductor device further includes a capping layer between the cladding layer and the gate dielectric layer, wherein the capping layer is or otherwise comprises silicon. In some cases, the fin is silicon, the cladding layer is SiGe, and the source/drain material is SiGe. In one such case, the cladding layer SiGe is different from the fin SiGe. In some cases, the substrate comprises a first material and the fin comprises a second material different from the first material. In some cases, the substrate comprises a silicon layer and the fin is SiGe and the cladding layer is germanium. In some cases, the cladding layer covers side portions and a top portion of the fin. The foregoing description of example embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of this disclosure. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
The invention relates to self-reference sensing for memory cells. Methods, systems, and apparatuses for self-referencing sensing schemes of memory cells are described. A cell having two transistors, or other switching components, and one capacitor, such as a ferroelectric capacitor, may be sensed using a reference value that is specific to the cell. The cell may be read and sampled via one access line, and the cell may be used to generate a reference voltage and sampled via another access line. For instance, a first access line of a cell may be connected to one read voltage while a second access line of the cell is isolated from a voltage source; then the second access line may be connected to another read voltage while the first access line is isolated from a voltage source. The resulting voltages on the respective access lines may be compared to each other, and a logic value of the cell determined from the comparison.
1.A method for access operations, which includes:Activating the first switch component, which is coupled between the first read voltage source and the first access line;After activating the first switch component, sensing the value representing the first state associated with the ferroelectric memory cell at the second access line;After sensing the value representing the first state, activate a second switch element, the second switch element being coupled between a second read voltage source and a second access line;Generating a reference value based at least in part on activating the second switch assembly; andDetermining a logic value stored at the ferroelectric memory cell, wherein the logic value is determined based at least in part on comparing the value representing the first state of the ferroelectric memory cell with the reference value .2.The method of claim 1, wherein:Sensing the value representing the first state associated with the ferroelectric memory cell includes applying the first read voltage source to the first access line, wherein the first read voltage source Generating a first voltage across the second access line; andGenerating the reference value includes applying the second read voltage source to the second access line, wherein the second read voltage source generates a second voltage across the first access line.3.The method of claim 2, wherein the first read voltage source and the second read voltage source are the same voltage source.4.3. The method of claim 2, wherein the second read voltage source is a different voltage source from the first read voltage source.5.The method of claim 2, further comprising:Sampling the first voltage of the second access line at the first node of the sensing component;After sampling the first voltage, isolate the first node;Sampling the second voltage of the first access line at the second node of the sensing component; andAfter sampling the second voltage, isolate the second node.6.The method of claim 1, further comprising:After the reference value is generated, the logic value is written into the ferroelectric memory cell.7.An electronic memory device, which includes:A memory cell including a first transistor, a second transistor, and a ferroelectric capacitor, wherein the memory cell electronically communicates with a first access line via the first transistor and electronically with a second access line via the second transistor CommunicationA latch that electronically communicates with the first access line through a first switch component and electronically communicates with the second access line through a second switch component;A first voltage source that electronically communicates with the first access line and the latch via a third switch assembly; andA second voltage source electronically communicates with the second access line and the latch via a fourth gate component.8.The electronic memory device of claim 7, wherein the latch includes a plurality of inverters.9.The electronic memory device of claim 8, wherein the first access line electronically communicates with the latch at a first node, and the second access line communicates with the latch at a second node. The memory is in electronic communication, and the first node and the second node are in electronic communication via a seventh switch assembly.10.7. The electronic memory device of claim 7, wherein the first access line and ground or virtual ground are in electronic communication via a fifth switch assembly.11.10. 
The electronic memory device of claim 10, wherein the second access line and the ground or virtual ground are in electronic communication via a sixth switch assembly.12.7. The electronic memory device of claim 7, wherein the first voltage source is configured to provide a first read voltage, and the second voltage source is configured to provide a second read voltage, the second read voltage The taken voltage is the same as the first read voltage.13.7. The electronic memory device of claim 7, wherein the first voltage source is configured to provide a first read voltage, and the second voltage source is configured to provide a second read voltage, the second read voltage The taken voltage is different from the first read voltage.14.A device including:A memory cell coupled between the first access line and the second access line;A latch coupled to the first access line via a first switch element and coupled to the second access line via a second switch element; andA controller coupled to the first switch component and the second switch component, wherein the controller is configured to cause the device to perform the following operations:Start the second switch assembly;After activating the second switch component, sensing the value representing the first state of the memory cell;After sensing the value representing the first state, activate the first switch component;Generating a reference value based at least in part on activating the first switch component;The logical value stored at the memory cell is determined, wherein the logical value is based at least in part on comparing the value representing the first state of the memory cell with the reference value.15.The device of claim 14, wherein the controller is further configured to cause the device to perform the following operations:Applying a first voltage from a first voltage source to the first access line, wherein the second switching element is activated based at least in part on applying the first voltage to the first access line; andApplying a second voltage from a second voltage source to the second access line, wherein the first switch element is activated based at least in part on applying the second voltage to the second access line.16.The device of claim 15, wherein the controller is further configured to cause the device to perform the following operations:A third switch element is activated, the third switch element is coupled to the first access line and the first voltage source, wherein the application of the third switch element to the first access line is based at least in part on the activation of the third switch element The first voltage; andA fourth switch element is activated, the fourth switch element is coupled to the second access line and the second voltage source, wherein the application of the fourth switch element to the second access line is based at least in part on the activation of the fourth switch element The second voltage.17.The device of claim 15, wherein the controller is further configured to cause the device to perform the following operations:After applying the first voltage to the first access line and before applying the second voltage to the second access line, the first access line and the second access line Ground.18.The device of claim 17, wherein the controller is further configured to cause the device to perform the following operations:After comparing the first state of the memory cell with the reference value, the logic value is written to the memory cell.19.The device of claim 14, 
wherein the controller is further configured to cause the device to:
after sensing the value representing the first state of the memory cell, deactivate the second switch component to isolate a first node of the latch; and
after the reference value is generated, deactivate the first switch component to isolate a second node of the latch, wherein the logic value stored at the memory cell is determined based at least in part on deactivating the second switch component and the first switch component.
20. The device of claim 14, wherein the memory cell comprises a ferroelectric capacitor, a first transistor coupled to a first plate of the ferroelectric capacitor, and a second transistor coupled to a second plate of the ferroelectric capacitor.
Self-reference sensing for memory cells
Information about divisional application
This application is a divisional application. The parent application is an invention patent application with a filing date of July 4, 2018, application number 201810723546.8, and the title "Self-Reference Sensing for Memory Cells".
Cross reference
This patent application claims priority to U.S. Patent Application No. 15/641,783, titled "Self-Reference Sensing For Memory Cells" and filed on July 5, 2017 by Muzzetto, which is assigned to the assignee of the present application and is expressly incorporated herein by reference in its entirety.
Technical field
The following relates generally to memory devices and, more specifically, to a self-reference sensing scheme for memory cells.
Background
Memory devices are widely used to store information in various electronic devices such as computers, wireless communication devices, cameras, and digital displays. Information is stored by programming different states of a memory device. For example, a binary device has two states, often denoted as a logic "1" or a logic "0". In other systems, more than two states may be stored. To access the stored information, the electronic device may read, or sense, the stored state in the memory device. To store information, the electronic device may write, or program, the state in the memory device.
Various types of memory devices exist, including random access memory (RAM), read only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, and others. Memory devices may be volatile or non-volatile. Non-volatile memory, such as flash memory, can store data for extended periods of time even in the absence of an external power source. Volatile memory devices (e.g., DRAM) may lose their stored state over time unless they are periodically refreshed by an external power source. A binary memory device may, for example, include a charged or discharged capacitor. A charged capacitor may, however, discharge over time through leakage currents, resulting in the loss of the stored information. Certain features of volatile memory may offer performance advantages, such as faster read or write speeds, while features of non-volatile memory, such as the ability to store data without periodic refreshing, may be advantageous.
FeRAM may use a device architecture similar to that of volatile memory but may have non-volatile properties due to the use of a ferroelectric capacitor as the storage device. FeRAM devices may therefore have improved performance compared with other non-volatile and volatile memory devices. Some FeRAM sensing schemes, however, do not account for variations within the memory cell.
This can reduce the reliability of sensing operations for the memory cell.
Summary of the invention
In one aspect, the present invention provides a method, comprising: during a first part of an access operation, applying a first read voltage to a first access line of a ferroelectric memory cell; during a second part of the access operation, applying a second read voltage to a second access line of the ferroelectric memory cell; during a third part of the access operation, comparing a first voltage of the second access line with a second voltage of the first access line, wherein the first voltage is based at least in part on the application of the first read voltage and the second voltage is based at least in part on the application of the second read voltage; and determining a logic value associated with the ferroelectric memory cell based at least in part on comparing the first voltage of the second access line with the second voltage of the first access line.
In another aspect, the present invention provides a method for an access operation, comprising: activating a first switch component coupled between a first read voltage source and a first access line; after activating the first switch component, sensing, at a second access line, a value representing a first state associated with a ferroelectric memory cell; after sensing the value representing the first state, activating a second switch component coupled between a second read voltage source and the second access line; generating a reference value based at least in part on activating the second switch component; and determining a logic value stored at the ferroelectric memory cell, wherein the logic value is determined based at least in part on comparing the value representing the first state of the ferroelectric memory cell with the reference value.
In another aspect, the present invention provides an electronic memory device comprising: a memory cell including a first transistor, a second transistor, and a ferroelectric capacitor, wherein the memory cell is in electronic communication with a first
access line via the first transistor and in electronic communication with a second access line via the second transistor; a latch in electronic communication with the first access line via a first switch component and in electronic communication with the second access line via a second switch component; a first voltage source in electronic communication with the first access line and the latch via a third switch component; and a second voltage source in electronic communication with the second access line and the latch via a fourth switch component.
In another aspect, the present invention provides an electronic memory device comprising: a ferroelectric memory cell including a first transistor, a second transistor, and a ferroelectric capacitor, wherein the ferroelectric memory cell is in electronic communication with a first access line via the first transistor and in electronic communication with a second access line via the second transistor; and a controller in electronic communication with the ferroelectric memory cell, wherein the controller is operable to: during a first part of an access operation, apply a first read voltage to the first access line; during a second part of the access operation, apply a second read voltage to the second access line; during a third part of the access operation, compare a first voltage of the second access line with a second voltage of the first access line, wherein the first voltage is based at least in part on the first read voltage and the second voltage is based at least in part on the second read voltage; and determine a logic value associated with the ferroelectric memory cell based at least in part on comparing the first voltage of the second access line with the second voltage of the first access line.
In another aspect, the present invention provides an electronic memory device comprising: means for applying a first read voltage to a first access line during a first part of an access operation; means for applying a second read voltage to a second access line during a second part of the access operation; means for comparing a first voltage of the second access line with a second voltage of the first access line during a third part of the access operation, wherein the first voltage is based at least in part on the first read voltage and the second voltage is based at least in part on the second read voltage; and means for determining a logic value associated with a ferroelectric memory cell based at least in part on comparing the first voltage of the second access line with the second voltage of the first access line.
Description of the drawings
The disclosure herein refers to and includes the following figures:
Figure 1 illustrates an example memory array supporting a self-reference sensing scheme according to an example of the present disclosure;
Figure 2 illustrates an example circuit supporting a self-reference sensing scheme according to an example of the present disclosure;
Figure 3 illustrates example hysteresis curves for a cell supporting a self-reference sensing scheme according to an example of the present disclosure;
Figure 4 illustrates an example circuit supporting a self-reference sensing scheme according to an example of the present disclosure;
Figure 5 illustrates an example ferroelectric memory array supporting a self-referencing sense amplifier according to an
example of the present disclosure;
Figure 6 illustrates a device including a memory array supporting a self-referencing sense amplifier according to an example of the present disclosure; and
Figures 7-8 are flowcharts illustrating one or more methods for a self-reference sensing scheme according to examples of the present disclosure.
Detailed description
Increased sensing reliability for a memory cell may be achieved by providing a sensing scheme with a voltage reference that is specific to, or based on, the selected memory cell. By first reading a specific memory cell and generating a reference value based on that read operation, a wider read margin may be obtained. As described below, cell-specific reference values may be used to sense a cell having two transistors, or other switch components, and one capacitor, such as a ferroelectric capacitor. The cell may be read and sampled via one access line, and the cell may then be used to generate a reference voltage that is sampled via the other access line. For example, several voltages may be applied across the first access line and the second access line. These applied voltages result in particular voltage values across the two access lines, and these voltage values may be used in determining the stored logic state of the memory cell.
By way of example, the capacitor of the memory cell may store a charge representing a specific logic state (e.g., a logic "1" or a logic "0"). When the read signal of the memory cell is generated, a read voltage may be applied to the first of the two access lines. The voltage of the second access line may then depend on the parasitic capacitance of the second access line. The value across the second access line may be provided to a sense amplifier for use in determining the logic state of the particular ferroelectric memory cell, and potentially for a subsequent write operation.
Subsequently, when the voltage reference value is generated, the read voltage may be applied to the second access line. The voltages across the first access line and the second access line may then be equivalent. This voltage value may also be provided to the sense amplifier. Upon receiving both the read signal and the voltage reference value, the sense amplifier can better account for variations within the ferroelectric memory cell. The provided voltage values thus allow the sense amplifier to determine the logic value of the ferroelectric memory cell more reliably.
The features of the present invention introduced above are further described below in the context of a memory array. Next, circuit and cell characteristics of memory cells and arrays supporting the self-reference sensing scheme are described. These and other features of the present invention are further illustrated and described with reference to device diagrams, system diagrams, and flowcharts related to self-reference sensing schemes.
Figure 1 illustrates an example memory array 100 that supports a self-reference sensing scheme according to various embodiments of the present disclosure. The memory array 100 may also be referred to as an electronic memory device. The memory array 100 includes memory cells 105 that are programmable to store different states. Each memory cell 105 may be programmable to store two states, denoted as a logic "0" and a logic "1". In some cases, a memory cell 105 may be configured to store more than two logic states.
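Purely as an illustration of the two-phase flow just described, the sketch below models it in Python. Everything in it is an assumption for the example's sake: the names (`shared_voltage`, `self_reference_read`), the capacitance and polarization numbers, and the simplified charge model are not part of the disclosed circuits; the sketch only shows why a cell-specific reference separates the two stored states.

```python
# Illustrative behavioral model of the two-phase self-reference read.
# All values and names are hypothetical; this is not the patented circuit.

C_LINE = 1.0e-12   # assumed intrinsic (parasitic) digit-line capacitance, farads
C_CELL = 0.2e-12   # assumed linear component of the cell capacitance, farads
P_R    = 0.15e-12  # assumed residual polarization charge, coulombs

def shared_voltage(v_read: float, switches: bool) -> float:
    """Voltage developed on the opposite access line by charge sharing."""
    q = C_CELL * v_read + (2.0 * P_R if switches else 0.0)
    return q / (C_LINE + C_CELL)

def self_reference_read(stored_bit: int, v_read1: float = 1.5,
                        v_read2: float = 1.2) -> int:
    # Phase 1: read voltage on the first access line; a stored "1"
    # (opposing polarization) switches and releases extra charge.
    v_signal = shared_voltage(v_read1, switches=(stored_bit == 1))
    # Both lines are then grounded; the cell keeps the polarization
    # imposed by the phase-1 read.
    # Phase 2: read voltage on the second access line; the cell always
    # switches now, so the result depends only on the cell itself.
    v_ref = shared_voltage(v_read2, switches=True)
    # Sense amplifier: compare signal against the cell-specific reference.
    return 1 if v_signal > v_ref else 0

for bit in (0, 1):
    print(bit, "->", self_reference_read(bit))  # expect 0 -> 0 and 1 -> 1
```

Note that choosing `v_read2` slightly below `v_read1` in this toy model plays the role of the offset between the two read voltages discussed below with reference to FIG. 4.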
The memory cell 105 may include a capacitor that stores a charge representing the programmable state; for example, a charged and an uncharged capacitor may respectively represent two logic states. DRAM architectures may commonly use such a design, and the capacitor employed may include a dielectric material with linear polarization properties. In contrast, a ferroelectric memory cell may include a capacitor that has a ferroelectric as the dielectric material. Different levels of charge of a ferroelectric capacitor may represent different logic states. Ferroelectric materials have non-linear polarization properties; some details and advantages of a ferroelectric memory cell 105 are discussed below.
Operations such as reading and writing may be performed on memory cells 105 by activating or selecting the appropriate word line 110 and digit lines 115. A word line 110 may also be referred to as an access line, and a digit line 115 may also be referred to as a bit line. In some examples, there may be additional lines, such as plate lines. Word lines 110 and digit lines 115 may both be referred to as access lines. Activating or selecting a word line 110 or a digit line 115 may include applying a voltage to the respective line. Word lines 110 and digit lines 115 are made of conductive materials; for example, they may be made of metals (such as tungsten) or other conductive materials.
According to the example of FIG. 1, each row of memory cells 105 is connected to a single word line 110, and each column of memory cells 105 is connected to two digit lines 115. By activating one word line 110 and one digit line 115 (for example, applying a voltage to the word line 110 or the digit line 115), a single memory cell 105 may be accessed at their intersection. Accessing the memory cell 105 may include reading or writing the memory cell 105. The intersection of a word line 110 and a digit line 115 may be referred to as the address of a memory cell.
In some architectures, the logic storage device of a cell (e.g., the capacitor) may be electrically isolated from the digit lines by selection components. The capacitor may be connected to each digit line 115 via a separate selection component (for example, a transistor). A word line 110 may be connected to, and may control, the selection components. For example, the first selection component (for example, the selection component 220 described with reference to FIG. 2) may be a first transistor and the second selection component (for example, the selection component 222 described with reference to FIG. 2) may be a second transistor, and the word line 110 may be connected to the gate of each transistor. The cell 105 may therefore be referred to as a two-transistor, one-capacitor cell. Activating the word line 110 results in an electrical connection, or closed circuit, between the capacitor of the memory cell 105 and its corresponding digit lines 115. The digit lines may then be accessed to either read or write the memory cell 105.
Accessing memory cells 105 may be controlled through a row decoder 120 and a column decoder 130. In some examples, the row decoder 120 receives a row address from the memory controller 140 and activates the appropriate word line 110 based on the received row address.
Similarly, the column decoder 130 receives a column address from the memory controller 140 and activates the appropriate digit lines 115.
After a memory cell 105 is accessed, it may be read, or sensed, by the sensing component 125 to determine the stored state of the memory cell 105. For example, after accessing the memory cell 105, the ferroelectric capacitor of the memory cell 105 may discharge onto its corresponding digit line 115. Discharging the ferroelectric capacitor may be based on biasing, or applying a voltage to, the ferroelectric capacitor. The discharging may cause a change in the voltage of the digit line 115, which the sensing component 125 may compare with a reference voltage (not shown) in order to determine the stored state of the memory cell 105. For example, if the digit line 115 has a higher voltage than the reference voltage, the sensing component 125 may determine that the stored state in the memory cell 105 is a logic "1", and vice versa. As described herein, the reference voltage may be generated from the cell 105 being sensed.
The sensing component 125 may include various transistors or amplifiers in order to detect and amplify a difference in the signals, which may be referred to as latching. The detected logic state of the memory cell 105 may then be output through the column decoder 130 as output 135. The sensing component 125 may compare a value obtained by applying a first voltage to the first access line with a second value obtained by applying a second voltage to the second access line. This comparison may determine the logic value of the cell 105 based on a reference value specific to the cell 105. As described below with reference to FIG. 4, the sensing component 125 may compare a read signal (for example, read signal 330-a described with reference to FIG. 3) with a reference value (for example, reference value 335-a described with reference to FIG. 3) to determine the logic state of the cell 105. The difference between the read signal and the reference value may be related to the difference between the read voltage and the second read voltage.
The sensing component 125 may include various transistors or amplifiers in order to detect and amplify a difference in the signals, which may be referred to as latching. The sensing component 125 may also include one or more nodes, for example, a first node (e.g., node 425) and a second node (e.g., node 430) as described with reference to FIG. 4. The detected logic state of the memory cell 105 may then be output through the column decoder 130 as output 135.
A memory cell 105 may be set, or written, by activating the relevant word line 110 and digit lines 115. As discussed above, activating a word line 110 electrically connects the corresponding row of memory cells 105 to their respective digit lines 115. By controlling the relevant digit lines 115 while the word line 110 is activated, a memory cell 105 may be written, that is, a logic value may be stored in the memory cell 105. The column decoder 130 may accept data, for example via input 135, to be written to the memory cells 105. A ferroelectric memory cell 105 may be written by applying a voltage across the ferroelectric capacitor. This process is discussed in more detail below.
As described herein, a memory cell 105 may be sensed more than once during an access operation. Such a scheme may involve applying several voltages across at least a first access line and a second access line.
A first read voltage may be applied to the first access line, and a second read voltage may be applied across the second access line. The read voltage applied to the first access line may result in a first voltage across the second access line (for example, see VBlt(0) of FIG. 3), and the read voltage applied to the second access line may result in a second voltage across the first access line (for example, see VBlc(0) of FIG. 3). The voltages generated across the second and first access lines may respectively represent a read signal (for example, read signal 330-a described with reference to FIG. 3) and a reference value (for example, reference value 335-a described with reference to FIG. 3). The read signal and the reference value may be provided to the sensing component 125 for use in determining the logic state of the cell 105. This process is discussed in more detail below.
In some memory architectures, accessing the memory cell 105 may degrade or destroy the stored logic state, and re-write or refresh operations may be performed to return the original logic state to the memory cell 105. In DRAM, for example, the capacitor may be partially or completely discharged during a sense operation, corrupting the stored logic state. The logic state may therefore be re-written after a sense operation. Additionally, activating a single word line 110 may result in the discharge of all memory cells in the row; thus, several or all memory cells 105 in the row may need to be re-written.
Some memory architectures, including DRAM, may lose their stored state over time unless they are periodically refreshed by an external power source. For example, a charged capacitor may become discharged over time through leakage currents, resulting in the loss of the stored information. The refresh rate of these so-called volatile memory devices may be relatively high, for example, dozens of refresh operations per second for a DRAM array, which may result in significant power consumption. With increasingly larger memory arrays, increased power consumption may inhibit the deployment or operation of memory arrays (e.g., due to power supplies, heat generation, material limits, etc.), especially for mobile devices that rely on a finite power source such as a battery.
As discussed below, ferroelectric memory cells 105 may have beneficial properties that may result in improved performance relative to other memory architectures. For example, because ferroelectric memory cells tend to be less susceptible to degradation of stored charge, a memory array 100 that employs ferroelectric memory cells 105 may require fewer or no refresh operations and may thus require less power to operate. In addition, employing the sensing scheme described herein, in which several voltages are applied to several access lines to generate a read signal and a reference value, may allow a wider read margin to be obtained.
The memory controller 140 may control the operation (e.g., read, write, re-write, refresh, etc.) of memory cells 105 through the various components, such as the row decoder 120, the column decoder 130, and the sensing component 125. The memory controller 140 may generate row and column address signals in order to activate the desired word line 110 and digit lines 115.
The memory controller 140 may also generate and control various voltages or potentials used during the operation of the memory array 100. In general, the amplitude, shape, or duration of an applied voltage discussed herein may be adjusted or varied and may be different for the various operations for operating the memory array 100. Furthermore, one, multiple, or all memory cells 105 within the memory array 100 may be accessed simultaneously; for example, multiple or all cells of the memory array 100 may be accessed simultaneously during a reset operation in which all memory cells 105, or a group of memory cells 105, are set to a single logic state.
Figure 2 illustrates an example circuit 200 that supports a self-reference sensing scheme according to various embodiments of the present disclosure. The circuit 200 includes a memory cell 105-a, a word line 110, digit lines 115, and a sensing component 125-a, which may be examples of the memory cell 105, word line 110, digit line 115, and sensing component 125, respectively, described with reference to FIG. 1. The memory cell 105-a may include a logic storage component, such as a capacitor 205 that has a first plate, the cell plate 230, and a second plate, the cell bottom 215. The cell plate 230 and the cell bottom 215 may be capacitively coupled through a ferroelectric material positioned between them. The orientation of the cell plate 230 and the cell bottom 215 may be flipped without changing the operation of the memory cell 105-a. The circuit 200 also includes a first selection component 220 and a second selection component 222. In some cases, only one of the first selection component 220 and the second selection component 222 may be present. The cell plate 230 and the cell bottom 215 may be accessed via digit lines 115-b and 115-a, respectively, and the sensing component 125-a may compare, for example, a read signal (for example, read signal 330-a described with reference to FIG. 3) with a reference value (for example, reference value 335-a described with reference to FIG. 3). As described above, various states may be stored by charging or discharging the capacitor 205.
The stored state of the capacitor 205 may be read or sensed by operating the various elements represented in circuit 200. The capacitor 205 may be in electronic communication with the digit lines 115. For example, the capacitor 205 may be isolated from the digit line 115-a when the first selection component 220 is deactivated, and the capacitor 205 may be connected to the digit line 115-a when the first selection component 220 is activated. Activating the first selection component 220 and the second selection component 222 may be referred to as selecting the memory cell 105-a. In some cases, the first selection component 220 and the second selection component 222 are transistors, and their operation is controlled by applying a voltage to the transistor gate, where the voltage magnitude is greater than the threshold magnitude of the transistor.
The word line 110-a may activate the first selection component 220 or the second selection component 222, or both; for example, a voltage applied to the word line 110-a is applied to the transistor gates, connecting the capacitor 205 with the digit line 115-b.
Due to the ferroelectric material between the plates of the capacitor 205, and as discussed in more detail below, the capacitor 205 may not discharge immediately upon connection to the digit line 115-a. In one scheme, to sense the logic state stored by the ferroelectric capacitor 205, the word line 110-a may be biased to select the memory cell 105-a, and a voltage may be applied to the digit line 115-b. In some cases, the digit line 115-a is virtually grounded and then isolated from the virtual ground, which may be referred to as "floating", prior to biasing the digit line 115-b and the word line 110-a. Biasing the digit line 115-b may result in a voltage difference across the capacitor 205 (e.g., the digit line 115-b voltage minus the digit line 115-a voltage). The voltage difference may yield a change in the stored charge on the capacitor 205, where the magnitude of the change in stored charge may depend on the initial state of the capacitor 205, for example, whether the initial state stored a logic "1" or a logic "0". This may cause the voltage of the digit line 115-a to change based on the charge stored on the capacitor 205. The operation of the digit lines 115-a and 115-b may be reversed in order to sense the discharge of the capacitor 205 onto the digit line 115-b. As described herein, accessing the cell 105-a via the digit lines 115-a and 115-b in such an alternating fashion may be employed in a self-reference sensing scheme.
The change in voltage of the digit line 115-a may depend on its intrinsic capacitance. That is, as charge flows through the digit line 115-a, some finite charge may be stored in the digit line 115-a, and the resulting voltage depends on that intrinsic capacitance. The intrinsic capacitance may depend on the physical characteristics of the digit line 115-a, including its dimensions. The digit line 115-a may connect many memory cells 105, so the digit line 115-a may have a length that results in a non-negligible capacitance (e.g., on the order of picofarads (pF)). The sensing component 125-a may then compare the resulting voltage of the digit line 115-a with a reference value in order to determine the stored logic state in the memory cell 105-a. For example, the reference value may be obtained from the digit line 115-b by accessing the memory cell 105 via the digit line 115-a. Other sensing processes may be used.
The sensing component 125-a may include various transistors or amplifiers in order to detect and amplify a difference in the signals, which may be referred to as latching. The sensing component 125-a may include multiple inverters. The sensing component 125-a may also include a sense amplifier that receives the voltage of the digit line 115-a and compares it with a reference voltage, which may be referred to as a reference value. Additionally or alternatively, the sensing component 125-a may, for example, compare a read signal (for example, read signal 330-a described with reference to FIG. 3) with a reference value (for example, reference value 335-a described with reference to FIG. 3).
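In idealized form, the charge-sharing behavior just described can be written as a single relation. As a sketch only (the symbols $Q_{cell}$, $C_{cell}$, and $C_{DL}$ are introduced here for illustration and do not appear in the figures), the digit-line voltage developed when the cell releases a charge $Q_{cell}$ onto a line of intrinsic capacitance $C_{DL}$ is approximately

$$V_{DL} \approx \frac{Q_{cell}}{C_{DL} + C_{cell}},$$

so the swing available for sensing grows with the released charge and shrinks as the line capacitance grows. With $C_{DL}$ on the order of picofarads, the sense amplifier's task amounts to resolving the sign of $V_{DL} - V_{ref}$, where $V_{ref}$ is the cell-specific reference described herein.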
Based on the comparison, the sense amplifier output may be driven to a higher (e.g., positive) or a lower (e.g., negative or ground (GND)) read voltage. In other words, the read voltages may represent the highest and lowest values (for example, Vcc or GND) of the sense amplifier swing. For instance, if the digit line 115-a has a higher voltage than the reference voltage, then the sense amplifier output may be driven to a positive read voltage. In some cases, the sense amplifier may additionally drive the digit line 115-a to the read voltage. The sensing component 125-a may then latch the output of the sense amplifier and/or the voltage of the digit line 115-a, which may be used to determine the stored state in the memory cell 105-a, e.g., a logic "1". Alternatively, if the digit line 115-a has a lower voltage than the reference voltage, the sense amplifier output may be driven to a negative or ground voltage. The sensing component 125-a may similarly latch the sense amplifier output in order to determine the stored state in the memory cell 105-a, e.g., a logic "0". The latched logic state of the memory cell 105-a may then be output, for example through the column decoder 130, as output 135 with reference to FIG. 1.
To write the memory cell 105-a, a voltage may be applied across the capacitor 205. Various methods may be used. In one example, the first selection component 220 may be activated through the word line 110-b in order to electrically connect the capacitor 205 to the digit line 115-a. A voltage may be applied across the capacitor 205 by controlling the voltage of the cell plate 230 (through the digit line 115-b) and the cell bottom 215 (through the digit line 115-a). To write a logic "0", the cell plate 230 may be taken high, that is, a positive voltage may be applied to the digit line 115-b, and the cell bottom 215 may be taken low, for example, by grounding or virtually grounding the digit line 115-a, or by applying a negative voltage to it. The opposite process is performed to write a logic "1", where the cell plate 230 is taken low and the cell bottom 215 is taken high.
Figure 3 illustrates examples of non-linear electrical properties with hysteresis curves 300-a and 300-b for a ferroelectric memory cell operated according to various embodiments of the present disclosure. The hysteresis curves 300-a and 300-b illustrate the writing process and the reading process, respectively. The hysteresis curves 300-a and 300-b depict the charge Q stored on a ferroelectric capacitor (e.g., the capacitor 205 of FIG. 2) as a function of the voltage difference V.
A ferroelectric material is characterized by a spontaneous electric polarization, that is, it maintains a non-zero electric polarization in the absence of an electric field. Example ferroelectric materials include barium titanate (BaTiO3), lead titanate (PbTiO3), lead zirconate titanate (PZT), and strontium bismuth tantalate (SBT). The ferroelectric capacitors described herein may include these or other ferroelectric materials. Electric polarization within a ferroelectric capacitor results in a net charge at the surface of the ferroelectric material, and an opposite charge is attracted through the capacitor terminals. Thus, charge is stored at the interface of the ferroelectric material and the capacitor terminals.
Because the electric polarization may be maintained in the absence of an externally applied electric field for relatively long times, even indefinitely, charge leakage may be significantly decreased compared with, for example, capacitors employed in DRAM arrays. This may reduce the need to perform refresh operations, as described above for some DRAM architectures.
The hysteresis curves 300-a and 300-b may be understood from the perspective of a single terminal of the capacitor. By way of example, if the ferroelectric material has a negative polarization, positive charge accumulates at the terminal. Likewise, if the ferroelectric material has a positive polarization, negative charge accumulates at the terminal. Additionally, it should be understood that the voltages in the hysteresis curves 300 represent the voltage difference across the capacitor and are directional. For example, a positive voltage may be realized by applying a positive voltage to the terminal in question (e.g., the cell plate 230, as shown in FIG. 2) and maintaining the second terminal (e.g., the cell bottom 215, as shown in FIG. 2) at ground (or approximately zero volts (0 V)). A negative voltage may be applied by maintaining the terminal in question at ground and applying a positive voltage to the second terminal, that is, positive voltages may be applied to negatively polarize the terminal in question. Similarly, two positive voltages, two negative voltages, or any combination of positive and negative voltages may be applied to the appropriate capacitor terminals to generate the voltage differences shown in the hysteresis curves 300.
As depicted in the hysteresis curve 300-a, the ferroelectric material may maintain a positive polarization with a zero voltage difference, resulting in a possible charge state 305-a. According to the example of FIG. 3, the charge state 305-a represents a logic "0". In some examples, the logic values of the respective charge states may be reversed to accommodate other schemes for operating a memory cell.
As depicted in the hysteresis curve 300-b, the ferroelectric material may maintain a negative polarization with a zero voltage difference, resulting in a possible charge state 305-b. According to the example of FIG. 3, the charge state 305-b represents a logic "1". In some examples, the logic values of the respective charge states may be reversed to accommodate other schemes for operating a memory cell.
A logic "0" or "1" may be written to the memory cell by controlling the electric polarization of the ferroelectric material, and thus the charge on the capacitor terminals, by applying a voltage. For example, applying a net positive voltage across the capacitor results in charge accumulation until the charge state 310-a is reached. Upon removing the voltage, the charge state 310-a follows the path depicted on the hysteresis curve 300-a until it reaches the charge state 315-a at zero voltage potential. Similarly, the charge state 320-a is written by applying a net negative voltage across the capacitor. After the net negative voltage is removed and a positive voltage is applied again, the charge state 320-a follows the path until it reaches the charge state 325-a. The charge states 305-a and 305-b may also be referred to as the residual polarization (Pr) values, that is, the polarization (or charge) that remains upon removing the external bias (e.g., voltage).
The coercive voltage is the voltage at which the charge (or polarization) is zero.
To read, or sense, the stored state of the ferroelectric capacitor, a voltage may be applied across the capacitor. In response, the stored charge Q changes, and the degree of the change depends on the initial charge state, that is, the final stored charge (Q) depends on whether the charge state 305-a or 305-b was initially stored. A voltage may be applied across the capacitor as discussed with reference to FIG. 2. For example, in response to the voltage, the charge state 305-a may follow one path; if the charge state 305-b was initially stored, it instead follows a different path. The final position of the charge state depends on a number of factors, including the specific sensing scheme and circuitry.
As depicted in the hysteresis curve 300-a, a voltage (e.g., VBlt(0)) may be applied to the first access line, resulting in the charge state 310-a. This step may represent generating a read signal 330-a, where the voltage depends on the logic state of the cell. The cell may then be reset to zero (0 V), the charge state moving from 310-a to 315-a. Upon reaching the charge state 315-a, a net negative voltage (e.g., VBlc(0)) may be applied to the second access line, resulting in the charge state 320-a. This step may represent generating a reference value 335-a specific to the memory cell. A voltage may then be re-applied to the memory cell, resulting in the charge state 325-a. This step may represent writing a logic value to the cell. After the operation is complete (e.g., at the charge state 325-a), the charge state of the cell may return to the charge state 305-a.
Similarly, as depicted in the hysteresis curve 300-b, a voltage (e.g., VBlt(1)) may be applied to the first access line, resulting in the charge state 310-b. This step may represent generating a read signal 330-b, where the voltage depends on the logic state of the cell. The cell may then be reset to zero (0 V), the charge state moving from 310-b to 315-b. Upon reaching the charge state 315-b, a net negative voltage (e.g., VBlc(1)) may be applied to the second access line, resulting in the charge state 320-b. This step may represent generating a reference value 335-b specific to the memory cell. The net negative voltage may then be re-applied to the memory cell, resulting in the charge state 325-b. This step may represent writing a logic value to the cell. After the operation is complete (e.g., at the charge state 325-b), the charge state of the cell may return to the charge state 305-b.
In some cases, the final charge may depend on the intrinsic capacitance of the digit line connected to the memory cell. For example, if the capacitor is electrically connected to the digit line and a voltage is applied, the voltage of the digit line may rise due to its intrinsic capacitance. So a voltage measured at the sensing component may not equal the applied voltage and may instead depend on the voltage of the digit line. The positions of the final charge states on the hysteresis curves 300-a and 300-b may thus depend on the capacitance of the digit line and may be determined through a load-line analysis. Accordingly, the voltage of the capacitor may differ and may depend on the initial state of the capacitor. For example, when the same voltage is applied to the first access line, a voltage (e.g., VBlt(0) or VBlt(1)) may be generated across the capacitor 205.
Similarly, for example, when the same voltage is applied to the second access line, a voltage (e.g., VBlc(0) or VBlc(1)) may be generated across the capacitor 205.
By comparing the read signal with the reference value, the initial state of the capacitor may be determined. Upon the comparison by the sensing component, the read signal is determined to be higher or lower than the reference value, and the stored logic value of the ferroelectric memory cell (i.e., a logic "0" or "1") may be determined.
As discussed above, reading a memory cell that does not use a ferroelectric capacitor may degrade or destroy the stored logic state. A ferroelectric memory cell, however, may maintain the initial logic state after a read operation. For example, if the charge state 305-b is stored, the charge state may follow a path during the read operation and, after the voltage is removed, may return to the initial charge state 305-b by following that path in the opposite direction.
Figure 4 illustrates an example circuit 400 that supports a self-reference sensing scheme according to examples of the present disclosure. The circuit 400 includes a first voltage source 415 and a second voltage source 420, and virtual grounds 435 and 440, each of which may include a switch component. In addition, the circuit 400 includes switch components 445, 450, 455, and 460 and nodes 425 and 430. In some examples, the switch components 445, 450, 455, and 460 may be transistors. The circuit 400 may also include a first selection component 220-a, a second selection component 222-a, a memory cell 105-b, a word line 110-c, digit lines 115-c and 115-d, a cell plate 230-a, a cell bottom 215-a, and a capacitor 205-a, which may be examples of the corresponding components described with reference to FIG. 2. The selection component 220-a and the selection component 222-a, in electronic communication with the ferroelectric capacitor 205-a, may be used to select the ferroelectric memory cell 105-b. For example, the selection component 220-a and the selection component 222-a may be transistors (e.g., FETs) and may be activated by a voltage applied to the transistor gates by the word line 110-c.
As depicted in FIG. 4, based on selecting the ferroelectric memory cell 105-b, the read voltage 415 may be applied to a first access line, such as the digit line 115-c. After the read voltage 415 is applied, a voltage (e.g., VBlt(0)) may be generated across a second access line, such as the digit line 115-d. This voltage may depend on the charge state relative to the read voltage 415. In some examples, this voltage may result from capacitance sharing between the ferroelectric memory cell 105-b and the second access line. The voltage value across the second access line (for example, the read signal 330-a of FIG. 3) may be sampled into the latch (for example, the sensing component 125-b) at node 425. After the voltage value is sampled into the latch at node 425, node 425 may be isolated. Similarly, based on selecting the ferroelectric memory cell 105-b, the read voltage 420 may be applied to the second access line, such as the digit line 115-d. In some examples, the read voltage 415 and the read voltage 420 may be the same read voltage. Before the read voltage 420 is applied, the first access line and the second access line (e.g., the digit lines 115) may be reset to zero (0 V), that is, grounded by connecting the virtual ground 435 and/or the virtual ground 440.
When grounded, the memory cell 105-b may have a residual polarization due to the application of the read voltage 415. After the access lines are grounded, the read voltage 420 may be applied to the second access line. After the read voltage 420 is applied, a voltage (e.g., VBlc(0)) may be generated across the first access line, such as the digit line 115-c. This voltage may depend on the charge state relative to the read voltage 420. In some examples, this voltage may result from capacitance sharing between the ferroelectric memory cell 105-b and the first access line. The read voltage 420 may, for example, generate a voltage across the first access line that is related to the residual polarization of the memory cell 105-b. After the read voltage 420 is applied, the voltage across the first access line may be equivalent to the voltage across the second access line. This voltage value (for example, the reference value 335-a of FIG. 3) may be sampled into the latch at node 430. After the voltage value is sampled into the latch at node 430, node 430 may be isolated. A logic value may then be written to the ferroelectric memory cell.
Further, for example, the first voltage across the second access line (e.g., VBlt(0)) and the second voltage across the first access line (e.g., VBlc(0)) may be equivalent. The values of the read voltage 415 and the read voltage 420 may be selected based on this determination. The values of the read voltage 415 and the read voltage 420 may be selected to generate an offset for the comparison between the first voltage across the second access line (e.g., VBlt(0)) and the second voltage across the first access line (e.g., VBlc(0)).
Additionally or alternatively, for example, a first read voltage source and a second read voltage source may supply the read voltage 415 and the read voltage 420. In some examples, the read voltage 415 and the read voltage 420 may be the same voltage. In such examples, the read voltage 420 may be the same as the read voltage 415, and the first read voltage source and the second read voltage source may be the same voltage source. In other examples, the first read voltage source and the second read voltage source may supply different voltages. In such cases, the read voltage 420 may be a higher or a lower voltage than the read voltage 415. Accordingly, the positions on the hysteresis curve of the memory cell's state during the first part of the access operation (e.g., 310-a or 310-b) and during the second part of the access operation (e.g., 320-a or 320-b) may be shifted. For example, the voltages VBlt(0) and VBlc(0) may be shifted. The read may thus be performed with an equivalent, or roughly equivalent, difference between the sensed value (e.g., VBlt(0) or VBlt(1)) and the corresponding reference value (e.g., VBlc(0) or VBlc(1)). In some examples, the values of the read voltage 415 and the read voltage 420 may be selected to generate an offset for the comparison between the first voltage across the second access line (e.g., VBlt(0) or VBlt(1)) and the second voltage across the first access line (e.g., VBlc(0) or VBlc(1)).
In addition, FIG. 4 depicts the switch components 445, 450, 455, and 460, which may be opened or closed to facilitate access to node 425 or 430 via the first access line and the second access line.
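The order of these switch operations, as a controller might drive them, is sketched below in Python. This is a behavioral illustration only: the `Switch` class and the callbacks are invented for the example, no analog behavior or real timing is modeled, and the component numbers simply mirror FIG. 4.

```python
# Hypothetical control sequence for the FIG. 4 self-reference read.
# Switch names mirror the figure; the API itself is invented for this sketch.

class Switch:
    def __init__(self, name: str):
        self.name = name
        self.closed = False

    def close(self):   # conducting path
        self.closed = True

    def open(self):    # isolating
        self.closed = False

def self_reference_sequence(sample, ground_lines):
    """sample(phase) returns the node voltage for 'signal' or 'reference'."""
    sw450, sw460 = Switch("450"), Switch("460")

    # Phase 1: route node 425; apply read voltage 415 to the first access
    # line and sample the resulting second-access-line voltage at node 425.
    sw450.close()
    sw460.open()
    v_signal = sample("signal")
    sw450.open()          # isolate node 425 after sampling

    ground_lines()        # reset both access lines to 0 V (virtual ground)

    # Phase 2: route node 430; apply read voltage 420 to the second access
    # line and sample the resulting first-access-line voltage at node 430.
    sw460.close()
    v_ref = sample("reference")
    sw460.open()          # isolate node 430 after sampling

    # Fire the latch: compare the two sampled values.
    return 1 if v_signal > v_ref else 0

# Toy usage: a stored "1" yields a larger signal than the reference.
print(self_reference_sequence(
    lambda phase: {"signal": 0.5, "reference": 0.45}[phase],
    lambda: None))  # -> 1
```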
For example, when the first read voltage 415 is applied to the first access line, the switch component 460 may be opened while the switch component 455 is closed. Further, a voltage value may be applied to node 425 by closing the switch component 450 and opening the switch component 460; node 425 may then be isolated. Additionally or alternatively, for example, node 425 may be isolated simply by opening the switch component 450. Similarly, a voltage value may be applied to node 430 by closing the switch component 460 and opening the switch component 450, and node 430 may then be isolated. Additionally or alternatively, for example, node 430 may be isolated simply by opening the switch component 460. Once the voltage values have been supplied to nodes 425 and 430, the sensing component 125-b may be operated to latch the logic value stored in the memory cell 105-b. This operation may, for example, amplify the voltage values applied at nodes 425 and 430 and may facilitate writing the logic value back to the memory cell 105-b.
FIG. 5 shows a block diagram 500 of a memory array 100-a that supports self-referencing for a ferroelectric memory according to examples of the present disclosure. The memory array 100-a may be referred to as an electronic memory device, and its memory controller 140-a may be an example of the memory controller 140 described with reference to FIG. 1.
The memory array 100-a may include one or more memory cells 105-c, a memory controller 140-a, a word line 110-d, a reference component 520, a sensing component 125-c, digit lines 115-e and 115-f, and a latch 525. These components may be in electronic communication with each other and may perform one or more of the functions described herein. In some cases, the memory controller 140-a may include a biasing component 510 and a timing component 515. The memory controller 140-a may be in electronic communication with the word line 110-d, the digit lines 115, and the sensing component 125-c, which may be examples of the word line 110, the digit line 115, and the sensing component 125 described with reference to FIGS. 1 and 2. In some cases, the reference component 520, the sensing component 125-c, and the latch 525 may be components of the memory controller 140-a.
In some examples, the digit lines 115-e and 115-f are in electronic communication with the sensing component 125-c and with the ferroelectric capacitor of the ferroelectric memory cell 105-c (e.g., the capacitor 205-a of FIG. 4). The ferroelectric memory cell 105-c may be writable with an available logic state (e.g., a first or second logic state). The word line 110-d may be in electronic communication with the memory controller 140-a and the selection components of the ferroelectric memory cell 105-c. The sensing component 125-c may be in electronic communication with the memory controller 140-a, the digit lines 115, and the latch 525. The reference component 520 may be in electronic communication with the digit lines 115. These components may also be in electronic communication with other components, both inside and outside the memory array 100-a, via other components, connections, or buses.
The memory controller 140-a may be configured to activate the word line 110-d or the digit lines 115 by applying voltages to those various nodes. For example, as described above, the biasing component 510 may be configured to apply a voltage to operate the memory cell 105-c, to read or write the memory cell 105-c.
In some cases, as described with reference to FIG. 1, the memory controller 140-a may include a row decoder, a column decoder, or both. This may enable the memory controller 140-a to access the memory cell 105-c. The biasing component 510 may also provide voltages to the reference component 520 in order to generate a self-reference signal for the sensing component 125-c. For example, the biasing component 510 may provide different read voltages to the sensing component 125-c via the reference component 520. In addition, the biasing component 510 may provide voltages for the operation of the sensing component 125-c.
In some cases, the memory controller 140-a may perform its operations using the timing component 515. For example, the timing component 515 may control the timing of the various word line selections or plate biasing, including the timing for switching and voltage application to perform the memory functions, such as reading and writing, discussed herein. In some cases, the timing component 515 may control the operation of the biasing component 510.
The reference component 520 may include various components to generate a self-reference signal for the sensing component 125-c. The reference component 520 may include circuitry, including various switch components, that applies voltages to, or grounds, the digit lines 115. In some examples, the reference component 520 may be in electronic communication with the sensing component 125-c. The sensing component 125-c may compare the signal from the memory cell 105-c (via the digit line 115-e) with the reference signal from the reference component 520.
Upon determining the logic state, the sensing component may then store the output in the latch 525, where it may be used in accordance with the operations of an electronic device of which the memory array 100-a is a part. The sensing component 125-c may include a sense amplifier in electronic communication with the latch 525 and the ferroelectric memory cell. For example, the latch 525 may be in electronic communication with the first access line and the second access line via multiple switch components (e.g., as shown in FIG. 4).
The latch 525 may be in electronic communication with a first access line (e.g., the digit line 115-e) via a first switch component and in electronic communication with a second access line (e.g., the digit line 115-f) via a second switch component. In addition, a first voltage source may be in electronic communication with both the first access line and the latch via a third switch component. A second voltage source may be in electronic communication with both the second access line and the latch via a fourth switch component. Further, the first access line may be in electronic communication with a virtual ground via a fifth switch component, and the second access line may be in electronic communication with the virtual ground via a sixth switch component. The latch 525 may further be in electronic communication with the first access line at a first node and with the second access line at a second node. The first node and the second node may be in electronic communication via a seventh switch component. The various switch components described with reference to FIG. 5 may be within the sensing component 125-c; although not shown, such switch components may perform functions similar to those described with reference to FIG. 4.
The memory controller 140-a and/or at least some of its various subcomponents may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions of the memory controller 140-a and/or at least some of its various subcomponents may be executed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure. The memory controller 140-a and/or at least some of its various subcomponents may be physically located at various positions, including being distributed such that portions of the functions are implemented at different physical locations by one or more physical devices. In some examples, the memory controller 140-a and/or at least some of its various subcomponents may be separate and distinct components, in accordance with examples of the present disclosure. In other examples, the memory controller 140-a and/or at least some of its various subcomponents may be combined with one or more other hardware components, including but not limited to receivers, transmitters, transceivers, one or more other components described in the present disclosure, or combinations thereof, in accordance with examples of the present disclosure.
The memory controller 140-a may be in electronic communication with the first access line (e.g., the digit line 115-e), the second access line (e.g., the digit line 115-f), and the sensing component 125-c to control the first switch component, the second switch component, the third switch component, the fourth switch component, the fifth switch component, the sixth switch component, and the seventh switch component. For example, the memory controller 140-a may apply a first read voltage to the first access line during a first part of an access operation. After applying the first read voltage, the memory controller 140-a may then apply a second read voltage to the second access line during a second part of the access operation. The memory controller 140-a may then compare a first voltage of the second access line with a second voltage of the first access line during a third part of the access operation, where the first voltage is based at least in part on the first read voltage and the second voltage is based at least in part on the second read voltage. The memory controller 140-a may determine the logic value associated with the ferroelectric memory cell based at least in part on comparing the first voltage of the second access line with the second voltage of the first access line.
FIG. 6 shows a diagram of a system 600 including a device 605 that supports self-referencing for a ferroelectric memory according to examples of the present disclosure. The device 605 may include a memory controller 140-b, which may be an example of the memory controller 140 as described above with reference to FIG. 1. The device 605 may include components for bi-directional voice and data communications, including components for transmitting and receiving communications; a memory array 100-b, which includes the memory controller 140-b and memory cells 105-d; a basic input/output system (BIOS) component 615; a processor 610; an I/O controller 625; and peripheral components 620.
These components may communicate electronically via one or more buses (e.g., bus 630). As described herein, the memory cell 105-d may store information (i.e., in the form of a logic state).

The BIOS component 615 may be a software component that includes a BIOS operated as firmware, which can initialize and run various hardware components. The BIOS component 615 can also manage data flow between the processor and various other components (e.g., peripheral components, input/output control components, etc.). The BIOS component 615 may include a program or software stored in read-only memory (ROM), flash memory, or any other non-volatile memory.

The processor 610 may include an intelligent hardware device (for example, a general-purpose processor, a DSP, a central processing unit (CPU), a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor 610 may be configured to operate a memory array using a memory controller. In other cases, the memory controller may be integrated into the processor 610. The processor 610 may be configured to execute computer-readable instructions stored in a memory to perform various functions (for example, functions or tasks supporting self-referencing for ferroelectric memory).

The I/O controller 625 can manage input and output signals for the device 605. The I/O controller 625 may also manage peripheral devices that are not integrated into the device 605. In some cases, the I/O controller 625 may represent a physical connection or port to an external peripheral device. In some cases, the I/O controller 625 may utilize a known operating system.

The peripheral component 620 may include any input or output device, or an interface for such devices. Examples may include disk controllers, sound controllers, graphics controllers, Ethernet controllers, modems, universal serial bus (USB) controllers, serial or parallel ports, or peripheral card slots, such as peripheral component interconnect (PCI) or accelerated graphics port (AGP) slots.

Input 635 may represent a device or signal external to the device 605 that provides input to the device 605 or its components. This may include a user interface or an interface with or between other devices. In some cases, the input 635 can be managed by the I/O controller 625, and the input can interact with the device 605 via the peripheral component 620.

The output 640 may represent a device or signal external to the device 605 that is configured to receive output from the device 605 or any of its components. Examples of the output 640 may include a display, audio speakers, a printing device, another processor or printed circuit board, and so on. In some cases, the output 640 may be a peripheral element that interfaces with the device 605 via the peripheral component 620. In some cases, the output 640 can be managed by the I/O controller 625.

The components of the device 605 may include circuits designed to perform their functions. This may include various circuit elements configured to perform the functions described herein, such as conductive lines, transistors, capacitors, inductors, resistors, amplifiers, or other active or passive elements. The device 605 may be a computer, a server, a portable computer, a notebook computer, a tablet computer, a mobile phone, a wearable electronic device, a personal electronic device, and the like.
Alternatively, the device 605 may be a portion or a component of such a device.

FIG. 7 shows a flowchart illustrating a method 700 of a self-reference sensing scheme according to an example of the present disclosure. The operations of the method 700 may be implemented by the memory controller 140-a described with reference to FIG. 5 or its components as described herein. In some examples, the memory controller may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the memory controller may perform some or all of the functions described below using dedicated hardware.

At block 705, the method 700 may include applying a first read voltage to the first access line of the ferroelectric memory cell during a first part of an access operation. The operations of block 705 may be performed by the memory controller 140-a or the bias component 510 described with reference to FIG. 5.

At block 710, the method 700 may include applying a second read voltage to the second access line of the ferroelectric memory cell during a second part of the access operation. The operations of block 710 may be performed by the memory controller 140-a or the bias component 510 described with reference to FIG. 5.

In some examples, the method may further include grounding or virtually grounding the first access line and the second access line after applying the first read voltage, where the second read voltage is applied after the grounding or virtual grounding. In some examples, a third voltage of the second access line during the first part of the access operation is based at least in part on a polarization of the ferroelectric memory cell. In some examples, a fourth voltage of the first access line during the second part of the access operation is based at least in part on the polarization of the ferroelectric memory cell. In some examples, the fourth voltage of the first access line during the second part of the access operation is less than the third voltage of the second access line. In other examples, the fourth voltage of the first access line during the second part of the access operation may be greater than the third voltage of the second access line.

At block 715, the method 700 may include comparing a first voltage of the second access line with a second voltage of the first access line during a third part of the access operation, where the first voltage is based at least in part on the application of the first read voltage and the second voltage is based at least in part on the application of the second read voltage. The operations of block 715 may be performed by the memory controller 140-a or the bias component 510 described with reference to FIG. 5.

At block 720, the method 700 may include determining a logic value associated with the ferroelectric memory cell based at least in part on comparing the first voltage of the second access line with the second voltage of the first access line. The operations of block 720 may be performed by the memory controller 140-a or the bias component 510 described with reference to FIG. 5. The method may also include writing the logic value back to the ferroelectric memory cell during a fourth part of the access operation. In some examples, the method may include a write-back operation in which the cell is returned to the previously sensed charge representing the stored state.
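To make the three-part access operation of method 700 concrete, the following is a minimal behavioral sketch in Python. It assumes an idealized cell in which a polarization-switching read develops a larger access-line voltage than a non-switching read; the class name, the voltage shares, and the sense-amplifier offset are illustrative assumptions, not values taken from this disclosure.

```python
# Behavioral sketch of the self-reference read of method 700.
# All names and numeric values are illustrative assumptions.

READ_VOLTAGE = 1.5         # assumed read voltage, in volts
NON_SWITCHING_SHARE = 0.3  # assumed voltage share for a non-switching read
SWITCHING_SHARE = 0.6      # assumed larger share when the polarization switches
SENSE_OFFSET = 0.1         # assumed built-in sense-amplifier offset

class FerroelectricCell:
    """Stores a logic state as a polarization direction (+1 or -1)."""

    def __init__(self, logic_value: bool):
        self.polarization = +1 if logic_value else -1

    def apply_read_voltage(self, drive_polarity: int) -> float:
        """Return the voltage developed on the opposite access line."""
        if drive_polarity == self.polarization:
            # Polarization does not switch: smaller displacement charge.
            return READ_VOLTAGE * NON_SWITCHING_SHARE
        # Polarization switches: larger charge, larger voltage.
        self.polarization = drive_polarity
        return READ_VOLTAGE * SWITCHING_SHARE

def self_reference_read(cell: FerroelectricCell) -> bool:
    # Part 1 (block 705): apply the first read voltage to the first access
    # line and capture the resulting voltage on the second access line.
    first_voltage = cell.apply_read_voltage(drive_polarity=+1)
    # Part 2 (block 710): apply the second read voltage to the second access
    # line; the cell now serves as its own reference.
    second_voltage = cell.apply_read_voltage(drive_polarity=-1)
    # Part 3 (blocks 715 and 720): compare the two voltages, with an offset
    # standing in for the sense amplifier's asymmetry, to recover the state.
    return first_voltage + SENSE_OFFSET < second_voltage

cell = FerroelectricCell(logic_value=True)
print(self_reference_read(cell))  # True
```

Because the sketch models a destructive read (the cell ends in the same polarization regardless of its initial state), the write-back during the fourth part of the access operation, as noted above, is what restores the sensed logic value.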
FIG. 8 shows a flowchart illustrating a method 800 of a self-reference sensing scheme according to an example of the present disclosure. The operations of the method 800 may be implemented by the memory controller 140-a described with reference to FIG. 5 or its components as described herein. In some examples, the memory controller may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the memory controller may perform some or all of the functions described below using dedicated hardware.

At block 805, the method 800 may include activating a first switch component coupled between a first read voltage and the first access line. The operations of block 805 may be performed by the memory controller 140-a or the bias component 510 described with reference to FIG. 5.

At block 810, the method 800 may include, after activating the first switch component, sensing a value at the second access line representing a first state associated with the ferroelectric memory cell. The operations of block 810 may be performed by the memory controller 140-a or the sensing component 125-c described with reference to FIG. 5.

At block 815, the method 800 may include, after sensing the first state, activating a second switch component coupled between a second read voltage and the second access line. In some examples, the first read voltage source and the second read voltage source may be the same voltage source. Additionally or alternatively, the second read voltage source may be a voltage source different from the first read voltage source. The operations of block 815 may be performed by the memory controller 140-a or the bias component 510 described with reference to FIG. 5.

At block 820, the method 800 may include generating a reference value based at least in part on activating the second switch component. The operations of block 820 may be performed by the memory controller 140-a or the reference component 520 described with reference to FIG. 5.

At block 825, the method 800 may include determining a logic value stored at the ferroelectric memory cell, where the logic value is based at least in part on comparing the value representing the first state of the ferroelectric memory cell with the reference value. The operations of block 825 may be performed by the memory controller 140-a or the sensing component 125-c described with reference to FIG. 5.

An apparatus is described. In some examples, the apparatus may include means for applying a first read voltage to the first access line of the ferroelectric memory cell during a first part of an access operation. In some examples, the apparatus may include means for applying a second read voltage to the second access line of the ferroelectric memory cell during a second part of the access operation. In some examples, the apparatus may include means for comparing a first voltage of the second access line with a second voltage of the first access line during a third part of the access operation. In some examples, the first voltage is based at least in part on the application of the first read voltage, and the second voltage is based at least in part on the application of the second read voltage.
In some examples, the apparatus may include means for determining a logic value associated with the ferroelectric memory cell based at least in part on comparing the first voltage of the second access line with the second voltage of the first access line.

In some examples, the apparatus may include means for grounding or virtually grounding the first access line and the second access line after applying the first read voltage. In some examples, the second read voltage is applied after the grounding or virtual grounding. In some examples, the apparatus may include means for writing the logic value to the ferroelectric memory cell during a fourth part of the access operation.

Another apparatus is described. In some examples, the apparatus may include means for activating a first switch component coupled between a first read voltage and the first access line. In some examples, the apparatus may include means for sensing, after activating the first switch component, a value at the second access line representing a first state associated with the ferroelectric memory cell. In some examples, the apparatus may include means for activating, after sensing the value representing the first state, a second switch component coupled between a second read voltage source and the second access line. In some examples, the apparatus may include means for generating a reference value based at least in part on activating the second switch component. In some examples, the apparatus may include means for determining the logic value stored at the ferroelectric memory cell. In some examples, the logic value is based at least in part on comparing the value representing the first state of the ferroelectric memory cell with the reference value.

In some examples, the apparatus may include means for sensing a value representing a first state associated with the ferroelectric memory cell, including applying a first read voltage source to the first access line. In some examples, the first read voltage source generates a first voltage across the second access line. In some examples, the apparatus may include means for generating a reference value, including applying a second read voltage source to the second access line. In some examples, the second read voltage source generates a second voltage across the first access line.

In some examples, the apparatus may include means for sampling the first voltage of the second access line at a first node of the sensing component. In some examples, the apparatus may include means for isolating the first node after sampling the first voltage. In some examples, the apparatus may include means for sampling the second voltage of the first access line at a second node of the sensing component after isolating the first node. In some examples, the apparatus may include means for isolating the second node after sampling the second voltage.

It should be noted that the methods described above describe possible implementations; the operations and steps may be rearranged or otherwise modified, and other implementations are possible. In addition, some or all of the steps from two or more of the methods may be combined.
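The sample-and-isolate sequence recited above can be illustrated with a short Python sketch. The node and switch abstractions, the example voltages, and the comparison offset are assumptions introduced for illustration only.

```python
# Sketch of the sample-and-isolate sequence: sample one access line at a
# node, isolate the node so it holds its value, then sample the other
# access line before comparing. Names and values are illustrative.

class SampleNode:
    """A capacitor-like sampling node behind an isolation switch."""

    def __init__(self):
        self.held_voltage = None
        self.isolated = False

    def sample(self, voltage: float) -> None:
        if self.isolated:
            raise RuntimeError("an isolated node holds its sampled value")
        self.held_voltage = voltage

    def isolate(self) -> None:
        self.isolated = True

def sense(second_line_voltage: float, first_line_voltage: float,
          offset: float = 0.1) -> bool:
    first_node, second_node = SampleNode(), SampleNode()
    # Sample the first voltage of the second access line at the first node.
    first_node.sample(second_line_voltage)
    # Isolate the first node so later biasing cannot disturb the sample.
    first_node.isolate()
    # Sample the second voltage of the first access line at the second node,
    # then isolate the second node as well.
    second_node.sample(first_line_voltage)
    second_node.isolate()
    # Compare the held values; the offset stands in for amplifier asymmetry.
    return first_node.held_voltage + offset < second_node.held_voltage

print(sense(second_line_voltage=0.45, first_line_voltage=0.9))  # True
```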
Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate a signal as a single signal; however, it will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, where the bus may have a variety of bit widths.

As used herein, the term "virtual ground" refers to a node of a circuit that is held at a voltage of approximately zero volts (0V) but that is not directly connected to ground. Accordingly, the voltage of a virtual ground may temporarily fluctuate and return to approximately 0V at steady state. A virtual ground may be implemented using various electronic circuit elements, such as a voltage divider consisting of operational amplifiers and resistors. Other implementations are also possible. "Virtually grounding" means connecting to approximately 0V.

The terms "electronic communication" and "coupled" refer to a relationship between components that supports electron flow between the components. This may include a direct connection between components or may include intermediate components. Components in electronic communication or coupled to one another may be actively exchanging electrons or signals (e.g., in an energized circuit) or may not be actively exchanging electrons or signals (e.g., in a de-energized circuit), but may be configured and operable to exchange electrons or signals upon a circuit being energized. By way of example, two components physically connected via a switch (e.g., a transistor) are in electronic communication or may be coupled regardless of the state of the switch (i.e., open or closed).

The term "isolated" refers to a relationship between components in which electrons are not presently capable of flowing between them; components are isolated from each other if there is an open circuit between them. For example, two components physically connected by a switch may be isolated from each other when the switch is open.

As used herein, the term "shorting" refers to a relationship between components in which a conductive path is established between the components via a single intermediate component between the two components in question. For example, a first component shorted to a second component may exchange electrons with the second component when a switch between the two components is closed. Thus, shorting may be a dynamic operation that enables the flow of charge between components (or lines) that are in electronic communication.

Devices, including the memory array 100 discussed herein, may be formed on a semiconductor substrate, such as silicon, germanium, silicon-germanium alloy, gallium arsenide, gallium nitride, and the like. In some cases, the substrate is a semiconductor wafer. In other cases, the substrate may be a silicon-on-insulator (SOI) substrate, such as silicon-on-glass (SOG) or silicon-on-sapphire (SOP), or an epitaxial layer of semiconductor material on another substrate. The conductivity of the substrate, or of sub-regions of the substrate, may be controlled through doping with various chemical species including, but not limited to, phosphorus, boron, or arsenic.
Doping may be performed during the initial formation or growth of the substrate, by ion implantation, or by any other doping means.

The transistors discussed herein may represent field-effect transistors (FETs) and comprise three-terminal devices including a source, a drain, and a gate. The terminals may be connected to other electronic elements through conductive materials, such as metals. The source and drain may be conductive and may comprise heavily-doped (e.g., degenerate) semiconductor regions. The source and drain may be separated by a lightly-doped semiconductor region or channel. If the channel is n-type (i.e., majority carriers are electrons), the FET may be referred to as an n-type FET. If the channel is p-type (i.e., majority carriers are holes), the FET may be referred to as a p-type FET. The channel may be capped by an insulating gate oxide. The channel conductivity may be controlled by applying a voltage to the gate. For example, applying a positive voltage or a negative voltage to an n-type FET or a p-type FET, respectively, may result in the channel becoming conductive. A transistor may be "on" or "activated" when a voltage greater than or equal to the transistor's threshold voltage is applied to the transistor gate. The transistor may be "off" or "deactivated" when a voltage less than the transistor's threshold voltage is applied to the transistor gate.

The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term "exemplary" used herein means "serving as an example, instance, or illustration," and not "preferred" or "advantageous over" other examples. The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.

In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, the functions described above can be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of the functions are implemented at different physical locations. Also, as used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an exemplary step that is described as "based on condition A" may be based on both condition A and condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on."

Computer-readable media includes both non-transitory computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media can comprise RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.

The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein, but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
Technologies for managing the efficiency of workload execution in a managed node include a managed node that includes one or more processors that each include multiple cores. The managed node is to execute threads of workloads assigned to the managed node, generate telemetry data indicative of an efficiency of execution of the threads, determine, as a function of the telemetry data, an adjustment to a configuration of the threads among the cores to increase the efficiency of the execution of the threads, and apply the determined adjustment. Other embodiments are also described and claimed.
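A minimal sketch of the efficiency determination summarized above, assuming per-core cycle and retired-instruction counters as the telemetry source: derive cycles per instruction (CPI) for each core and compare it with a predefined CPI to flag stalled cores. The threshold, data layout, and sample values are illustrative assumptions, not values from this disclosure.

```python
# Sketch: determine per-core CPI from telemetry and flag stalled cores.
# The threshold and data layout are illustrative assumptions.

from dataclasses import dataclass

STALL_CPI_THRESHOLD = 4.0  # assumed predefined cycles-per-instruction limit

@dataclass
class CoreTelemetry:
    core_id: int
    unhalted_cycles: int       # cycles counted over the sample window
    retired_instructions: int  # instructions retired over the same window

def cycles_per_instruction(sample: CoreTelemetry) -> float:
    # CPI = cycles / instructions; treat an idle window as infinitely slow.
    if sample.retired_instructions == 0:
        return float("inf")
    return sample.unhalted_cycles / sample.retired_instructions

def find_stalled_cores(samples: list[CoreTelemetry]) -> list[int]:
    # Compare each core's CPI with the predefined CPI to decide whether
    # the core is stalled (e.g., its threads are waiting on memory).
    return [s.core_id for s in samples
            if cycles_per_instruction(s) > STALL_CPI_THRESHOLD]

samples = [
    CoreTelemetry(core_id=0, unhalted_cycles=10_000_000,
                  retired_instructions=8_000_000),   # CPI 1.25: healthy
    CoreTelemetry(core_id=1, unhalted_cycles=10_000_000,
                  retired_instructions=1_500_000),   # CPI ~6.7: stalled
]
print(find_stalled_cores(samples))  # [1]
```

A core flagged this way is a candidate for the adjustment step, for example by moving one of its threads to a core whose pipeline is underutilized.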
1. A managed node, configured to manage the execution efficiency of a workload assigned to the managed node, the managed node comprising:

one or more processors, each of which contains multiple cores; and

one or more memory devices having stored therein a plurality of instructions which, when executed by the one or more processors, cause the managed node to:

execute a thread of a workload assigned to the managed node;

generate telemetry data indicative of an execution efficiency of the thread, wherein the efficiency is indicative of cycles per instruction executed by the corresponding core;

determine, based on the telemetry data, an adjustment to the configuration of the thread to improve the execution efficiency of the thread; and

apply the determined adjustment.

2. The managed node of claim 1, wherein generating the telemetry data comprises identifying a current pipeline stage for each thread using a counter associated with each stage of the pipeline of each core.

3. The managed node of claim 1, wherein the plurality of instructions, when executed, cause the managed node to analyze the telemetry data to determine the execution efficiency of the thread.

4. The managed node of claim 3, wherein determining the execution efficiency comprises determining cycles per instruction for each core.

5. The managed node of claim 4, wherein the plurality of instructions, when executed, cause the managed node to compare the number of cycles per instruction with a predefined number of cycles per instruction to determine whether one or more of the cores are stalled.

6. The managed node of claim 3, wherein determining the efficiency comprises generating a fingerprint indicative of a pattern of usage of the corresponding core's pipeline stages by each thread over a predefined period of time.

7. The managed node of claim 6, wherein determining the efficiency comprises determining a current capacity per core and a predicted capacity per core from the generated fingerprints.

8. The managed node of claim 3, wherein determining the efficiency comprises generating a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.

9. The managed node of claim 3, wherein determining the efficiency comprises determining a pipeline stage primarily utilized by each thread.

10. The managed node of claim 9, wherein determining the efficiency comprises determining a current capacity of each core and a predicted capacity of each core based on the determined pipeline stages primarily utilized by each thread.

11. The managed node of claim 3, wherein the plurality of instructions, when executed, further cause the managed node to provide efficiency data indicative of the determined efficiency to an orchestrator server.

12. The managed node of claim 11, wherein providing the efficiency data comprises providing, to the orchestrator server, a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.
13. A method for managing execution efficiency of workloads assigned to managed nodes, the method comprising:

executing, by the managed node, threads of a workload assigned to the managed node with one or more processors each comprising a plurality of cores;

generating, by the managed node, telemetry data indicative of an execution efficiency of the thread, wherein the efficiency is indicative of cycles per instruction executed by the corresponding core;

determining, by the managed node and based on the telemetry data, an adjustment to the configuration of the thread to improve the execution efficiency of the thread; and

applying, by the managed node, the determined adjustment.

14. The method of claim 13, wherein generating the telemetry data comprises identifying a current pipeline stage for each thread using a counter associated with each stage of the pipeline of each core.

15. The method of claim 13, further comprising analyzing, by the managed node, the telemetry data to determine the execution efficiency of the thread.

16. The method of claim 15, wherein determining the execution efficiency comprises determining cycles per instruction for each core.

17. The method of claim 16, further comprising comparing, by the managed node, the number of cycles per instruction to a predefined number of cycles per instruction to determine whether one or more of the cores are stalled.

18. The method of claim 15, wherein determining the efficiency comprises generating a fingerprint indicative of a pattern of usage of the corresponding core's pipeline stages by each thread over a predefined period of time.

19. The method of claim 18, wherein determining the efficiency comprises determining a current capacity of each core and a predicted capacity of each core from the generated fingerprints.

20. The method of claim 15, wherein determining the efficiency comprises generating a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.

21. The method of claim 15, wherein determining the efficiency comprises determining a pipeline stage primarily utilized by each thread.

22. The method of claim 21, wherein determining the efficiency comprises determining a current capacity of each core and a predicted capacity of each core based on the determined pipeline stages primarily utilized by each thread.

23. The method of claim 15, further comprising providing, by the managed node, efficiency data indicative of the determined efficiency to an orchestrator server.

24. One or more machine-readable storage media comprising a plurality of instructions stored thereon which, in response to being executed, cause a managed node to perform the method of any one of claims 13-23.

25. A managed node, configured to manage the execution efficiency of a workload assigned to the managed node, the managed node comprising:

one or more processors; and

one or more memory devices having stored therein a plurality of instructions which, when executed by the one or more processors, cause the managed node to perform the method of any one of claims 13-23.
26. An apparatus for managing execution efficiency of workloads assigned to managed nodes, comprising:

means for executing threads of a workload assigned to the managed node with one or more processors each comprising a plurality of cores;

means for generating telemetry data indicative of an execution efficiency of the thread, wherein the efficiency is indicative of a number of cycles per instruction executed by a corresponding core;

means for determining, based on the telemetry data, an adjustment to the configuration of the thread to improve the execution efficiency of the thread; and

means for applying the determined adjustment.

27. The apparatus of claim 26, wherein the means for generating the telemetry data comprises means for identifying the current pipeline stage of each thread using a counter associated with each stage of the pipeline of each core.

28. The apparatus of claim 26, further comprising means for analyzing the telemetry data to determine the execution efficiency of the thread.

29. The apparatus of claim 28, wherein the means for determining the execution efficiency comprises means for determining cycles per instruction for each core.

30. The apparatus of claim 29, further comprising means for comparing the number of cycles per instruction to a predefined number of cycles per instruction to determine whether one or more of the cores are stalled.

31. The apparatus of claim 28, wherein the means for determining the efficiency comprises means for generating a fingerprint indicative of a pattern of usage of the corresponding core's pipeline stages by each thread over a predefined period of time.

32. The apparatus of claim 31, wherein the means for determining the efficiency comprises means for determining a current capacity of each core and a predicted capacity of each core from the generated fingerprints.

33. The apparatus of claim 28, wherein the means for determining the efficiency comprises means for generating a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.

34. The apparatus of claim 28, wherein the means for determining the efficiency comprises means for determining a pipeline stage primarily utilized by each thread.

35. The apparatus of claim 34, wherein the means for determining the efficiency comprises means for determining a current capacity of each core and a predicted capacity of each core based on the determined pipeline stages primarily utilized by each thread.

36. The apparatus of claim 28, further comprising means for providing efficiency data indicative of the determined efficiency to an orchestrator server.
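The fingerprinting and primary-stage means recited in the claims above can be sketched as follows. The pipeline stage names, the sampling interface, and the example data are assumptions introduced for illustration; an actual implementation would read per-stage hardware counters.

```python
# Sketch: build a per-thread "fingerprint" of pipeline-stage usage over a
# predefined period and report the stage each thread primarily utilizes.
# Stage names and samples are illustrative assumptions.

from collections import Counter

PIPELINE_STAGES = ("fetch", "decode", "execute", "memory", "writeback")

def fingerprint(stage_samples: list[str]) -> dict[str, float]:
    # Fraction of the observation window the thread spent in each stage.
    counts = Counter(stage_samples)
    total = sum(counts.values())
    return {stage: counts.get(stage, 0) / total for stage in PIPELINE_STAGES}

def primary_stage(fp: dict[str, float]) -> str:
    # The pipeline stage primarily utilized by the thread.
    return max(fp, key=fp.get)

# Hypothetical per-thread samples gathered from per-stage counters.
thread_samples = {
    "thread-a": ["execute"] * 70 + ["memory"] * 20 + ["fetch"] * 10,
    "thread-b": ["memory"] * 80 + ["execute"] * 20,
}

for name, samples in thread_samples.items():
    fp = fingerprint(samples)
    print(name, primary_stage(fp))
# thread-a is execute-bound and thread-b is memory-bound, so scheduling
# them together on one core could raise utilization without contention.
```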
Techniques for Managing Workload Execution Efficiency

Cross-Reference to Related Applications

This application claims priority to U.S. Utility Patent Application Serial No. 15/395,174, filed December 30, 2016, entitled "TECHNOLOGIES FOR MANAGING THE EFFICIENCY OF WORKLOAD EXECUTION," which claims the benefit of U.S. Provisional Patent Application Serial No. 62/365,969, U.S. Provisional Patent Application Serial No. 62/376,859, filed August 18, 2016, and U.S. Provisional Patent Application Serial No. 62/427,268, filed November 29, 2016.

Background

In a typical cloud-based computing environment (e.g., a data center), multiple compute nodes may execute workloads (e.g., applications, services, etc.) on behalf of customers. A human administrator may attempt to determine the efficiency of a compute node by estimating the amount of time the compute node takes to complete a given workload. Similarly, with significant effort, an administrator may form an estimate of the efficiency of a data center by tracking the amounts of time workloads take to complete across all of the compute nodes. However, the administrator has no insight into the efficiency of the components within each compute node and is typically unable to adjust the configuration of the components within the managed nodes to improve efficiency within a compute node. As such, to increase the performance of a data center, administrators typically install additional hardware (e.g., more compute nodes), resulting in increased expense and increased energy consumption.

Description of the Drawings

The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

FIG. 1 is a diagram of a conceptual overview of a data center in which one or more techniques described herein may be implemented according to various embodiments;

FIG. 2 is a diagram of an example embodiment of a logical configuration of a rack of the data center of FIG. 1;

FIG. 3 is a diagram of an example embodiment of another data center in which one or more techniques described herein may be implemented according to various embodiments;

FIG. 4 is a diagram of another example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;

FIG. 5 is a diagram of a connectivity scheme representative of link-layer connectivity that may be established among various sleds of the data centers of FIGS. 1, 3, and 4;

FIG. 6 is a diagram of a rack architecture that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1-4 according to some embodiments;

FIG. 7 is a diagram of an example embodiment of a sled that may be used with the rack architecture of FIG. 6;

FIG. 8 is a diagram of an example embodiment of a rack architecture to provide support for sleds featuring expansion capabilities;

FIG. 9 is a diagram of an example embodiment of a rack implemented according to the rack architecture of FIG. 8;

FIG. 10 is a diagram of an example embodiment of a sled designed for use in conjunction with the rack of FIG. 9;
FIG. 11 is a diagram of an example embodiment of a data center in which one or more techniques described herein may be implemented according to various embodiments;

FIG. 12 is a simplified block diagram of at least one embodiment of a system for managing workload execution efficiency in a set of managed nodes;

FIG. 13 is a simplified block diagram of at least one embodiment of a managed node of the system of FIG. 12;

FIG. 14 is a simplified block diagram of at least one embodiment of an environment that may be established by a managed node of FIGS. 12 and 13;

FIG. 15 is a simplified block diagram of at least one embodiment of an environment that may be established by the orchestrator server of FIG. 12;

FIGS. 16-17 are simplified flow diagrams of at least one embodiment of a method for managing workload execution efficiency that may be performed by a managed node of FIGS. 12-14; and

FIGS. 18-19 are simplified flow diagrams of at least one embodiment of a method for managing workload execution efficiency among a plurality of managed nodes that may be performed by the orchestrator server of FIG. 12.

Detailed Description

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).

In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required.
Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.

FIG. 1 illustrates a conceptual overview of a data center 100 that may generally be representative of a data center or other type of computing network in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 1, data center 100 may generally contain a plurality of racks, each of which may house computing equipment comprising a respective set of physical resources. In the particular non-limiting example depicted in FIG. 1, data center 100 contains four racks 102A to 102D, which house computing equipment comprising respective sets of physical resources (PCRs) 105A to 105D. According to this example, a collective set of physical resources 106 of data center 100 includes the various sets of physical resources 105A to 105D that are distributed among racks 102A to 102D. Physical resources 106 may include resources of multiple types, such as, for example, processors, co-processors, accelerators, field-programmable gate arrays (FPGAs), memory, and storage. The embodiments are not limited to these examples.

The illustrative data center 100 differs from typical data centers in many ways. For example, in the illustrative embodiment, the circuit boards ("sleds") on which components such as CPUs, memory, and other components are placed are designed for increased thermal performance. In particular, in the illustrative embodiment, the sleds are shallower than typical boards. In other words, the sleds are shorter from the front to the back, where the cooling fans are located. This decreases the length of the path that air must travel across the components on the board. Further, the components on the sled are spaced further apart than in typical circuit boards, and the components are arranged to reduce or eliminate shadowing (i.e., one component in the air flow path of another component). In the illustrative embodiment, processing components such as the processors are located on a top side of a sled, while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 102A, 102B, 102C, 102D, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds (e.g., processors, accelerators, memory, and data storage drives) are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.

Furthermore, in the illustrative embodiment, the data center 100 utilizes a single network architecture ("fabric") that supports multiple other network architectures, including Ethernet and Omni-Path. The sleds, in the illustrative embodiment, are coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.).
Due to the high-bandwidth, low-latency interconnections and network architecture, the data center 100 may, in use, pool resources such as memory, accelerators (e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as-needed basis, enabling the compute resources to access the pooled resources as if they were local. The illustrative data center 100 additionally receives utilization information for the various resources, predicts resource utilization for different types of workloads based on past resource utilization, and dynamically reallocates the resources based on this information.

The racks 102A, 102B, 102C, 102D of the data center 100 may include physical design features that facilitate the automation of a variety of types of maintenance tasks. For example, data center 100 may be implemented using racks that are designed to be robotically accessed and to accept and house robotically manipulable resource sleds. Furthermore, in the illustrative embodiment, the racks 102A, 102B, 102C, 102D include integrated power sources that receive a greater voltage than is typical for power sources. The increased voltage enables the power sources to provide additional power to the components on each sled, enabling the components to operate at higher than typical frequencies.

FIG. 2 illustrates an exemplary logical configuration of a rack 202 of the data center 100. As shown in FIG. 2, rack 202 may generally house a plurality of sleds, each of which may comprise a respective set of physical resources. In the particular non-limiting example depicted in FIG. 2, rack 202 houses sleds 204-1 to 204-4 comprising respective sets of physical resources 205-1 to 205-4, each of which constitutes a portion of the collective set of physical resources 206 comprised in rack 202. With respect to FIG. 1, if rack 202 is representative of, for example, rack 102A, then physical resources 206 may correspond to the physical resources 105A comprised in rack 102A. In the context of this example, physical resources 105A may thus be made up of the respective sets of physical resources, including physical storage resources 205-1, physical accelerator resources 205-2, physical memory resources 205-3, and physical compute resources 205-5 comprised in the sleds 204-1 to 204-4 of rack 202. The embodiments are not limited to this example. Each sled may contain a pool of each of the various types of physical resources (e.g., compute, memory, accelerator, storage). By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of the others and at its own optimized refresh rate.

FIG. 3 illustrates an example of a data center 300 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. In the particular non-limiting example depicted in FIG. 3, data center 300 comprises racks 302-1 to 302-32. In various embodiments, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate various access pathways. For example, as shown in FIG. 3, the racks of data center 300 may be arranged in such fashion as to define and/or accommodate access pathways 311A, 311B, 311C, and 311D.
In some embodiments, the presence of such access pathways may generally enable automated maintenance equipment, such as robotic maintenance equipment, to physically access the computing equipment housed in the various racks of data center 300 and perform automated maintenance tasks (e.g., replace a failed sled, upgrade a sled). In various embodiments, the dimensions of access pathways 311A, 311B, 311C, and 311D, the dimensions of racks 302-1 to 302-32, and/or one or more other aspects of the physical layout of data center 300 may be selected to facilitate such automated operations. The embodiments are not limited in this context.

FIG. 4 illustrates an example of a data center 400 that may generally be representative of one in/for which one or more techniques described herein may be implemented according to various embodiments. As shown in FIG. 4, data center 400 may feature an optical fabric 412. Optical fabric 412 may generally comprise a combination of optical signaling media (such as optical cabling) and optical switching infrastructure via which any particular sled in data center 400 can send signals to (and receive signals from) each of the other sleds in data center 400. The signaling connectivity that optical fabric 412 provides to any given sled may include connectivity both to other sleds in a same rack and to sleds in other racks. In the particular non-limiting example depicted in FIG. 4, data center 400 includes four racks 402A to 402D. Racks 402A to 402D house respective pairs of sleds 404A-1 and 404A-2, 404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus, in this example, data center 400 comprises a total of eight sleds. Via optical fabric 412, each such sled may possess signaling connectivity with each of the seven other sleds in data center 400. For example, via optical fabric 412, sled 404A-1 in rack 402A may possess signaling connectivity with sled 404A-2 in rack 402A, as well as with the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1, and 404D-2 that are distributed among the other racks 402B, 402C, and 402D. The embodiments are not limited to this example.

FIG. 5 illustrates an overview of a connectivity scheme 500 that may generally be representative of link-layer connectivity that may be established in some embodiments among the various sleds of a data center, such as the data centers of FIGS. 1, 3, and 4. Connectivity scheme 500 may be implemented using an optical fabric that features a dual-mode optical switching infrastructure 514. Dual-mode optical switching infrastructure 514 may generally comprise a switching infrastructure that is capable of receiving communications according to multiple link-layer protocols via a same unified set of optical signaling media and properly switching such communications. In various embodiments, dual-mode optical switching infrastructure 514 may be implemented using one or more dual-mode optical switches 515. In various embodiments, dual-mode optical switches 515 may generally comprise high-radix switches. In some embodiments, dual-mode optical switches 515 may comprise multi-layer switches, such as four-layer switches. In various embodiments, dual-mode optical switches 515 may feature integrated silicon photonics that enable them to switch communications with significantly reduced latency in comparison to conventional switching devices.
In some embodiments, dual-mode optical switches 515 may constitute leaf switches 530 in a leaf-spine architecture that additionally includes one or more dual-mode optical spine switches 520.

In various embodiments, the dual-mode optical switches may be capable of receiving both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance computing (HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture, Infiniband) via the optical signaling media of the optical fabric. As reflected in FIG. 5, with respect to any particular pair of sleds 504A and 504B possessing optical signaling connectivity to the optical fabric, connectivity scheme 500 may thus provide support for link-layer connectivity via both Ethernet links and HPC links. Thus, both Ethernet and HPC communications can be supported by a single high-bandwidth, low-latency switch fabric. The embodiments are not limited to this example.

FIG. 6 illustrates a general overview of a rack architecture 600 that may be representative of an architecture of any particular one of the racks depicted in FIGS. 1 to 4 according to some embodiments. As reflected in FIG. 6, rack architecture 600 may generally feature a plurality of sled spaces into which sleds may be inserted, each of which may be robotically accessible via a rack access region 601. In the particular non-limiting example depicted in FIG. 6, rack architecture 600 features five sled spaces 603-1 to 603-5. Sled spaces 603-1 to 603-5 feature respective multi-purpose connector modules (MPCMs) 616-1 to 616-5.

FIG. 7 illustrates an example of a sled 704 that may be representative of a sled of such a type. As shown in FIG. 7, sled 704 may comprise a set of physical resources 705, as well as an MPCM 716 designed to couple with a counterpart MPCM when the sled is inserted into a sled space. Sled 704 may also feature an expansion connector 717. Expansion connector 717 may generally comprise a socket, slot, or other type of connection element that is capable of accepting one or more types of expansion modules, such as an expansion sled 718. By coupling with a counterpart connector on expansion sled 718, expansion connector 717 may provide physical resources 705 with access to supplemental computing resources 705B residing on expansion sled 718. The embodiments are not limited in this context.

FIG. 8 illustrates an example of a rack architecture 800 that may be representative of a rack architecture that may be implemented in order to provide support for sleds featuring expansion capabilities, such as sled 704 of FIG. 7. In the particular non-limiting example depicted in FIG. 8, rack architecture 800 includes seven sled spaces 803-1 to 803-7, which feature respective MPCMs 816-1 to 816-7. Sled spaces 803-1 to 803-7 include respective primary regions 803-1A to 803-7A and respective expansion regions 803-1B to 803-7B. With respect to each such sled space, when the corresponding MPCM is coupled with a counterpart MPCM of an inserted sled, the primary region may generally constitute a region of the sled space that physically accommodates the inserted sled. The expansion region may generally constitute a region of the sled space that can physically accommodate an expansion module, such as expansion sled 718 of FIG. 7, in the event that the inserted sled is configured with such a module.

FIG. 9 illustrates an example of a rack 902 that may be representative of a rack implemented according to rack architecture 800 of FIG. 8, according to some embodiments.
In the particular non-limiting example depicted in FIG. 9, rack 902 features seven sled spaces 903-1 to 903-7, which include respective primary regions 903-1A to 903-7A and respective expansion regions 903-1B to 903-7B. In various embodiments, temperature control in rack 902 may be implemented using an air cooling system. For example, as reflected in FIG. 9, rack 902 may feature a plurality of fans 919 that are generally arranged to provide air cooling within the various sled spaces 903-1 to 903-7. In some embodiments, the height of the sled space is greater than the conventional "1U" server height. In such embodiments, fans 919 may generally comprise relatively slow, large-diameter cooling fans as compared to fans used in conventional rack configurations. Running large-diameter cooling fans at lower speeds may increase fan lifetime relative to smaller-diameter cooling fans running at higher speeds, while still providing the same amount of cooling. The sleds are physically shallower than conventional rack dimensions. Further, components are arranged on each sled to reduce thermal shadowing (i.e., not arranged serially in the direction of air flow). As a result, the wider, shallower sleds allow for an increase in device performance because the devices can be operated at a higher thermal envelope (e.g., 250W) due to improved cooling (i.e., no thermal shadowing, more space between devices, more room for larger heat sinks, etc.).

MPCMs 916-1 to 916-7 may be configured to provide inserted sleds with access to power sourced by respective power modules 920-1 to 920-7, each of which may draw power from an external power source 921. In various embodiments, external power source 921 may deliver alternating current (AC) power to rack 902, and power modules 920-1 to 920-7 may be configured to convert such AC power to direct current (DC) power to be sourced to inserted sleds. In some embodiments, for example, power modules 920-1 to 920-7 may be configured to convert 277-volt AC power into 12-volt DC power for provision to inserted sleds via respective MPCMs 916-1 to 916-7. The embodiments are not limited to this example.

MPCMs 916-1 to 916-7 may also be arranged to provide inserted sleds with optical signaling connectivity to a dual-mode optical switching infrastructure 914, which may be the same as, or similar to, the dual-mode optical switching infrastructure 514 of FIG. 5. In various embodiments, optical connectors contained in MPCMs 916-1 to 916-7 may be designed to couple with counterpart optical connectors contained in MPCMs of inserted sleds to provide such sleds with optical signaling connectivity to dual-mode optical switching infrastructure 914 via respective lengths of optical cabling 922-1 to 922-7. In some embodiments, each such length of optical cabling may extend from its corresponding MPCM to an optical interconnect loom 923 that is external to the sled spaces of rack 902. In various embodiments, optical interconnect loom 923 may be arranged to pass through a support post or other type of load-bearing element of rack 902. The embodiments are not limited in this context. Because inserted sleds connect to the optical switching infrastructure via MPCMs, the resources typically spent in manually configuring rack cabling to accommodate a newly inserted sled can be saved.
FIG. 10 illustrates an example of a sled 1004, which may be representative of a sled designed for use in conjunction with rack 902 of FIG. 9, according to some embodiments. Sled 1004 may feature an MPCM 1016 that comprises an optical connector 1016A and a power connector 1016B, and that is designed to couple with a counterpart MPCM of a sled space in conjunction with insertion of MPCM 1016 into that sled space. Coupling MPCM 1016 with such a counterpart MPCM may cause power connector 1016B to couple with a power connector comprised in the counterpart MPCM. This may generally enable physical resources 1005 of sled 1004 to be supplied with power from an external source, via power connector 1016B and power transmission media 1024 that conductively couple power connector 1016B to physical resources 1005.

Sled 1004 may also include dual-mode optical network interface circuitry 1026. Dual-mode optical network interface circuitry 1026 may generally comprise circuitry capable of communicating over optical signaling media according to each of multiple link-layer protocols supported by dual-mode optical switching infrastructure 914 of FIG. 9. In some embodiments, dual-mode optical network interface circuitry 1026 may be capable of both Ethernet protocol communications and communications according to a second, high-performance protocol. In various embodiments, dual-mode optical network interface circuitry 1026 may include one or more optical transceiver modules 1027, each of which may be capable of transmitting and receiving optical signals over each of one or more optical channels. The embodiments are not limited in this context.

Coupling MPCM 1016 with a counterpart MPCM of a sled space in a given rack may cause optical connector 1016A to couple with an optical connector comprised in the counterpart MPCM. This may generally establish optical connectivity between the optical cabling of the rack and dual-mode optical network interface circuitry 1026, via each of a set of optical channels 1025. Dual-mode optical network interface circuitry 1026 may communicate with physical resources 1005 of sled 1004 via electrical signaling media 1028. In addition to the dimensions of the sleds and the arrangement of components on the sleds to provide improved cooling and enable operation at a relatively higher thermal envelope (e.g., 250W), as described above with reference to FIG. 9, in some embodiments a sled may include one or more additional features to facilitate air cooling, such as a heat pipe and/or heat sinks arranged to dissipate heat generated by physical resources 1005. It is worthy of note that although the example sled 1004 depicted in FIG. 10 does not feature an expansion connector, any given sled that features the design elements of sled 1004 may also feature an expansion connector according to some embodiments. The embodiments are not limited in this context.

FIG. 11 illustrates an example of a data center 1100 that may generally be representative of one in/for which one or more techniques described herein may be implemented, according to various embodiments. As reflected in FIG. 11, a physical infrastructure management framework 1150A may be implemented to facilitate management of a physical infrastructure 1100A of data center 1100.
In various embodiments, one function of physical infrastructure management framework 1150A may be to manage automated maintenance functions within data center 1100, such as the use of robotic maintenance equipment to service computing equipment within physical infrastructure 1100A. In some embodiments, physical infrastructure 1100A may feature an advanced telemetry system that performs telemetry reporting that is sufficiently robust to support remote automated management of physical infrastructure 1100A. In various embodiments, telemetry information provided by such an advanced telemetry system may support features such as failure prediction/prevention capabilities and capacity planning capabilities. In some embodiments, physical infrastructure management framework 1150A may also be configured to manage authentication of physical infrastructure components using hardware attestation techniques. For example, robots may verify the authenticity of components prior to installation by analyzing information collected from a radio frequency identification (RFID) tag associated with each component to be installed. The embodiments are not limited in this context.

As shown in FIG. 11, physical infrastructure 1100A of data center 1100 may comprise an optical fabric 1112, which may include a dual-mode optical switching infrastructure 1114. Optical fabric 1112 and dual-mode optical switching infrastructure 1114 may be the same as or similar to optical fabric 412 of FIG. 4 and dual-mode optical switching infrastructure 514 of FIG. 5, respectively, and may provide high-bandwidth, low-latency, multi-protocol connectivity. As discussed above with reference to FIG. 1, in various embodiments the availability of such connectivity may make it feasible to disaggregate and dynamically pool resources such as accelerators, memory, and storage. In some embodiments, for example, one or more pooled accelerator sleds 1130 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of accelerator resources (such as co-processors and/or FPGAs, for example) that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114.

In another example, in various embodiments, one or more pooled storage sleds 1132 may be included among the physical infrastructure 1100A of data center 1100, each of which may comprise a pool of storage resources that is globally accessible to other sleds via optical fabric 1112 and dual-mode optical switching infrastructure 1114. In some embodiments, such pooled storage sleds 1132 may comprise pools of solid-state storage devices, such as solid-state drives (SSDs). In various embodiments, one or more high-performance processing sleds 1134 may be included among the physical infrastructure 1100A of data center 1100. In some embodiments, high-performance processing sleds 1134 may comprise pools of high-performance processors, as well as cooling features that enhance air cooling to yield a higher thermal envelope of up to 250W or more. In various embodiments, any given high-performance processing sled 1134 may feature an expansion connector 1117 that can accept a far memory expansion sled, such that the far memory that is locally available to that high-performance processing sled 1134 is disaggregated from the processors and near memory comprised on that sled.
In some embodiments, such a high-performance processing sled 1134 may be configured with far memory using an expansion sled that comprises low-latency SSD storage. The optical infrastructure allows compute resources on one sled to utilize remote accelerator/FPGA, memory, and/or SSD resources that are disaggregated on a sled located in the same rack or in any other rack in the data center. The remote resources can be located one switch hop away or two switch hops away in the spine-leaf network architecture described above with reference to FIG. 5. The embodiments are not limited in this context.

In various embodiments, one or more layers of abstraction may be applied to the physical resources of physical infrastructure 1100A in order to define a virtual infrastructure, such as a software-defined infrastructure 1100B. In some embodiments, virtual computing resources 1136 of software-defined infrastructure 1100B may be allocated to support the provision of cloud services 1140. In various embodiments, particular sets of virtual computing resources 1136 may be grouped for provision to cloud services 1140 in the form of SDI services 1138. Examples of cloud services 1140 may include, without limitation, software as a service (SaaS) services 1142, platform as a service (PaaS) services 1144, and infrastructure as a service (IaaS) services 1146.

In some embodiments, management of software-defined infrastructure 1100B may be conducted using a virtual infrastructure management framework 1150B. In various embodiments, virtual infrastructure management framework 1150B may be designed to implement workload fingerprinting techniques and/or machine-learning techniques in conjunction with managing allocation of virtual computing resources 1136 and/or SDI services 1138 to cloud services 1140. In some embodiments, virtual infrastructure management framework 1150B may use/consult telemetry data in conjunction with performing such resource allocation. In various embodiments, an application/service management framework 1150C may be implemented in order to provide quality of service (QoS) management capabilities for cloud services 1140. The embodiments are not limited in this context.

As shown in FIG. 12, an illustrative system 1210 for managing the execution efficiency of workloads within managed nodes 1260 includes an orchestrator server 1240 in communication with a set of managed nodes 1260. Each managed node 1260 may be embodied as an assembly of resources (e.g., physical resources 206), such as computing resources (e.g., physical computing resource 205-4), storage resources (e.g., physical storage resource 205-1), accelerator resources (e.g., physical accelerator resource 205-2), or other resources (e.g., physical memory resource 205-3), from the same or different sleds (e.g., sleds 204-1, 204-2, 204-3, 204-4, etc.) or racks (e.g., one or more of racks 302-1 through 302-32). Each managed node 1260 may be established, defined, or "spun up" by orchestrator server 1240 at the time a workload is to be assigned to the managed node 1260 or at any other time, and may exist regardless of whether any workloads are currently assigned to the managed node 1260. System 1210 may be implemented in accordance with data centers 100, 300, 400, 1100 described above with reference to FIGS. 1, 3, 4, and 11. In the illustrative embodiment, the set of managed nodes 1260 includes managed nodes 1250, 1252, and 1254.
Although three managed nodes 1260 are shown in the set, it should be understood that in other embodiments the set may contain a different number of managed nodes 1260 (e.g., tens of thousands). System 1210 may be located in a data center and provide storage and computing services (e.g., cloud services) over a network 1230 to client devices 1220 that are in communication with system 1210. Orchestrator server 1240 may support a cloud operating environment, such as OpenStack, and managed nodes 1260 may execute one or more applications or processes (i.e., workloads), such as in virtual machines or containers, on behalf of users of client devices 1220. As discussed in more detail herein, orchestrator server 1240, in operation, is configured to assign workloads to managed nodes 1260 and to receive, from each managed node 1260, efficiency data indicative of the efficiency of components in the managed node 1260, such as individual cores of one or more processors, as the managed node 1260 executes the assigned workloads. Orchestrator server 1240 may analyze the efficiency data and determine adjustments to improve the efficiency of the components, such as by reassigning workload threads to different cores, processors, or managed nodes 1260, and/or by adjusting the priorities of threads that are bound to (e.g., spend the majority of their time in) particular pipeline stages of each core, to reduce stalling of the cores (e.g., when the number of cycles per instruction exceeds a threshold).

In operation, in the illustrative embodiment, each managed node 1260 is configured to execute the assigned workloads, generate telemetry data indicative of the execution efficiency of the workloads within the managed node 1260, such as by utilizing counters in each stage of each core's pipeline to track each thread's utilization of each pipeline stage, identify patterns (e.g., fingerprints) in the stage usage of each thread over a predefined period of time (such as one second), determine adjustments, such as reassigning threads to other cores or processors and/or adjusting thread priorities, to improve efficiency as a function of the telemetry data, and apply the adjustments. As such, the managed nodes 1260 may determine one or more of the adjustments based on their local view of efficiency within the managed node 1260, and/or may receive adjustments from orchestrator server 1240, which has a data-center-wide view of the efficiency data from all of the managed nodes 1260. In the illustrative embodiment, increasing the execution efficiency of a workload may be defined as reducing the number of cycles per instruction executed by a core of a corresponding processor of the managed node 1260. Conversely, reducing the execution efficiency of a workload may be defined as increasing the number of cycles per instruction executed by a core of a corresponding processor of the managed node 1260.
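By way of non-limiting illustration, this metric can be sketched as follows (Python; the names CORE_FREQUENCY_HZ, WINDOW_SECONDS, and STALL_CPI_THRESHOLD are assumptions introduced here for illustration and do not appear in the embodiments above):

    # A minimal sketch of the per-core efficiency metric described above.
    # All constants are illustrative assumptions, not values from the embodiments.
    CORE_FREQUENCY_HZ = 2_400_000_000   # assumed core frequency
    WINDOW_SECONDS = 1.0                # the predefined time period (one second)
    STALL_CPI_THRESHOLD = 4.0           # assumed predefined cycles per instruction

    def cycles_per_instruction(instructions_retired: int) -> float:
        """Cycles per instruction over the window: available cycles / instructions."""
        cycles = CORE_FREQUENCY_HZ * WINDOW_SECONDS
        return cycles / max(instructions_retired, 1)

    def is_stalled(instructions_retired: int) -> bool:
        """A core is treated as stalled when its cycles per instruction exceed the threshold."""
        return cycles_per_instruction(instructions_retired) > STALL_CPI_THRESHOLD

Under this metric, fewer cycles per instruction corresponds to higher execution efficiency, consistent with the definitions given above.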
Referring now to FIG. 13, each managed node 1260 may be embodied as any type of computing device capable of performing the functions described herein, including receiving an assignment of a workload, executing the workload, generating telemetry data as the workload is executed, analyzing the execution efficiency of the workload within the managed node 1260 using the telemetry data, providing efficiency data indicative of the execution efficiency to orchestrator server 1240, determining configuration adjustments to improve the execution efficiency of the workload within the managed node 1260, and applying the adjustments. For example, managed node 1260 may be embodied as a computer, a distributed computing system, one or more sleds (e.g., sleds 204-1, 204-2, 204-3, 204-4, etc.), a server (e.g., stand-alone, rack-mounted, blade, etc.), a multiprocessor system, a network appliance (e.g., physical or virtual), a desktop computer, a workstation, a laptop computer, a notebook computer, or a processor-based system. As shown in FIG. 13, illustrative managed node 1260 includes a central processing unit (CPU) 1302, main memory 1304, an input/output (I/O) subsystem 1306, communication circuitry 1308, and one or more data storage devices 1312. Of course, in other embodiments, managed node 1260 may include other or additional components, such as those commonly found in a computer (e.g., display, peripheral devices, etc.). Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. For example, main memory 1304, or portions thereof, may be incorporated in CPU 1302 in some embodiments.

CPU 1302 may be embodied as any type of processor or processors capable of performing the functions described herein. CPU 1302 may be embodied as a single- or multi-core processor, a microcontroller, or other processor or processing/controlling circuit. In some embodiments, CPU 1302 may be embodied as, include, or be coupled to a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware that facilitates performance of the functions described herein. In the illustrative embodiment, CPU 1302 includes multiple cores 1320, which may be embodied as special-purpose circuits and/or components that process the instructions of the threads of the workloads in a pipeline having various stages, such as a front-end stage, in which instructions are fetched and decoded into operations to be performed, a back-end stage, in which threads await data to be returned from memory or await completion of complex computations, a bad speculation stage, in which operations associated with mispredicted branches are cancelled, and a retirement stage, in which threads are retired. In the illustrative embodiment, each core 1320 includes a set of counters 1322, one counter 1322 per pipeline stage. Each counter 1322 may be embodied as any device that generates a signal when an instruction of a thread is processed in the corresponding stage. As such, by tracking the number of cycles (e.g., as a function of the frequency of the core) and the number of instructions processed by the core 1320 within a given time period (e.g., one second), CPU 1302 may determine, as indicated by counters 1322, the number of cycles per instruction for each core and the stage in which each thread spends the majority of its time (e.g., of the cycles of the core). Accordingly, a thread that spends the majority of its cycles in the front-end stage is "front-end bound", a thread that spends the majority of its cycles in the back-end stage is "back-end bound", and so on. As discussed above, managed node 1260 may include resources distributed across multiple sleds, and in such embodiments, CPU 1302 may include portions thereof located on the same sled or on different sleds.
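As a non-limiting sketch of how the counter data might be reduced to such a classification (the function name, stage names, and counter semantics are assumptions introduced here for illustration):

    # A sketch, under assumed counter semantics, of deriving the stage in which
    # a thread spends the majority of its cycles from per-stage counters.
    STAGES = ("front_end", "back_end", "bad_speculation", "retirement")

    def bound_stage(stage_cycles: dict) -> str:
        """Return the pipeline stage that consumed the most of a thread's cycles.
        stage_cycles maps a stage name to the share of cycles the thread's
        instructions spent there during the sampling window (e.g., one second)."""
        return max(STAGES, key=lambda stage: stage_cycles.get(stage, 0))

    # Example: a thread spending 80% of its cycles awaiting memory is back-end bound.
    sample = {"front_end": 0.10, "back_end": 0.80,
              "bad_speculation": 0.05, "retirement": 0.05}
    assert bound_stage(sample) == "back_end"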
Main memory 1304 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage device capable of performing the functions described herein. In some embodiments, all or a portion of main memory 1304 may be integrated into CPU 1302. In operation, main memory 1304 may store various software and data used during operation, such as telemetry data, fingerprint data, priority data, pipeline utilization map data, operating systems, applications, programs, libraries, and drivers. As discussed above, managed node 1260 may include resources distributed across multiple sleds, and in such embodiments, main memory 1304 may include portions thereof located on the same sled or on different sleds.

I/O subsystem 1306 may be embodied as circuitry and/or components that facilitate input/output operations with CPU 1302, main memory 1304, and other components of managed node 1260. For example, I/O subsystem 1306 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems that facilitate the input/output operations. In some embodiments, I/O subsystem 1306 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of CPU 1302, main memory 1304, and other components of managed node 1260, in a single integrated circuit chip.

Communication circuitry 1308 may be embodied as any communication circuit, device, or collection thereof capable of enabling communications over network 1230 between managed node 1260 and another computing device (e.g., orchestrator server 1240 and/or other managed nodes 1260). Communication circuitry 1308 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.

Illustrative communication circuitry 1308 includes a network interface controller (NIC) 1310, which may also be referred to as a host fabric interface (HFI). NIC 1310 may be embodied as one or more add-in boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by managed node 1260 to connect with another computing device (e.g., orchestrator server 1240 and/or other managed nodes 1260). In some embodiments, NIC 1310 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, NIC 1310 may include a local processor (not shown) and/or a local memory (not shown) that are both local to NIC 1310. In such embodiments, the local processor of NIC 1310 may be capable of performing one or more of the functions of CPU 1302 described herein. Additionally or alternatively, in such embodiments, the local memory of NIC 1310 may be integrated into one or more components of managed node 1260 at the board level, socket level, chip level, and/or other levels. As discussed above, managed node 1260 may include resources distributed across multiple sleds, and in such embodiments, communication circuitry 1308 may include portions thereof located on the same sled or on different sleds.

The one or more illustrative data storage devices 1312 may be embodied as any type of device configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
Each data storage device 1312 may include a system partition that stores data and firmware code for the data storage device 1312. Each data storage device 1312 may also include an operating system partition that stores data files and executables for an operating system.

Additionally, managed node 1260 may include a display 1314. Display 1314 may be embodied as, or otherwise use, any suitable display technology, including, for example, a liquid crystal display (LCD), a light-emitting diode (LED) display, a cathode ray tube (CRT) display, a plasma display, and/or other display. Display 1314 may include a touchscreen sensor that uses any suitable touchscreen input technology to detect the user's tactile selection of information displayed on the display, including, but not limited to, resistive touchscreen sensors, capacitive touchscreen sensors, surface acoustic wave (SAW) touchscreen sensors, infrared touchscreen sensors, optical imaging touchscreen sensors, acoustic touchscreen sensors, and/or other types of touchscreen sensors. Additionally or alternatively, managed node 1260 may include one or more peripheral devices 1316. Such peripheral devices 1316 may include any type of peripheral device commonly found in a computing device, such as speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.

Client device 1220 and orchestrator server 1240 may have components similar to those described in FIG. 13. The description of those components of managed node 1260 is equally applicable to the description of the components of client device 1220 and orchestrator server 1240 and is not repeated herein for clarity of the description, except that, in the illustrative embodiment, client device 1220 and orchestrator server 1240 may not include the counters 1322. Further, it should be appreciated that any of client device 1220 and orchestrator server 1240 may include other components, sub-components, and devices commonly found in a computing device, which are not discussed above in reference to managed node 1260 and are not discussed herein for clarity of the description.

As described above, client device 1220, orchestrator server 1240, and managed nodes 1260 are illustratively in communication via network 1230, which may be embodied as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.

Referring now to FIG. 14, in the illustrative embodiment, each managed node 1260 may establish an environment 1400 during operation. Illustrative environment 1400 includes a network communicator 1420, a workload executor 1430, and a resource manager 1440. Each of the components of environment 1400 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of environment 1400 may be embodied as circuitry or a collection of electrical devices (e.g., network communicator circuitry 1420, workload executor circuitry 1430, resource manager circuitry 1440, etc.).
It should be appreciated that, in such embodiments, one or more of network communicator circuitry 1420, workload executor circuitry 1430, or resource manager circuitry 1440 may form a portion of one or more of CPU 1302, main memory 1304, I/O subsystem 1306, and/or other components of managed node 1260.

In the illustrative embodiment, environment 1400 includes telemetry data 1402, which may be embodied as data indicative of the performance and conditions of managed node 1260 as the managed node 1260 executes the workloads assigned to it. In the illustrative embodiment, telemetry data 1402 includes data from counters 1322 that indicate the cycles per instruction for each core 1320 and which pipeline stage(s) each thread is utilizing at any given time (e.g., that instructions from the corresponding thread are present in the corresponding pipeline stage). Additionally, illustrative environment 1400 includes fingerprint data 1404, which may be embodied as data indicative of a pattern of usage of the pipeline stages by each thread over a predefined period of time (e.g., one second). Additionally, in the illustrative embodiment, environment 1400 includes priority data 1406, which may be embodied as any data indicative of the present priority associated with each thread. In the illustrative embodiment, and as described in more detail herein, threads are scheduled for execution in the cores 1320 according to their corresponding priorities, which may be adjusted to reduce stalling of the cores and otherwise improve the execution efficiency of the workloads. Additionally, in the illustrative embodiment, environment 1400 includes pipeline utilization map data 1408, which may be embodied as any data indicative of the usage of the pipeline stages by the threads, including the cycles per instruction, the proportion of cycles spent in each pipeline stage (e.g., 80% back-end stage, 10% front-end stage, 5% bad speculation stage, and 5% retirement stage, etc.), and/or the cycles per instruction for all cores 1320 of all processors of CPU 1302 of managed node 1260.
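By way of non-limiting illustration, one possible shape for such map data is sketched below; the class names and fields are assumptions introduced here and are not defined by the embodiments above:

    # A sketch of one possible shape for the pipeline utilization map data.
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    @dataclass
    class ThreadUtilization:
        cycles_per_instruction: float
        # Fraction of cycles per pipeline stage, e.g., {"back_end": 0.80, ...}
        stage_fractions: Dict[str, float] = field(default_factory=dict)

    @dataclass
    class PipelineUtilizationMap:
        # (processor id, core id) -> thread id -> utilization snapshot
        cores: Dict[Tuple[int, int], Dict[int, ThreadUtilization]] = field(
            default_factory=dict)

        def record(self, processor: int, core: int, thread: int,
                   snapshot: ThreadUtilization) -> None:
            """Store the latest snapshot for a thread on a given core."""
            self.cores.setdefault((processor, core), {})[thread] = snapshot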
In illustrative environment 1400, network communicator 1420, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, as discussed above, is configured to facilitate, respectively, inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from managed node 1260. To do so, network communicator 1420 is configured to receive and process data packets and to prepare and send data packets to a system or computing device (e.g., orchestrator server 1240). Accordingly, in some embodiments, at least a portion of the functionality of network communicator 1420 may be performed by communication circuitry 1308, and, in the illustrative embodiment, by NIC 1310.

Workload executor 1430, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, as discussed above, is configured to execute the workloads assigned to managed node 1260 and to generate telemetry data in the process, for use by resource manager 1440. To do so, in the illustrative embodiment, workload executor 1430 includes a telemetry generator 1432, which, in the illustrative embodiment, is configured to receive data from CPU 1302 and other components, such as main memory 1304, I/O subsystem 1306, communication circuitry 1308, and/or data storage devices 1312, and to parse and store the data as telemetry data 1402, in association with identifiers of the threads of the workloads on whose behalf the operations were performed when the data was generated and of the corresponding components. In the illustrative embodiment, telemetry generator 1432 may actively poll each of the components within managed node 1260 (e.g., CPU 1302, main memory 1304, I/O subsystem 1306, communication circuitry 1308, data storage devices 1312, etc.) for the telemetry data 1402, or may passively receive the telemetry data 1402 from the components, such as by monitoring one or more registers or the like.

Resource manager 1440, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to analyze telemetry data 1402 to determine the execution efficiency of the workloads in managed node 1260, provide data indicative of the efficiency to orchestrator server 1240, determine adjustments to improve the execution efficiency of the workloads in managed node 1260, and apply the adjustments as the workloads are executed. To do so, in the illustrative embodiment, resource manager 1440 includes a thread fingerprinter 1442, a thread prioritizer 1444, a thread reassigner 1446, and a map generator 1448.

In the illustrative embodiment, thread fingerprinter 1442 is configured to analyze the usage of each stage of a core's pipeline by each thread over a predefined period of time (e.g., one second) to identify a pattern, and to store the pattern as a fingerprint in fingerprint data 1404. The pattern may indicate that a thread is likely to spend one period of time in one stage, typically followed by a period of time in another stage, and then a subsequent period of time in yet another pipeline stage, typically on a repeating basis (e.g., every second). As such, fingerprint data 1404 may be used to categorize a thread as primarily utilizing, and being bound by, a particular stage (e.g., front-end bound, back-end bound, etc.), and may be used to predict a thread's future pipeline stage utilization based on its present pipeline stage utilization.
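As a non-limiting sketch of the fingerprinting idea, the dominant stage of a thread could be sampled at fixed intervals over the window and the samples reduced to their shortest repeating unit; the period-detection heuristic below is an assumption introduced for illustration only:

    # A sketch of fingerprinting: reduce per-interval dominant-stage samples
    # to their shortest repeating unit, then use that unit for prediction.
    def fingerprint(stage_samples: list) -> tuple:
        """E.g., ["front_end", "back_end", "back_end"] * 2 reduces to
        ("front_end", "back_end", "back_end")."""
        n = len(stage_samples)
        for period in range(1, n + 1):
            if n % period == 0 and stage_samples == stage_samples[:period] * (n // period):
                return tuple(stage_samples[:period])
        return tuple(stage_samples)

    def predicted_stage(fp: tuple, future_interval: int) -> str:
        """Predict the stage a thread will occupy in a future interval."""
        return fp[future_interval % len(fp)]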
In the illustrative embodiment, thread prioritizer 1444 is configured to initially assign a priority to each thread (e.g., a default priority) and to adjust the priorities, using telemetry data 1402 and fingerprint data 1404, to improve the execution efficiency of the threads. In the illustrative embodiment, for each processor of CPU 1302, managed node 1260 maintains a run queue of the threads and their associated priorities, and the threads are given cycles of the processor cores 1320 in order of their priorities. In the illustrative embodiment, thread prioritizer 1444 is configured to map priorities to the threads such that front-end bound threads are given a high priority (e.g., a number in the range of 0-75), retiring threads (e.g., threads in the retirement stage) are also given a high priority (e.g., in the range of 0-75), bad speculation stage threads are given a lower priority (e.g., in the range of 76-110), and back-end stage threads are given the lowest priority (e.g., in the range of 111-140), because those threads are typically awaiting access to data from memory or awaiting completion of complex computations.

In the illustrative embodiment, thread reassigner 1446 is configured to reassign threads to another processor of CPU 1302 of managed node 1260, or to other cores 1320 in the same processor, to match complementary threads (e.g., a front-end bound thread with a back-end bound thread) and otherwise improve the execution efficiency of the workloads (e.g., by reducing the cycles per instruction). In the illustrative embodiment, map generator 1448 is configured to generate pipeline utilization map data 1408 from telemetry data 1402 and fingerprint data 1404. In the illustrative embodiment, components of resource manager 1440, such as thread prioritizer 1444 and thread reassigner 1446, may analyze the pipeline utilization map data 1408 generated by map generator 1448 to identify the utilization of the pipeline stages by the threads assigned to the corresponding cores 1320, and thereby identify adjustments to the priorities of the threads and potential reassignments of threads to other cores 1320 in managed node 1260.

It should be appreciated that each of thread fingerprinter 1442, thread prioritizer 1444, thread reassigner 1446, and map generator 1448 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, thread fingerprinter 1442 may be embodied as a hardware component, while thread prioritizer 1444, thread reassigner 1446, and map generator 1448 are embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.
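By way of non-limiting illustration, the stage-based priority mapping described above can be sketched as follows; choosing the midpoint of each example range is an assumed policy introduced here for illustration:

    # A sketch of the stage-based priority mapping, using the example ranges
    # from the description (lower number = higher priority).
    PRIORITY_RANGES = {
        "front_end": (0, 75),        # front-end bound: high priority
        "retirement": (0, 75),       # retiring threads: also high priority
        "bad_speculation": (76, 110),
        "back_end": (111, 140),      # typically awaiting memory or computation
    }

    def priority_for(stage: str) -> int:
        """Map a thread's primarily-utilized stage to a scheduling priority."""
        low, high = PRIORITY_RANGES[stage]
        return (low + high) // 2  # assumed policy: midpoint of the range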
Referring now to FIG. 15, in the illustrative embodiment, orchestrator server 1240 may establish an environment 1500 during operation. Illustrative environment 1500 includes a network communicator 1520, a workload assigner 1530, and an efficiency manager 1540. Each of the components of environment 1500 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of environment 1500 may be embodied as circuitry or a collection of electrical devices (e.g., network communicator circuitry 1520, workload assigner circuitry 1530, efficiency manager circuitry 1540, etc.). It should be appreciated that, in such embodiments, one or more of network communicator circuitry 1520, workload assigner circuitry 1530, or efficiency manager circuitry 1540 may form a portion of one or more of CPU 1302, main memory 1304, I/O subsystem 1306, and/or other components of orchestrator server 1240.

In the illustrative embodiment, environment 1500 includes workload data 1502, which may be embodied as data indicative of the workloads presently being executed by managed nodes 1260 and the workloads that have not yet been assigned to managed nodes 1260. Additionally, in the illustrative embodiment, environment 1500 includes efficiency data 1504, which may be embodied as data indicative of the execution efficiency of the workloads among the cores 1320 of the processors of managed nodes 1260, such as the fingerprint data 1404 and the pipeline utilization map data 1408, which may be provided by the corresponding managed nodes 1260 to orchestrator server 1240. Additionally, environment 1500 includes adjustment data 1506, which may be embodied as adjustments to be made to the configuration of the threads of the workloads among the cores 1320 of managed nodes 1260 to improve the execution efficiency of the workloads, including adjustments to the priorities of the threads and/or reassignments of threads to other cores 1320.

In illustrative environment 1500, network communicator 1520, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, as discussed above, is configured to facilitate, respectively, inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from orchestrator server 1240. To do so, network communicator 1520 is configured to receive and process data packets and to prepare and send data packets to a system or computing device (e.g., client device 1220, one or more managed nodes 1260, etc.). Accordingly, in some embodiments, at least a portion of the functionality of network communicator 1520 may be performed by communication circuitry 1308, and, in the illustrative embodiment, by NIC 1310.

In the illustrative embodiment, workload assigner 1530, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, as discussed above, is configured to assign workloads to managed nodes 1260. In doing so, workload assigner 1530 may, based on information from efficiency manager 1540, described in more detail herein, specify to the assigned managed node 1260 one or more specific cores 1320 within CPU 1302 on which to execute the threads of the workload and/or priorities to be assigned to the threads. In the illustrative embodiment, workload assigner 1530 may additionally reassign workloads across cores within the same managed node 1260 or even from one managed node 1260 to another managed node 1260.

In the illustrative embodiment, efficiency manager 1540, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, as discussed above, is configured to analyze the efficiency data 1504 from the managed nodes 1260 and to determine adjustments to improve the execution efficiency of the workloads. To do so, in the illustrative embodiment, efficiency manager 1540 includes a map combiner 1542 and an adjustment determiner 1544. In the illustrative embodiment, map combiner 1542 is configured to combine the pipeline utilization map data 1408 received in the efficiency data 1504 from each managed node 1260 to generate a map of the pipeline utilization of the cores 1320 of all of the managed nodes 1260. With such a map, efficiency manager 1540 may determine that a core 1320 of one managed node 1260 could more efficiently execute a thread presently assigned to a workload of a different managed node 1260, because that core is presently executing a thread that is complementary to (e.g., bound by a different pipeline stage than) the thread to be reassigned.
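As a non-limiting sketch of the map-combining step (the dictionary layout and node identifiers are assumptions introduced here for illustration):

    # A sketch of merging per-node pipeline utilization maps into a single
    # data-center-wide view keyed by (node, processor, core).
    def combine_maps(node_maps: dict) -> dict:
        """node_maps: node id -> {(processor, core): {thread id: snapshot}}."""
        combined = {}
        for node_id, cores in node_maps.items():
            for (processor, core), threads in cores.items():
                combined[(node_id, processor, core)] = dict(threads)
        return combined

    # Example: two nodes, one core each, merged into one map.
    merged = combine_maps({
        1250: {(0, 0): {7: "front_end bound"}},
        1252: {(0, 0): {9: "back_end bound"}},
    })
    assert merged[(1250, 0, 0)] == {7: "front_end bound"}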
In the illustrative embodiment, adjustment determiner 1544 is configured to determine adjustments to the assignments of threads to cores 1320 in managed nodes 1260 and/or adjustments to the priorities of threads, similar to thread prioritizer 1444 and thread reassigner 1446 of environment 1400 of FIG. 14, except that adjustment determiner 1544 may additionally determine adjustments across managed nodes 1260 rather than strictly within a single managed node 1260. It should be appreciated that each of map combiner 1542 and adjustment determiner 1544 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, map combiner 1542 may be embodied as a hardware component, while adjustment determiner 1544 is embodied as a virtualized hardware component or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.

Referring now to FIG. 16, in use, each managed node 1260 may execute a method 1600 for managing the execution efficiency of workloads within the managed node 1260 as the workloads are executed. Method 1600 begins with block 1602, in which, in the illustrative embodiment, managed node 1260 determines whether to manage the execution efficiency of the workloads. In the illustrative embodiment, managed node 1260 determines to manage the execution efficiency if managed node 1260 is powered on and in communication with orchestrator server 1240. In other embodiments, managed node 1260 may determine whether to manage the efficiency based on other factors. Regardless, in response to a determination to manage the efficiency, in the illustrative embodiment, method 1600 advances to block 1604, in which managed node 1260 receives an assignment of one or more workloads. In the illustrative embodiment, in receiving an assignment, managed node 1260 receives an identification of a workload assigned by orchestrator server 1240 (e.g., an executable name, a location of an executable, etc.). In doing so, managed node 1260 may additionally receive an identification of one or more cores 1320 on which to execute the threads of the assigned workload and/or priorities to assign to the threads of the workload.

In block 1606, managed node 1260 executes the threads of the assigned workloads. In doing so, in the illustrative embodiment, managed node 1260 assigns the threads to one or more of the cores 1320, as indicated in block 1608. Managed node 1260 may assign the threads to the cores 1320 based on an indication included in the initial assignment of the workloads from orchestrator server 1240, based on a random selection, or based on any other method of selecting cores 1320. In block 1610, managed node 1260 generates telemetry data 1402 as the workloads are executed. In doing so, in the illustrative embodiment, managed node 1260 identifies the present pipeline stage of each thread on each core 1320 using the corresponding counters (e.g., counters 1322), as indicated in block 1612. As described above, each counter 1322 is configured to generate a signal indicative of the presence of an instruction of a particular thread in the pipeline stage associated with that counter 1322. Additionally, managed node 1260 may obtain, from one or more other components of managed node 1260, such as communication circuitry 1308 (e.g., NIC 1310), main memory 1304, I/O subsystem 1306, and/or data storage devices 1312,
telemetry data 1402 indicative of the performance and conditions of those components, as indicated in block 1614.

In block 1616, managed node 1260 analyzes the generated telemetry data 1402 to determine the execution efficiency of the threads of the workloads. In doing so, in the illustrative embodiment, managed node 1260 determines the number of cycles per instruction for each core 1320, as indicated in block 1618. In the illustrative embodiment, managed node 1260 does so by converting the signals from the counters 1322 to a number of instructions processed and comparing that number to the number of cycles of the core for a predefined time period (e.g., the frequency multiplied by one second). Additionally, in the illustrative embodiment, managed node 1260 compares the cycles per instruction to a predefined number of cycles per instruction to identify any stalled cores 1320 (e.g., cores 1320 for which the cycles per instruction is greater than the predefined number), as indicated in block 1620.

Additionally, in the illustrative embodiment, managed node 1260 generates a fingerprint of the executed threads, as indicated in block 1622. In the illustrative embodiment, managed node 1260 may generate the fingerprints by analyzing the usage of each stage of a core's pipeline by each thread over a predefined period of time (e.g., one second) to identify a pattern, and storing the pattern in fingerprint data 1404. For example, managed node 1260 may determine that a thread is likely to utilize one stage of the pipeline during one period of time, then another stage during a subsequent period of time, and then, typically, a third stage of the pipeline during a further period of time, after which the pattern repeats. As indicated in block 1624, in the illustrative embodiment, managed node 1260 also generates a map of the pipeline stage utilization of each thread on each core 1320 of each processor of CPU 1302 (e.g., pipeline utilization map data 1408), as described above with reference to FIG. 14. Additionally, as indicated in block 1626, in the illustrative embodiment, managed node 1260 determines the pipeline stage primarily used by each thread of managed node 1260, such as by determining, from the fingerprint generated in block 1622, the pipeline stage that is most utilized in the pattern during the predefined period of time (e.g., within a one-second period).

In block 1628, in the illustrative embodiment, managed node 1260 determines the present capacity of each core 1320 and the predicted capacity of each core 1320. In doing so, as indicated in block 1630, managed node 1260 may determine the capacities as a function of the identification of the primary pipeline stage utilized by each thread (as described with reference to block 1626) and/or as a function of the fingerprints of the threads assigned to each core 1320. For example, managed node 1260 may determine that, if a core 1320 is presently executing a thread that primarily utilizes the front-end stage, or is predicted, based on the corresponding fingerprint, to utilize the front-end stage, then that core 1320 has relatively little capacity for another front-end bound thread. Conversely, the core 1320 has more capacity for threads that are complementary to the presently executing threads (e.g., threads that primarily use the back-end stage or are predicted to transition to using the back-end stage).
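By way of non-limiting illustration, the capacity estimate of blocks 1628-1630 might be sketched as follows; the scoring rule is an assumption introduced here, not the method of the embodiments:

    # A sketch of the capacity estimate: a core has little capacity for threads
    # bound by a stage its current threads already saturate, and more capacity
    # for complementary threads.
    def core_capacity(current_stages: list, candidate_stage: str) -> float:
        """current_stages: the dominant stage of each thread now on the core.
        Returns a 0..1 score; low when the candidate competes for the same stage."""
        if not current_stages:
            return 1.0
        competing = sum(1 for stage in current_stages if stage == candidate_stage)
        return 1.0 - competing / len(current_stages)

    # A core running only front-end bound threads has no room for another,
    # but full capacity for a complementary back-end bound thread.
    assert core_capacity(["front_end", "front_end"], "front_end") == 0.0
    assert core_capacity(["front_end", "front_end"], "back_end") == 1.0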
Method 1600 then advances to block 1632 of FIG. 17, in which, in the illustrative embodiment, managed node 1260 provides orchestrator server 1240 with efficiency data indicative of the execution efficiency of the threads of the workloads in managed node 1260 (e.g., efficiency data 1504).

Referring now to FIG. 17, in providing the efficiency data 1504, in the illustrative embodiment, managed node 1260 provides the pipeline stage utilization map (e.g., pipeline utilization map data 1408) to orchestrator server 1240, as indicated in block 1634. Additionally, in the illustrative embodiment, managed node 1260 provides fingerprint data 1404 to orchestrator server 1240, as indicated in block 1636. Subsequently, managed node 1260 determines adjustments to the configuration of the threads among the cores 1320 to improve the execution efficiency, as indicated in block 1638. In doing so, in the illustrative embodiment, managed node 1260 determines adjustments to reduce the cycles per instruction in each core 1320, as indicated in block 1640. In the illustrative embodiment, managed node 1260 may determine adjustments to the priority of each thread as a function of the stage primarily utilized by each thread, as indicated in block 1642. For example, managed node 1260 may set the priorities of the threads such that front-end bound threads (e.g., threads primarily in the front-end stage of the pipeline) are given a high priority (e.g., a number in the range of 0-75), retiring threads (e.g., threads primarily in the retirement stage) are also given a high priority (e.g., in the range of 0-75), bad speculation stage threads are given a lower priority (e.g., in the range of 76-110), and back-end stage threads are given the lowest priority (e.g., in the range of 111-140), because those threads are typically awaiting access to data from memory or awaiting completion of complex computations.

Managed node 1260 may additionally determine a reassignment of one or more of the threads to a different core 1320 of the same processor, or to a core 1320 of a different processor in managed node 1260, as indicated in block 1644. In doing so, managed node 1260 may determine the reassignments to match complementary threads (e.g., threads that primarily utilize different pipeline stages) on the same core 1320, as indicated in block 1646. For example, and as indicated in block 1648, managed node 1260 may match (e.g., determine to reassign) a front-end bound thread with a back-end bound thread to execute on the same core 1320.
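As a non-limiting sketch of the complementary-thread matching of blocks 1644-1648 (the greedy pairing below is an assumed policy introduced for illustration, not the method of the embodiments):

    # A sketch of pairing front-end bound threads with back-end bound threads
    # on the same core, in order.
    def match_complementary(front_end_bound: list, back_end_bound: list,
                            cores: list) -> dict:
        """Greedily place one thread of each kind on each core."""
        assignments = {core: [] for core in cores}
        for core, fe_thread, be_thread in zip(cores, front_end_bound, back_end_bound):
            assignments[core] = [fe_thread, be_thread]  # complementary pair
        return assignments

    print(match_complementary(["fe0", "fe1"], ["be0", "be1"], [0, 1]))
    # {0: ['fe0', 'be0'], 1: ['fe1', 'be1']}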
As indicated in block 1650, managed node 1260 may additionally or alternatively receive adjustments from orchestrator server 1240, such as recommended changes to one or more thread priorities or thread reassignments (e.g., after orchestrator server 1240 has analyzed the efficiency data 1504 provided in block 1632). Subsequently, as indicated in block 1652, in the illustrative embodiment, managed node 1260 applies the adjustments determined in block 1638. In applying the adjustments, managed node 1260 may apply the adjusted priorities to each thread, as indicated in block 1654. Managed node 1260 may also reassign threads to other cores 1320 of the same processor or to cores 1320 of other processors, as indicated in block 1656. Additionally or alternatively, in the illustrative embodiment, managed node 1260 may stop one or more threads of one or more executing workloads to enable orchestrator server 1240 to coordinate a migration of the workload to another managed node 1260 that has been identified as having cores with the capacity to execute the threads of the workload more efficiently, as indicated in block 1658. Method 1600 then loops back to block 1604 of FIG. 16, in which managed node 1260 may receive an assignment of one or more additional workloads from orchestrator server 1240.

Referring now to FIG. 18, in use, orchestrator server 1240 may execute a method 1800 for managing the execution efficiency of the workloads executed by managed nodes 1260. Method 1800 begins with block 1802, in which, in the illustrative embodiment, orchestrator server 1240 determines whether to manage the execution efficiency of the workloads among managed nodes 1260. In the illustrative embodiment, orchestrator server 1240 determines to manage the efficiency if orchestrator server 1240 is powered on and in communication with managed nodes 1260. In other embodiments, orchestrator server 1240 may determine whether to manage the efficiency based on other factors. Regardless, in response to a determination to manage the efficiency, method 1800 advances to block 1804, in which orchestrator server 1240 assigns workloads to managed nodes 1260. Orchestrator server 1240 may initially assign the workloads to managed nodes 1260 according to any suitable scheme (e.g., randomly, based on a predefined sequence, etc.). In block 1806, orchestrator server 1240 receives efficiency data 1504 from managed nodes 1260. In doing so, in the illustrative embodiment, orchestrator server 1240 receives pipeline utilization map data 1408 from each managed node 1260, as indicated in block 1808. Additionally, in the illustrative embodiment, orchestrator server 1240 receives workload thread fingerprint data (e.g., fingerprint data 1404) from managed nodes 1260, as indicated in block 1810.

Subsequently, in block 1812, orchestrator server 1240 determines adjustments to improve the efficiency of the execution of the workload threads by managed nodes 1260. In doing so, as indicated in block 1814, in the illustrative embodiment, orchestrator server 1240 may identify matches of cores 1320 of managed nodes 1260 to workload threads, such as by identifying the present and/or predicted capacities of the cores 1320 and identifying threads that would execute more efficiently when matched with the identified capacities of the cores 1320, similar to blocks 1628, 1630, and 1644 of method 1600 of FIGS. 16-17. As indicated in block 1816, orchestrator server 1240 may determine priority adjustments for the workload threads as a function of the thread fingerprint data 1404 included in the efficiency data 1504, similar to block 1642 of FIG. 17. Additionally, as indicated in block 1818, orchestrator server 1240 may determine a reassignment of the threads of a workload to another managed node 1260, such as if a core 1320 on one managed node 1260 is stalled and one or more cores 1320 on another managed node 1260 have the capacity to increase the execution efficiency of the threads presently assigned to the stalled core 1320. Method 1800 then advances to block 1820 of FIG. 19, in which orchestrator server 1240 provides the determined adjustments to managed nodes 1260.
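By way of non-limiting illustration, the cross-node reassignment decision of block 1818 might be sketched as follows; the threshold and the node-scoring rule are assumptions introduced here for illustration:

    # A sketch of moving threads off stalled cores to a node whose cores show
    # spare capacity, based on the received efficiency data.
    STALL_CPI_THRESHOLD = 4.0  # same assumed threshold as the earlier sketch

    def reassignment_candidates(efficiency_data: dict) -> list:
        """efficiency_data: node id -> {core id: cycles per instruction}.
        Returns (stalled node, stalled core, target node) triples."""
        moves = []
        for node, cores in efficiency_data.items():
            for core, cpi in cores.items():
                if cpi > STALL_CPI_THRESHOLD:
                    # Choose the other node whose best core has the lowest CPI.
                    target = min((n for n in efficiency_data if n != node),
                                 key=lambda n: min(efficiency_data[n].values()),
                                 default=None)
                    if target is not None:
                        moves.append((node, core, target))
        return moves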
Referring now to FIG. 19, in providing the determined adjustments to managed nodes 1260, orchestrator server 1240 may send the identified matches of cores to workload threads (e.g., the matches identified in block 1814) to one or more of the managed nodes 1260, as indicated in block 1822. In sending the identified matches, orchestrator server 1240 may send a request to reassign a workload thread to another core 1320 of the same processor or to a core 1320 of a different processor in the same managed node 1260, as indicated in block 1824. As indicated in block 1826, orchestrator server 1240 may send a request to assign complementary threads to the same core 1320. For example, as indicated in block 1828, orchestrator server 1240 may send a request to schedule a front-end bound thread to execute on the same core 1320 as a back-end bound thread. Additionally or alternatively, as indicated in block 1830, in providing the determined adjustments, orchestrator server 1240 may send the workload thread priority adjustments determined in block 1816 of FIG. 18 to one or more of the managed nodes 1260. Additionally or alternatively, orchestrator server 1240 may reassign a workload to another managed node 1260, as indicated in block 1832, pursuant to a determination to do so in block 1818 of FIG. 18, discussed above. Method 1800 then loops back to block 1802 of FIG. 18, in which orchestrator server 1240 assigns any additional workloads to managed nodes 1260.

EXAMPLES
Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below. Example 1 includes a managed node to manage the execution efficiency of workloads assigned to the managed node, the managed node comprising: one or more processors, wherein each processor includes a plurality of cores; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the managed node to: execute threads of a workload assigned to the managed node; generate telemetry data indicative of the execution efficiency of the threads, wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; determine, based on the telemetry data, an adjustment to the configuration of the threads to improve the execution efficiency of the threads; and apply the identified adjustments. Example 2 includes the subject matter of Example 1, and wherein generating the telemetry data includes identifying a current pipeline stage for each thread using counters associated with each stage of the pipeline of each core. Example 3 includes the subject matter of any of Examples 1 and 2, and wherein the plurality of instructions, when executed, cause the managed node to analyze the telemetry data to determine the execution efficiency of the threads. Example 4 includes the subject matter of any of Examples 1-3, and wherein determining the execution efficiency includes determining cycles per instruction for each core. Example 5 includes the subject matter of any of Examples 1-4, and wherein the plurality of instructions, when executed, cause the managed node to compare the number of cycles per instruction with a predefined number of cycles per instruction to determine whether one or more of the cores are stalled. Example 6 includes the subject matter of any of Examples 1-5, and wherein determining the efficiency includes generating a fingerprint indicative of a pattern of usage by each thread of the pipeline stages of the corresponding core over a predefined period of time. Example 7 includes the subject matter of any of Examples 1-6, and wherein determining the efficiency includes determining a current capacity of each core and a predicted capacity of each core from the generated fingerprints. Example 8 includes the subject matter of any of Examples 1-7, and wherein determining the efficiency includes generating a map indicative of pipeline stage utilization for each thread on each core of the one or more processors. Example 9 includes the subject matter of any of Examples 1-8, and wherein determining the efficiency includes determining a
pipeline stage that each thread primarily utilizes. Example 10 includes the subject matter of any of Examples 1-9, and wherein determining the efficiency includes determining a current capacity of each core and a predicted capacity of each core based on the determined pipeline stage that each thread primarily utilizes. Example 11 includes the subject matter of any of Examples 1-10, and wherein the plurality of instructions, when executed, further cause the managed node to provide efficiency data indicative of the determined efficiency to an orchestrator server. Example 12 includes the subject matter of any of Examples 1-11, and wherein providing the efficiency data includes providing, to the orchestrator server, a map indicative of the pipeline stage utilization of each thread on each core of the one or more processors. Example 13 includes the subject matter of any of Examples 1-12, and wherein providing the efficiency data includes providing, to the orchestrator server, a fingerprint indicative of a pattern of usage by each thread of the pipeline stages of the corresponding core over a predefined period of time. Example 14 includes the subject matter of any of Examples 1-13, and wherein determining an adjustment includes determining an adjustment to reduce the cycles per instruction in one or more of the cores. Example 15 includes the subject matter of any of Examples 1-14, and wherein determining the adjustment includes determining an adjustment to a priority of one or more of the threads based on an identification of the pipeline stage that each thread primarily uses. Example 16 includes the subject matter of any of Examples 1-15, and wherein determining the adjustment includes determining to reassign one or more of the threads to another processor or to another core of the one or more processors. Example 17 includes the subject matter of any of Examples 1-16, and wherein determining the reassignment includes determining a reassignment that matches complementary threads on one or more of the cores. Example 18 includes the subject matter of any of Examples 1-17, and wherein matching the complementary threads includes matching a front-end bound thread with a back-end bound thread on the same core. Example 19 includes the subject matter of any of Examples 1-18, and wherein the plurality of instructions, when executed, cause the managed node to receive adjustment data indicative of an adjustment determined by the orchestrator server. Example 20 includes the subject matter of any of Examples 1-19, and wherein generating the telemetry data includes obtaining performance data from communication circuitry of the managed node. Example 21 includes a method for managing the execution efficiency of a workload assigned to a managed node, the method comprising: executing, by the managed node with one or more processors that each include a plurality of cores, threads of a workload assigned to the managed node; generating, by the managed node, telemetry data indicative of the execution efficiency of the threads, wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; determining, by the managed node and as a function of the telemetry data, an adjustment to the configuration of the threads to improve the execution efficiency of the threads; and applying, by the managed node, the determined adjustment. Example 22 includes the subject matter of Example 21, and wherein generating the telemetry data includes identifying a current pipeline stage for each thread
Example 19 includes the subject matter of any of Examples 1-18, and wherein the plurality of instructions, when executed, cause the managed node to receive adjustment data indicative of the adjustment determined by the orchestrator server.
Example 20 includes the subject matter of any of Examples 1-19, and wherein generating telemetry data includes obtaining performance data from a communication circuit of the managed node.
Example 21 includes a method for managing execution efficiency of a workload assigned to a managed node, the method comprising: executing, by the managed node having one or more processors that each include a plurality of cores, a thread of a workload assigned to the managed node; generating, by the managed node, telemetry data indicative of the execution efficiency of the thread, wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; determining, by the managed node and as a function of the telemetry data, an adjustment to the configuration of the thread to improve the execution efficiency of the thread; and applying, by the managed node, the determined adjustment.
Example 22 includes the subject matter of Example 21, and wherein generating the telemetry data includes identifying a current pipeline stage for each thread using a counter associated with each stage of the pipeline of each core.
Example 23 includes the subject matter of any of Examples 21 and 22, and further includes analyzing, by the managed node, the telemetry data to determine an execution efficiency of the thread.
Example 24 includes the subject matter of any of Examples 21-23, and wherein determining the execution efficiency includes determining cycles per instruction for each core.
Example 25 includes the subject matter of any of Examples 21-24, and further includes comparing, by the managed node, the number of cycles per instruction to a predefined number of cycles per instruction to determine whether one or more of the cores are stalled.
Example 26 includes the subject matter of any of Examples 21-25, and wherein determining the efficiency includes generating a fingerprint indicative of a pattern of each thread's usage of a pipeline stage of the corresponding core over a predefined period of time.
Example 27 includes the subject matter of any of Examples 21-26, and wherein determining the efficiency includes determining a current capacity of each core and a predicted capacity of each core from the generated fingerprint.
Example 28 includes the subject matter of any of Examples 21-27, and wherein determining the efficiency includes generating a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.
Example 29 includes the subject matter of any of Examples 21-28, and wherein determining the efficiency comprises determining a pipeline stage that each thread primarily utilizes.
Example 30 includes the subject matter of any of Examples 21-29, and wherein determining the efficiency includes determining a current capacity of each core and a predicted capacity of each core based on the determined pipeline stage that each thread primarily utilizes.
Example 31 includes the subject matter of any of Examples 21-30, and further includes providing, by the managed node, efficiency data indicative of the determined efficiency to an orchestrator server.
Example 32 includes the subject matter of any of Examples 21-31, and wherein providing the efficiency data includes providing the orchestrator server with a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.
Example 33 includes the subject matter of any of Examples 21-32, and wherein providing the efficiency data comprises providing to the orchestrator server a fingerprint indicative of a pattern of each thread's usage of the corresponding core's pipeline stage over a predefined period of time.
Example 34 includes the subject matter of any of Examples 21-33, and wherein determining an adjustment includes determining an adjustment to reduce the cycles per instruction in one or more of the cores.
Example 35 includes the subject matter of any of Examples 21-34, and wherein determining the adjustment includes determining an adjustment to the priority of one or more of the threads based on an identification of a pipeline stage that each thread primarily uses.
Example 36 includes the subject matter of any of Examples 21-35, and wherein determining the adjustment comprises determining to reassign one or more of the threads to another processor or another core of the one or more processors.
Example 37 includes the subject matter of any of Examples 21-36, and wherein determining the reassignment includes determining a reassignment that matches complementary threads with one or more of the cores.
Example 38 includes the subject matter of any of Examples 21-37, and wherein matching the complementary threads includes matching front-end bound threads with back-end bound threads on the same core.
Example 39 includes the subject matter of any of Examples 21-38, and further includes receiving adjustment data indicative of the adjustment determined by the orchestrator server.
Example 40 includes the subject matter of any of Examples 21-39, and wherein generating telemetry data includes obtaining performance data from a communication circuit of the managed node.
Example 41 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a managed node to perform the method of any of Examples 21-40.
Example 42 includes a managed node to manage the execution efficiency of a workload assigned to the managed node, the managed node comprising: one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the managed node to perform the method of any of Examples 21-40.
Example 43 includes a managed node to manage execution efficiency of a workload assigned to the managed node, the managed node comprising means for performing the method of any of Examples 21-40.
Example 44 includes a managed node for managing execution efficiency of a workload assigned to the managed node, the managed node comprising: workload executor circuitry to execute, with one or more processors that each include a plurality of cores, a thread of a workload assigned to the managed node, and to generate telemetry data indicative of the execution efficiency of the thread, wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; and resource manager circuitry to determine, based on the telemetry data, an adjustment to the configuration of the thread to improve the execution efficiency of the thread, and to apply the determined adjustment.
Example 45 includes the subject matter of Example 44, and wherein generating the telemetry data includes identifying a current pipeline stage for each thread using a counter associated with each stage of the pipeline of each core.
Example 46 includes the subject matter of any of Examples 44 and 45, and wherein the resource manager circuitry is further to analyze the telemetry data to determine the execution efficiency of the thread.
Example 47 includes the subject matter of any of Examples 44-46, and wherein determining the execution efficiency includes determining cycles per instruction for each core.
Example 48 includes the subject matter of any of Examples 44-47, and wherein the resource manager circuitry is further to compare the number of cycles per instruction to a predefined number of cycles per instruction to determine whether one or more of the cores are stalled.
Example 49 includes the subject matter of any of Examples 44-48, and wherein determining the efficiency comprises generating a fingerprint indicative of a pattern of each thread's usage of a pipeline stage of the corresponding core over a predefined period of time.
Example 50 includes the subject matter of any of Examples 44-49, and wherein determining the efficiency includes determining a current capacity of each core and a predicted capacity of each core from the generated fingerprint.
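The fingerprint of Examples 49 and 50 (and Examples 6 and 7) can be illustrated with a brief sketch. The stage names, the sampling source, and the normalization are assumptions made for this illustration; the disclosure does not specify them.

```python
# Illustrative sketch of the per-thread "fingerprint": the share of sampling
# intervals a thread spent in each pipeline stage over a predefined window,
# built from the per-stage counters described in Examples 2 and 22.

from collections import Counter

STAGES = ("fetch", "decode", "execute", "retire")  # assumed stage names

def fingerprint(stage_samples: list[str]) -> dict[str, float]:
    """Normalized per-stage usage pattern over the sampling window."""
    counts = Counter(stage_samples)
    total = sum(counts.values()) or 1
    return {stage: counts[stage] / total for stage in STAGES}

def primary_stage(fp: dict[str, float]) -> str:
    # Example 52: the pipeline stage the thread primarily utilizes.
    return max(fp, key=fp.get)

fp = fingerprint(["fetch"] * 6 + ["execute"] * 3 + ["retire"])
print(primary_stage(fp))  # 'fetch' -> the thread looks front-end bound
```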
Example 51 includes the subject matter of any of Examples 44-50, and wherein determining the efficiency comprises generating a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.
Example 52 includes the subject matter of any of Examples 44-51, and wherein determining the efficiency comprises determining a pipeline stage that each thread primarily utilizes.
Example 53 includes the subject matter of any of Examples 44-52, and wherein determining the efficiency includes determining a current capacity of each core and a predicted capacity of each core based on the determined pipeline stage that each thread primarily utilizes.
Example 54 includes the subject matter of any of Examples 44-53, and wherein the resource manager circuitry is further to provide efficiency data indicative of the determined efficiency to an orchestrator server.
Example 55 includes the subject matter of any of Examples 44-54, and wherein providing the efficiency data includes providing the orchestrator server with a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.
Example 56 includes the subject matter of any of Examples 44-55, and wherein providing the efficiency data includes providing to the orchestrator server a fingerprint indicative of a pattern of each thread's usage of the corresponding core's pipeline stage over a predefined period of time.
Example 57 includes the subject matter of any of Examples 44-56, and wherein determining an adjustment includes determining an adjustment to reduce the cycles per instruction in one or more of the cores.
Example 58 includes the subject matter of any of Examples 44-57, and wherein determining the adjustment includes determining an adjustment to the priority of one or more of the threads based on an identification of a pipeline stage that each thread primarily uses.
Example 59 includes the subject matter of any of Examples 44-58, and wherein determining the adjustment comprises determining to reassign one or more of the threads to another processor or another core of the one or more processors.
Example 60 includes the subject matter of any of Examples 44-59, and wherein determining the reassignment includes determining a reassignment that matches complementary threads with one or more of the cores.
Example 61 includes the subject matter of any of Examples 44-60, and wherein matching the complementary threads includes matching front-end bound threads with back-end bound threads on the same core.
Example 62 includes the subject matter of any of Examples 44-61, and further includes network communicator circuitry to receive adjustment data indicative of the adjustment determined by the orchestrator server.
Example 63 includes the subject matter of any of Examples 44-62, and wherein generating telemetry data includes obtaining performance data from a communication circuit of the managed node.
Example 64 includes a managed node to manage execution efficiency of a workload assigned to the managed node, the managed node comprising: circuitry for executing, with one or more processors that each include a plurality of cores, a thread of a workload assigned to the managed node; circuitry for generating telemetry data indicative of the execution efficiency of the thread, wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; means for determining, as a function of the telemetry data, an adjustment to the configuration of the thread to improve the execution efficiency of the thread; and means for applying the determined adjustment.
Example 65 includes the subject matter of Example 64, and wherein the circuitry for generating the telemetry data comprises circuitry for identifying a current pipeline stage for each thread using a counter associated with each stage of the pipeline of each core.
Example 66 includes the subject matter of any of Examples 64 and 65, and further includes means for analyzing the telemetry data to determine the execution efficiency of the thread.
Example 67 includes the subject matter of any of Examples 64-66, and wherein the means for determining the execution efficiency comprises means for determining cycles per instruction for each core.
Example 68 includes the subject matter of any of Examples 64-67, and further includes means for comparing the number of cycles per instruction to a predefined number of cycles per instruction to determine whether one or more of the cores are stalled.
Example 69 includes the subject matter of any of Examples 64-68, and wherein the means for determining the efficiency comprises means for generating a fingerprint indicative of a pattern of each thread's usage of a pipeline stage of the corresponding core over a predefined period of time.
Example 70 includes the subject matter of any of Examples 64-69, and wherein the means for determining the efficiency comprises means for determining a current capacity of each core and a predicted capacity of each core from the generated fingerprint.
Example 71 includes the subject matter of any of Examples 64-70, and wherein the means for determining the efficiency comprises means for generating a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.
Example 72 includes the subject matter of any of Examples 64-71, and wherein the means for determining the efficiency comprises means for determining a pipeline stage that each thread primarily utilizes.
Example 73 includes the subject matter of any of Examples 64-72, and wherein the means for determining the efficiency comprises means for determining a current capacity of each core and a predicted capacity of each core based on the determined pipeline stage that each thread primarily utilizes.
Example 74 includes the subject matter of any of Examples 64-73, and further includes means for providing efficiency data indicative of the determined efficiency to an orchestrator server.
Example 75 includes the subject matter of any of Examples 64-74, and wherein the means for providing the efficiency data comprises means for providing the orchestrator server with a map indicative of pipeline stage utilization for each thread on each core of the one or more processors.
Example 76 includes the subject matter of any of Examples 64-75, and wherein the means for providing the efficiency data comprises means for providing to the orchestrator server a fingerprint indicative of a pattern of each thread's usage of the corresponding core's pipeline stage over a predefined period of time.
Example 77 includes the subject matter of any of Examples 64-76, and wherein the means for determining an adjustment comprises means for determining an adjustment to reduce the cycles per instruction in one or more of the cores.
Example 78 includes the subject matter of any of Examples 64-77, and wherein the means for determining the adjustment comprises means for determining an adjustment to the priority of one or more of the threads based on an identification of the pipeline stage that each thread primarily uses.
Example 79 includes the subject matter of any of Examples 64-78, and wherein the means for determining the adjustment comprises means for determining to reassign one or more of the threads to another processor or another core of the one or more processors.
Example 80 includes the subject matter of any of Examples 64-79, and wherein the means for determining a reassignment comprises means for determining a reassignment that matches complementary threads to one or more of the cores.
Example 81 includes the subject matter of any of Examples 64-80, and wherein the means for matching complementary threads comprises means for matching front-end bound threads with back-end bound threads on the same core.
Example 82 includes the subject matter of any of Examples 64-81, and further includes circuitry for receiving adjustment data indicative of the adjustment determined by the orchestrator server.
Example 83 includes the subject matter of any of Examples 64-82, and wherein the circuitry for generating telemetry data comprises circuitry for obtaining performance data from communication circuitry of the managed node.
Example 84 includes an orchestrator server to manage the execution efficiency of workloads assigned to a set of managed nodes, the orchestrator server comprising: one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to: assign a workload to the set of managed nodes; receive efficiency data from the managed nodes, wherein the efficiency data indicates the efficiency of execution of threads of a workload by a core of a processor in a managed node, and wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; determine an adjustment to a thread configuration to improve execution efficiency in the managed node; and provide the determined adjustment to the managed node.
Example 85 includes the subject matter of Example 84, and wherein receiving efficiency data from the managed node comprises receiving at least one of a map indicative of pipeline stage utilization of each thread on each core of the managed node or thread fingerprint data indicative of a pattern of each thread's usage of the pipeline stages of the corresponding core within a predefined period of time.
Example 86 includes the subject matter of any of Examples 84 and 85, and wherein determining the adjustment includes identifying a match of workload threads to cores of the managed nodes.
Example 87 includes the subject matter of any of Examples 84-86, and wherein providing the determined adjustment includes sending the identified match to the managed node.
Example 88 includes the subject matter of any of Examples 84-87, and wherein determining the adjustment comprises determining an adjustment to the priority of the threads based on thread fingerprint data indicative of a pattern of each thread's usage of the corresponding core's pipeline stage within a predefined period of time.
Example 89 includes the subject matter of any of Examples 84-88, and wherein determining the adjustment comprises determining a reassignment of a workload from one managed node to another managed node.
Example 90 includes the subject matter of any of Examples 84-89, and wherein providing the determined adjustment includes sending a request to reassign the thread to another processor or another core within the managed node.
Example 91 includes the subject matter of any of Examples 84-90, and wherein providing the determined adjustment includes sending a request to assign complementary threads to the same core.
Example 92 includes the subject matter of any of Examples 84-91, and wherein providing the determined adjustment includes sending a request to schedule the front-end bound thread and the back-end bound thread on the same core.
Example 93 includes the subject matter of any of Examples 84-92, and wherein providing the determined adjustment includes sending the workload thread priority adjustment to the at least one managed node.
Example 94 includes the subject matter of any of Examples 84-93, and wherein providing the determined adjustment includes reassigning a workload from one managed node to another managed node.
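The orchestrator-side decisions of Examples 89-94 admit a compact sketch: given efficiency data from a managed node, either request a thread priority adjustment or reassign the workload elsewhere. The thresholds and message shapes below are invented for illustration and are not part of the disclosure.

```python
# Hedged sketch of Examples 89-94: choose between a workload reassignment
# and a thread priority adjustment based on a node's mean CPI. Thresholds
# and the dictionary "message" format are illustrative assumptions.

def decide_adjustment(node_id: int, mean_cpi: float,
                      priority_cpi: float = 1.5,
                      reassign_cpi: float = 4.0) -> dict:
    if mean_cpi > reassign_cpi:
        # Examples 89 and 94: move the workload to another managed node.
        return {"action": "reassign_workload", "from_node": node_id}
    if mean_cpi > priority_cpi:
        # Example 93: send a workload thread priority adjustment.
        return {"action": "adjust_thread_priority", "node": node_id}
    return {"action": "none", "node": node_id}

print(decide_adjustment(7, mean_cpi=2.3))
# {'action': 'adjust_thread_priority', 'node': 7}
```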
Example 95 includes a method of managing the execution efficiency of workloads assigned to a set of managed nodes, the method comprising: assigning, by an orchestrator server, workloads to the set of managed nodes; receiving, by the orchestrator server, efficiency data from a managed node, wherein the efficiency data is indicative of the efficiency of execution of threads of a workload by cores of a processor in the managed node, and wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; determining, by the orchestrator server, an adjustment to a thread configuration to improve execution efficiency in the managed node; and providing, by the orchestrator server, the determined adjustment to the managed node.
Example 96 includes the subject matter of Example 95, and wherein receiving efficiency data from the managed node comprises receiving at least one of a map indicative of pipeline stage utilization of each thread on each core of the managed node or thread fingerprint data indicative of a pattern of each thread's usage of the pipeline stages of the corresponding core within a predefined period of time.
Example 97 includes the subject matter of any of Examples 95 and 96, and wherein determining the adjustment includes identifying a match of workload threads to cores of the managed nodes.
Example 98 includes the subject matter of any of Examples 95-97, and wherein providing the determined adjustment includes sending the identified match to the managed node.
Example 99 includes the subject matter of any of Examples 95-98, and wherein determining the adjustment comprises determining an adjustment to the priority of the threads based on thread fingerprint data indicative of a pattern of each thread's usage of the corresponding core's pipeline stage within a predefined period of time.
Example 100 includes the subject matter of any of Examples 95-99, and wherein determining the adjustment comprises determining a reassignment of a workload from one managed node to another managed node.
Example 101 includes the subject matter of any of Examples 95-100, and wherein providing the determined adjustment includes sending a request to reassign the thread to another processor or another core within the managed node.
Example 102 includes the subject matter of any of Examples 95-101, and wherein providing the determined adjustment includes sending a request to assign complementary threads to the same core.
Example 103 includes the subject matter of any of Examples 95-102, and wherein providing the determined adjustment includes sending a request to schedule the front-end bound thread and the back-end bound thread on the same core.
Example 104 includes the subject matter of any of Examples 95-103, and wherein providing the determined adjustment includes sending the workload thread priority adjustment to the at least one managed node.
Example 105 includes the subject matter of any of Examples 95-104, and wherein providing the determined adjustment includes reassigning a workload from one managed node to another managed node.
Example 106 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause the orchestrator server to perform the method of any of Examples 95-105.
Example 107 includes an orchestrator server to manage the execution efficiency of workloads assigned to a set of managed nodes, the orchestrator server comprising: one or more processors; and one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the orchestrator server to perform the method of any of Examples 95-105.
Example 108 includes an orchestrator server to manage execution efficiency of workloads assigned to a set of managed nodes, the orchestrator server comprising means for performing the method of any of Examples 95-105.
Example 109 includes an orchestrator server to manage execution efficiency of workloads assigned to a set of managed nodes, the orchestrator server comprising: workload assigner circuitry to assign workloads to the set of managed nodes; network communicator circuitry to receive efficiency data from a managed node, wherein the efficiency data is indicative of the efficiency of execution of threads of a workload by cores of a processor in the managed node, and wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; and efficiency manager circuitry to determine an adjustment to the thread configuration to improve execution efficiency in the managed node and to provide the determined adjustment to the managed node.
Example 110 includes the subject matter of Example 109, and wherein receiving efficiency data from the managed node comprises receiving at least one of a map indicative of pipeline stage utilization of each thread on each core of the managed node or thread fingerprint data indicative of a pattern of each thread's usage of the pipeline stages of the corresponding core within a predefined period of time.
Example 111 includes the subject matter of any of Examples 109 and 110, and wherein determining the adjustment includes identifying a match of workload threads to cores of the managed nodes.
Example 112 includes the subject matter of any of Examples 109-111, and wherein providing the determined adjustment includes sending the identified match to the managed node.
Example 113 includes the subject matter of any of Examples 109-112, and wherein determining the adjustment comprises determining an adjustment to the thread priority based on thread fingerprint data indicative of a pattern of each thread's usage of the corresponding core's pipeline stage within the predefined period of time.
Example 114 includes the subject matter of any of Examples 109-113, and wherein determining the adjustment comprises determining a reassignment of a workload from one managed node to another managed node.
Example 115 includes the subject matter of any of Examples 109-114, and wherein providing the determined adjustment includes sending a request to reassign the thread to another processor or another core within the managed node.
Example 116 includes the subject matter of any of Examples 109-115, and wherein providing the determined adjustment includes sending a request to assign complementary threads to the same core.
Example 117 includes the subject matter of any of Examples 109-116, and wherein providing the determined adjustment includes sending a request to schedule the front-end bound thread and the back-end bound thread on the same core.
Example 118 includes the subject matter of any of Examples 109-117, and wherein providing the determined adjustment includes sending the workload thread priority adjustment to the at least one managed node.
Example 119 includes the subject matter of any of Examples 109-118, and wherein providing the determined adjustment includes reassigning a workload from one managed node to another managed node.
Example 120 includes an orchestrator server to manage execution efficiency of workloads assigned to a set of managed nodes, the orchestrator server comprising: means for assigning workloads to the set of managed nodes; circuitry for receiving efficiency data from a managed node, wherein the efficiency data is indicative of the efficiency of execution of threads of a workload by cores of a processor in the managed node, and wherein the efficiency indicates the number of cycles per instruction executed by the corresponding core; means for determining an adjustment to the thread configuration to improve execution efficiency in the managed node; and circuitry for providing the determined adjustment to the managed node.
Example 121 includes the subject matter of Example 120, and wherein the circuitry for receiving efficiency data from the managed node comprises circuitry for receiving at least one of a map indicative of pipeline stage utilization of each thread on each core of the managed node or thread fingerprint data indicative of a pattern of each thread's usage of the pipeline stages of the corresponding core within a predefined period of time.
Example 122 includes the subject matter of any of Examples 120 and 121, and wherein the means for determining the adjustment includes means for identifying a match of workload threads to cores of the managed nodes.
Example 123 includes the subject matter of any of Examples 120-122, and wherein the circuitry for providing the determined adjustment comprises circuitry for sending the identified match to the managed node.
Example 124 includes the subject matter of any of Examples 120-123, and wherein the means for determining the adjustment comprises means for determining an adjustment to thread priorities according to thread fingerprint data indicative of a pattern of each thread's usage of a pipeline stage of the corresponding core within a predefined period of time.
Example 125 includes the subject matter of any of Examples 120-124, and wherein the means for determining an adjustment comprises means for determining a reassignment of a workload from one managed node to another managed node.
Example 126 includes the subject matter of any of Examples 120-125, and wherein the circuitry for providing the determined adjustment comprises circuitry for sending a request to reassign the thread to another processor or another core within the managed node.
Example 127 includes the subject matter of any of Examples 120-126, and wherein the circuitry for providing the determined adjustment comprises circuitry for sending a request to assign complementary threads to the same core.
Example 128 includes the subject matter of any of Examples 120-127, and wherein the circuitry for providing the determined adjustment comprises circuitry for sending a request to schedule the front-end bound thread and the back-end bound thread on the same core.
Example 129 includes the subject matter of any of Examples 120-128, and wherein the circuitry for providing the determined adjustment comprises circuitry for sending the workload thread priority adjustment to at least one managed node.
Example 130 includes the subject matter of any of Examples 120-129, and wherein the circuitry for providing the determined adjustment comprises circuitry for reassigning a workload from one managed node to another managed node.
A method and an apparatus for uniform electroless plating of layers onto exposed metallizations in integrated circuits such as bond pads. The apparatus provides means for holding a plurality of wafers, and rotating each wafer at constant speed and synchronously within the plurality. Immersed in a plating solution flowing in substantially laminar motion and at constant speed, the method creates periodic superposition of the directions and speeds of the motion of the wafers and the motion of the plating solution. The invention creates periodically changing wafer portions where the directions and speeds are additive and where the directions and speeds are opposed and subtractive. Consequently, highly uniform layers are electrolessly plated onto the exposed metallizations of bond pads. If the plated layers are bondable metals, the process transforms otherwise unbondable pad metallization into bondable pads.
1. A method for controlled electroless plating of uniform metal layers onto exposed metallizations in integrated circuits positioned on the active surface of semiconductor wafers, the method comprising: maintaining a plurality of said wafers approximately parallel to each other at predetermined distances; immersing said wafers into an electroless plating solution flowing in laminar motion at constant speed substantially parallel to said active surface of said wafers; rotating each of said wafers at constant speed and synchronously with each other; and creating periodic relative motion in changing directions between said plating solution and said wafers, thereby uniformly plating layers onto said exposed metallizations by controlled electroless deposition.
2. The method according to Claim 1 wherein said exposed metallizations are non-oxidized copper metallizations of bond pads positioned in said integrated circuits having copper metallizations.
3. The method according to Claim 1 or Claim 2 wherein said plurality of said wafers comprises between 10 and 30 wafers.
4. The method according to any preceding Claim wherein said relative motion comprises a periodic superposition of directions and speeds of the motion of said wafers and the motion of said solution, thus creating periodically changing wafer portions where the directions and speeds are additive and where the directions and speeds are opposed and subtractive.
5. The method according to any preceding Claim further comprising the steps of: inserting the wafers into a clean-up or presoak bath; removing the wafers from the clean-up or presoak bath; and inserting the wafers into the plating solution.
6. An apparatus for controlled electroless plating of uniform layers onto exposed metallizations in integrated circuits positioned on the active surface of semiconductor wafers, comprising: means for holding a plurality of said wafers approximately parallel to each other at predetermined distances; means for rotating each wafer of said plurality; means for electroless plating in a solution flowing substantially in laminar motion at constant speed substantially parallel to said active surface of said wafers; and means for creating periodic relative motion in changing directions between said plating solution and said wafers, whereby uniformly plated layers are electrolessly deposited onto said exposed metallizations.
7. The apparatus according to Claim 6 wherein said means for rotating wafers creates constant wafer speed and synchronous rotation between wafers.
8. The apparatus according to Claim 6 or Claim 7 wherein said holding means comprises a plurality of grooved rollers positioned parallel to each other, each of said rollers having grooves around said rollers, shaped to support said wafers, the respective grooves of each roller positioned in a plane suitable for holding one of said wafers.
9. The apparatus according to Claim 8 wherein said plurality of rollers comprises three rollers.
10. The apparatus according to any of Claims 6 to 9 wherein said rotating means comprises a central sun gear driving said grooved rollers positioned in parallel around said central gear.
11. The apparatus according to any of Claims 6 to 10 further including a motor associated with the apparatus which rotates the apparatus in a plating solution.
FIELD OF THE INVENTION
The present invention is related in general to the field of semiconductor devices and processes, and more specifically to a fixture and process for electroless plating of bondable metal caps onto bond pads of integrated circuits having copper interconnecting metallization.
DESCRIPTION OF THE RELATED ART
In integrated circuits (IC) technology, pure or doped aluminum has been the metallization of choice for interconnection and bond pads for more than four decades. Main advantages of aluminum include ease of deposition and patterning. Further, the technology of bonding wires made of gold, copper, or aluminum to the aluminum bond pads has been developed to a high level of automation, miniaturization, and reliability. Examples of the high technical standard of wire bonding to aluminum can be found in U.S. Patents # 5,455,195, issued on Oct. 3, 1995 (Ramsey et al., "Method for Obtaining Metallurgical Stability in Integrated Circuit Conductive Bonds"); # 5,244,140, issued on Sep. 14, 1993 (Ramsey et al., "Ultrasonic Bonding Process Beyond 125 kHz"); # 5,201,454, issued on Apr. 13, 1993 (Alfaro et al., "Process for Enhanced Intermetallic Growth in IC Interconnections"); and # 5,023,697, issued on Jun. 11, 1991 (Tsumura, "Semiconductor Device with Copper Wire Ball Bonding").
In the continuing trend to miniaturize the ICs, the RC time constant of the interconnection between active circuit elements increasingly dominates the achievable IC speed-power product. Consequently, the relatively high resistivity of the interconnecting aluminum now appears inferior to the lower resistivity of metals such as copper. Further, the pronounced sensitivity of aluminum to electromigration is becoming a serious obstacle. Consequently, there is now a strong drive in the semiconductor industry to employ copper as the preferred interconnecting metal, based on its higher electrical conductivity and lower electromigration sensitivity. From the standpoint of the mature aluminum interconnection technology, however, this shift to copper is a significant technological challenge.
Copper has to be shielded from diffusing into the silicon base material of the ICs in order to protect the circuits from the carrier-lifetime-killing characteristic of copper atoms positioned in the silicon lattice. For bond pads made of copper, the formation of thin copper(I)oxide films during the manufacturing process flow has to be prevented, since these films severely inhibit reliable attachment of bonding wires, especially for conventional gold-wire ball bonding. In contrast to aluminum oxide films overlying metallic aluminum, copper oxide films overlying metallic copper cannot easily be broken by a combination of thermocompression and ultrasonic energy applied in the bonding process. As a further difficulty, bare copper bond pads are susceptible to corrosion.
In order to overcome these problems, a process has been disclosed to cap the clean copper bond pad with a layer of aluminum and thus re-construct the traditional situation of an aluminum pad to be bonded by conventional gold-wire ball bonding. A suitable bonding process is described in U.S. Patent # 5,785,236, issued on Jul. 28, 1998 (Cheung et al., "Advanced Copper Interconnect System that is Compatible with Existing IC Wire Bonding Technology"). The described approach, however, has several shortcomings.
First, the fabrication cost of the aluminum cap is higher than desired, since the process requires additional steps for depositing metal, patterning, etching, and cleaning.
Second, the cap must be thick enough to prevent copper from diffusing through the cap metal and possibly poisoning the IC transistors. Third, the aluminum used for the cap is soft and thus gets severely damaged by the markings of the multiprobe contacts in electrical testing. This damage, in turn, becomes so dominant in the ever decreasing size of the bond pads that the subsequent ball bond attachment is no longer reliable.
A low-cost structure and method for capping the copper bond pads of copper-metallized ICs has been disclosed in European Patent Application EP 01000021.4, filed on 19 Feb. 2001. The present invention is related to that application. The structure provides a metal layer plated onto the copper, which impedes the up-diffusion of copper. Of several possibilities, nickel is a preferred choice. This layer is topped by a bondable metal layer, which also impedes the up-diffusion of the barrier metal. Of several possibilities, gold is a preferred choice. Metallurgical connections can then be performed by conventional wire bonding.
It is difficult, though, to plate these bond pad caps uniformly in electroless deposition systems, because electroless deposition is affected by local reactant concentrations and by the agitation velocities of the aqueous solution. Deposition depletes the reactants in areas around the bond pads. Increasing the agitation of the solution only exacerbates the deposition non-uniformity, which is influenced by the flow direction of the solution. The problem is further complicated when a whole batch of wafers is to be plated simultaneously in order to reduce cost, since known control methods have been applied only to process single wafers under applied electrical bias. See, for example, U.S. Patents # 5,024,746, issued Jun. 18, 1991, and # 4,931,149, issued Jun. 5, 1990 (Stierman et al., "Fixture and a Method for Plating Contact Bumps for Integrated Circuits").
An urgent need has arisen for a reliable method of plating metal caps over copper bond pads which combines minimum fabrication cost with maximum plating control of all layers to be deposited. The plating method should be flexible enough to be applied to different IC product families and a wide spectrum of design and process variations. Preferably, these innovations should be accomplished while shortening production cycle time and increasing throughput, and without the need of expensive additional manufacturing equipment.
SUMMARY OF THE INVENTION
The present invention provides a method and an apparatus for uniform electroless plating of layers onto exposed metallizations in integrated circuits such as bond pads. The apparatus provides means for holding a plurality of wafers, and rotating each wafer at constant speed and synchronously within the plurality. Immersed in a plating solution flowing in substantially laminar motion and at constant speed, the method creates periodic superposition of the directions and speeds of the motion of the wafers and the motion of the plating solution. The invention creates periodically changing wafer portions where the directions and speeds are additive and where the directions and speeds are opposed and subtractive. Consequently, highly uniform layers are electrolessly plated onto the exposed metallizations of bond pads.
If the plated layers are bondable metals, the process transforms otherwise unbondable bond pad metallization into bondable pads.
The present invention is related to high density and high speed ICs with copper interconnecting metallization, especially those having high numbers of copper metallized inputs/outputs, or "bond pads". These circuits can be found in many device families such as processors, digital and analog devices, logic devices, high frequency and high power devices, and in both large and small area chip categories.
The present invention is applicable to bond pad area reduction and thus supports the shrinking of IC chips. Consequently, the invention helps to alleviate the space constraint of continually shrinking applications such as cellular communication, pagers, hard disk drives, laptop computers and medical instrumentation.
Another aspect of the invention is to deposit the bond pad metal caps by the self-defining process of electroless plating, thus avoiding costly photolithographic and alignment techniques.
Another aspect of the invention is to accomplish the control and stability needed for successful electroless metal deposition.
Another aspect of the invention is to advance the process and reliability of wafer-level multi-probing by eliminating probe marks and subsequent bonding difficulties.
The invention provides design and process concepts which are flexible so that they can be applied to many families of semiconductor products, and are general so that they can be applied to several generations of products.
The invention only uses designs and processes most commonly employed and accepted in the fabrication of IC devices, thus avoiding the cost of new capital investment and using the installed fabrication equipment base.
These advantages have been achieved by the teachings of the invention concerning selection criteria, process flows and controls suitable for mass production. Various modifications have been successfully employed to satisfy the requirements of different plating solutions.
In the first embodiment of the invention, an apparatus is disclosed for uniform electroless plating of layers onto exposed metallizations in integrated circuits, such as bond pads, which are positioned on the active surface of semiconductor wafers. The apparatus is suitable for simultaneous processing of a plurality of wafers. It provides rotation of the wafers at constant speed and synchronously, and thus creates relative motion between the wafers and the chemical solution of a plating bath.
In the second embodiment of the invention, a plating apparatus is disclosed which combines the rotation of the wafers with the laminar motion at constant speed of the plating solution. The superposition of rotational and laminar motions and the resulting periodic changes of direction and speed create periodically changing wafer portions where the speeds are additive and where the speeds are subtractive. The resulting controlled electroless deposition of metal creates uniformly plated layers.
In all preferred embodiments, the various metal layers are deposited by electroless plating, thus avoiding the need for expensive photolithographic definition steps.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will now be further described, by way of example, with reference to the preferred and exemplary embodiments illustrated in the figures of the accompanying drawings in which:
FIG. 1 is a schematic side view of the first embodiment of the invention, the apparatus for controlled electroless plating, including a plurality of integrated circuit wafers.
FIG. 2 is a schematic end view of the first embodiment of the invention, the apparatus for controlled electroless plating.
FIG. 3 is a schematic composite side view and cross section of the second embodiment of the invention, the plating tank and apparatus for controlled electroless plating.
FIG. 4 is a schematic composite end view and cross section of the second embodiment of the invention, the plating tank and apparatus for controlled electroless plating.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Illustrating the first embodiment of the invention, generally designated 100, FIG. 1 shows a side view of the apparatus for controlled electroless plating of uniform metal layers onto exposed metallizations on a plurality of integrated circuit (IC) wafers 101. Usually, there are 10 to 30 wafers in a batch. In the fixture 100, the wafers 101 are held approximately parallel to each other at predetermined distances 102. A typical distance is in the range from about 5 to 10 mm and thus several times wider than the thickness of a wafer (about 0.25 to 0.75 mm). At their rims, the wafers are loosely held in grooves 103 of rollers. In FIG. 1, two rollers are shown, the bottom roller 105 and the capture roller 104. The rollers are made of a chemically inert plastic material such as polypropylene. Instead of grooved rollers, toothed rollers may be used. A practical groove is about 2 to 5 mm deep. In the preferred embodiments, there are three rollers (see FIG. 2) employed to contain the wafers.
It is an essential feature of the invention that the rollers can be set in rotational motion by their respective driven gears 104a and 105a, which are driven by a central sun gear 110 (partially obscured in FIG. 1, but fully visible in FIG. 2). With this feature, the turning sun gear 110 drives all rollers at the same speed. Consequently, all wafers 101, contained in the roller grooves 103 and held in secure contact with the roller material by their weight, are rotating in unison at constant speed and in a synchronous manner. For wafers of 200 mm diameter, preferred rotation speeds are in the range of about 0.5 to 5 rpm.
In FIG. 2, fixture 100 is displayed in a schematic end view. All three rollers are indicated by their respective driven gears 104a, 105a and 106a. The position of a 200 mm IC wafer is indicated by dashed line 101a. For practical ease of loading and unloading of the wafers, one of the rollers (in FIGs. 1 and 2, the capture roller 104) has a handle 104b fixed to a pivot arm 201 so that the roller 104 can be swung sidewise manually. In FIG. 2, the closed position is indicated by solid lines for pivot arm 201 and driven gear 104a, the opened position by dashed lines.
Illustrating the second embodiment of the invention, generally designated 300, as well as the process for electroless plating, FIGs. 3 and 4 show schematically the cross section through a plating tank filled by the liquid plating solution 302 up to the surface 302a of the solution. The plating tank has an outer wall 301a and an inner wall 301b, separated by a gap 303, which enables the reflow of the liquid. In FIGs. 3 and 4, arrows indicate the flow of the liquid solution.
As can be seen, the solution enters the tank from the bottom (arrows 310), moves in laminar flow at constant speed upward (for example, at a speed of 20 cm/min) through the tank, and exits from the tank surface (arrows 311) by overflowing into the reflow gap 303. After reaching the tank bottom, the flow cycle begins anew.
Further shown in FIGs. 3 and 4 is the apparatus/fixture for holding a plurality of wafers, explained in FIGs. 1 and 2. In FIG. 3, the fixture is illustrated in side view 320 as in FIG. 1; in FIG. 4, the fixture is illustrated in end view 420 as in FIG. 2. As can be seen from FIG. 3, the fixture is loaded with a batch of wafers 321, contained at their side edges, while their active surfaces are exposed to the plating solution (the passive surfaces are covered by a protective resist).
On its laminar flow from the bottom to the surface of the tank, the plating solution flows substantially parallel to the active surfaces of the wafers contained in the fixture. In order to control the electroless plating process and achieve uniform metal layer deposition, it is an essential feature of the present invention that the direction and speed of the laminarly moving solution are superposed by another relative motion. This additional relative motion is generated by the rotation at constant speed of the wafers held in the fixture (the fixture causes the wafers to move synchronously with each other). With this additional motion, a periodic superposition of directions and speeds is achieved between the motion of the wafers and the motion of the solution, resulting in periodically changing wafer portions where the directions and speeds are additive and where the directions and speeds are opposed and subtractive.
This periodic relative motion in changing directions between the plating solution and the rotating wafers is crucial for creating uniformly plated layers on exposed metallizations of the active wafer surfaces by controlled electroless deposition.
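The additive and subtractive portions can be quantified with a short physics sketch. The numbers below come from the description (20 cm/min flow, 0.5 to 5 rpm rotation, 200 mm wafers), but the specific combination and the coordinate conventions are illustrative assumptions, not a disclosed calculation.

```python
# Minimal physics sketch of the superposition relied on above: a point at
# radius r on a wafer rotating at angular speed omega moves at omega * r,
# while the solution rises past the wafer at constant speed v_flow. The
# relative speed is largest where the two motions oppose (additive) and
# smallest where they align (subtractive).

import math

def relative_speed(v_flow: float, omega: float, r: float, theta: float) -> float:
    """Speed of the solution relative to a wafer point at angle theta
    (radians), with the flow along +y and counter-clockwise rotation."""
    vx = omega * r * math.sin(theta)           # from wafer rotation only
    vy = v_flow - omega * r * math.cos(theta)  # flow minus tangential part
    return math.hypot(vx, vy)

v_flow = 20.0 / 60.0            # 20 cm/min laminar flow, in cm/s
omega = 2.0 * 2 * math.pi / 60  # 2 rpm rotation, in rad/s
r = 10.0                        # 200 mm wafer -> 10 cm rim radius
for deg in (0, 90, 180, 270):
    print(deg, round(relative_speed(v_flow, omega, r, math.radians(deg)), 2))
# subtractive near 0 degrees, additive near 180 degrees
```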
The preferred electroless process flow used for plating uniform metal layers as caps onto exposed copper metallizations, such as bond pads of ICs positioned on the active surface of semiconductor wafers, has the following steps. The example is chosen for fabricating a cap consisting of two metal layers.
Step 1: Coating the passive surface of the IC wafers with resist using a spin-on technique. This coat will prevent accidental metal deposition on the passive surface of the wafers.
Step 2: Baking the resist, typically at 110 °C for a time period of about 30 to 60 minutes.
Step 3: Cleaning the exposed bond pad copper surface using a plasma ashing process for about 2 minutes.
Step 4: Loading the wafers into the apparatus/fixture described above for controlled electroless plating.
Step 5: Cleaning by immersing the wafers, having the exposed copper of the bond pads, in a solution of sulfuric acid, nitric acid, or another suitable acid, for about 50 to 60 seconds.
Step 6: Rinsing in an overflow rinser for about 100 to 180 seconds.
Step 7: Immersing the wafers in a catalytic metal chloride solution, such as palladium chloride, for about 40 to 80 seconds. This step "activates" the copper surface, i.e., a layer of seed metal (such as palladium) is deposited onto the clean non-oxidized copper surface.
Step 8: Rinsing in a dump rinser for about 100 to 180 seconds.
Step 9: Initiating laminar motion at constant speed of the first electroless plating solution in the plating tank. If nickel is to be plated, the solution consists of an aqueous solution of a nickel salt, such as nickel chloride, sodium hypophosphite, buffers, complexors, accelerators, stabilizers, moderators, and wetting agents.
Step 10: Immersing the wafers into the electroless plating solution. The solution, flowing in laminar motion at constant speed, flows substantially parallel to the active surface of the wafers.
Step 11: Initiating rotation of the wafers at constant speed and synchronously with each other, initiating the superposition of directions and speeds of the wafer motion and the solution motion.
Step 12: Plating the layer electrolessly. If a nickel layer is to be plated, plating for between 150 and 180 seconds will deposit a nickel layer about 0.4 to 0.6 µm thick.
Step 13: Stopping rotation of the wafers.
Step 14: Removing the wafers from the plating solution.
Step 15: Rinsing in a dump rinser for about 100 to 180 seconds.
Step 16: Repeating Steps 9 through 15 for the second electroless plating solution, varying the composition of the solution and the plating time according to the metal to be plated.
Step 17: Repeating Steps 9 through 15 for the third electroless plating solution, varying the composition of the solution and the plating time according to the metal to be plated.
Step 18: Stripping the wafer protection resist from the passive surface of the wafers for about 8 to 12 minutes.
Step 19: Spin rinsing and drying for about 6 to 8 minutes.
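As a rough sanity check of cycle time, the step sequence above can be encoded as data. The following sketch uses midpoints of the duration ranges stated in the steps; the step names are abbreviations and the tally excludes the repeats of Steps 9 through 15 for further layers, so it is illustrative only.

```python
# Hedged encoding of the plating sequence above for a single-layer pass.
# Durations are midpoints of the ranges given in Steps 1-19.

PROCESS = [                          # (step, seconds)
    ("bake resist", 45 * 60),        # Step 2: about 30 to 60 minutes
    ("plasma ash clean", 2 * 60),    # Step 3: about 2 minutes
    ("acid clean", 55),              # Step 5: about 50 to 60 seconds
    ("overflow rinse", 140),         # Step 6: about 100 to 180 seconds
    ("Pd activation", 60),           # Step 7: about 40 to 80 seconds
    ("dump rinse", 140),             # Step 8: about 100 to 180 seconds
    ("plate Ni", 165),               # Step 12: 150 to 180 s -> ~0.5 um Ni
    ("dump rinse", 140),             # Step 15: about 100 to 180 seconds
    ("strip resist", 10 * 60),       # Step 18: about 8 to 12 minutes
    ("spin rinse and dry", 7 * 60),  # Step 19: about 6 to 8 minutes
]

total = sum(seconds for _, seconds in PROCESS)
print(f"single-layer cycle time: {total / 60:.1f} min")  # about 76 min
```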
While this invention has been described in reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. As an example, the invention can be applied to IC bond pad metallizations other than copper, which are difficult or impossible to bond by conventional ball or wedge bonding techniques, such as alloys of refractory metals and noble metals. As another example, the invention applies to immersion plating and autocatalytic plating. A sequence of these plating techniques is particularly useful for electroless plating of gold layers. As another example, the invention provides for easy control of the uniformity of plated layers by modifying individually the flow speed of the plating solution or the rotation speed of the wafers, even in the course of one plating deposition.

A lead frame and method of making the same are provided. The lead frame includes a die mounting portion, first and second pairs of tie bars, and first and second tie bar bridges extending between respective second extension portions of each tie bar pair. First and second pairs of tie bars are mechanically coupled to respective first and second ends of the die mounting portion. Each of the tie bars includes a first extension portion, a second extension portion, a tie bar span mechanically coupled to the first end of the die mounting portion via the first extension portion, a tie bar flap formed along a longitudinal reinforcement crease, and a lateral reinforcement portion extending from said first extension portion to said die mounting portion. The tie bar flap and the tie bar span lie in intersecting planes and are connected along the longitudinal reinforcement crease between the first extension portion and the second extension portion. The lateral reinforcement portion extends in a direction perpendicular to a direction of said longitudinal reinforcement crease. A first tie bar bridge extends between respective second extension portions of a first tie bar of the first pair of tie bars and a second tie bar of the first pair of tie bars. A second tie bar bridge extends between respective second extension portions of a first tie bar of the second pair of tie bars and a second tie bar of the second pair of tie bars.
What is claimed is:
1. A lead frame comprising: a die mounting portion defining a substantially planar mounting surface portion; and at least one tie bar mechanically coupled to said die mounting portion, wherein said at least one tie bar includes a longitudinal reinforcement crease defined along at least a portion of said tie bar and a tie bar flap formed along said reinforcement crease.
2. A lead frame as claimed in claim 1 wherein said lead frame further comprises electrically conductive leads mechanically coupled to said tie bar.
3. A lead frame as claimed in claim 1 wherein said lead frame further comprises electrically conductive leads.
4. A lead frame as claimed in claim 1 wherein said tie bar further comprises a tie bar span defined by said longitudinal reinforcement crease.
5. A lead frame as claimed in claim 1 wherein said tie bar flap is substantially planar.
6. A lead frame as claimed in claim 1 wherein said tie bar further comprises a tie bar span and wherein said tie bar flap and said tie bar span are offset by an angle of less than 90°.
7. A lead frame as claimed in claim 1 wherein said tie bar further comprises a tie bar span and wherein said tie bar flap and said tie bar span are offset by an angle selected so as to enable closely packed lead frame stacking.
8. A lead frame as claimed in claim 1 wherein said die mounting portion and said tie bar form a one-piece, integrally constructed, lead frame.
9. A lead frame comprising: a die mounting portion defining a substantially planar mounting surface portion; and at least one tie bar including a tie bar span mechanically coupled to said die mounting portion and a substantially planar tie bar flap connected to said tie bar span, wherein said tie bar span and said tie bar flap lie in intersecting planes.
10. A lead frame as claimed in claim 9 wherein said at least one tie bar includes an extension portion and wherein said tie bar span is mechanically coupled to said die mounting portion via said extension portion.
11. A lead frame as claimed in claim 10 wherein said tie bar flap and said tie bar span are connected along a longitudinal reinforcement crease formed by a bend in said lead frame.
12. A lead frame as claimed in claim 11 wherein said bend is characterized by a bend angle of less than 90°.
13. A lead frame comprising: a die mounting portion; and at least one tie bar mechanically coupled to said die mounting portion, wherein said at least one tie bar includes a longitudinal reinforcement crease defined exclusively along a portion of said tie bar.
14. A lead frame as claimed in claim 13 wherein said tie bar includes a tie bar span and a tie bar flap and wherein said reinforcement crease is formed by bending said tie bar flap relative to said tie bar span.
15. A lead frame comprising: a die mounting portion defining a substantially planar mounting surface portion; and at least one tie bar mechanically coupled to said die mounting portion, wherein said at least one tie bar includes a longitudinal reinforcement crease defined along at least a portion of said tie bar, a tie bar flap formed along said reinforcement crease, and a lateral reinforcement portion extending from said tie bar to said die mounting portion, wherein said lateral reinforcement portion extends in a direction perpendicular to a direction of said longitudinal reinforcement crease.
16. A lead frame as claimed in claim 15 wherein said lateral reinforcement portion comprises a chamfered span.
A lead frame as claimed in claim 15 wherein said die mounting portion and said tie bar form a one-piece, integrally constructed, lead frame. 18. A mounted die arrangement comprising: a lead frame including a lead frame body including a plurality of electrically conductive leads, a die mounting portion defining a substantially planar mounting surface portion, and at least one tie bar extending from said lead frame body to said die mounting portion, wherein said at least one tie bar includes a longitudinal reinforcement crease defined along at least a portion of said tie bar and a tie bar flap formed along said reinforcement crease; and an integrated circuit die mounted on said die mounting portion, said integrated circuit die including electrical connections conductively coupled to said electrically conductive leads. 19. A mounted die arrangement as claimed in claim 18 wherein said integrated circuit die is characterized by physical characteristics indicative of formation from a wafer including a plurality of similar integrated circuit dies. 20. An encapsulated integrated circuit comprising: a plurality of electrically conductive leads; a die mounting portion defining a substantially planar mounting surface portion; an integrated circuit die mounted on said die mounting portion, said integrated circuit die including electrical connections conductively coupled to said electrically conductive leads; at least one tie bar mechanically coupled to said die mounting portion, wherein said at least one tie bar includes a longitudinal reinforcement crease defined along at least a portion of said tie bar and a tie bar flap formed along said reinforcement crease; and an encapsulating material surrounding said at least one tie bar, said integrated circuit die, and portions of said electrically conductive leads. 21. An encapsulated integrated circuit as claimed in claim 20 wherein said encapsulating material forms a solid state encapsulated integrated circuit. 22. An encapsulated integrated circuit as claimed in claim 20 wherein said encapsulating material physically binds said integrated circuit die. 23. 
A lead frame comprising: a die mounting portion defining a substantially planar mounting surface portion and having a first end and a second end opposite said first end; a first pair of tie bars mechanically coupled to said first end of said die mounting portion, and a second pair of tie bars mechanically coupled to said second end of said die mounting portion, wherein each of said tie bars includes a first extension portion, a second extension portion, a tie bar span mechanically coupled to said first end of said die mounting portion via said first extension portion, a tie bar flap formed along a longitudinal reinforcement crease, wherein said tie bar flap and said tie bar span lie in intersecting planes and are connected along said longitudinal reinforcement crease between said first extension portion and said second extension portion, a lateral reinforcement portion extending from said first extension portion to said die mounting portion, wherein said lateral reinforcement portion extends in a direction perpendicular to a direction of said longitudinal reinforcement crease; a first tie bar bridge extending between respective second extension portions of a first tie bar of said first pair of tie bars and a second tie bar of said first pair of tie bars; and a second tie bar bridge extending between respective second extension portions of a first tie bar of said second pair of tie bars and a second tie bar of said second pair of tie bars.
BACKGROUND OF THE INVENTION The present invention relates to fabrication technology used in the assembly of integrated circuit packages and, more particularly, to the design of a lead frame for an encapsulated integrated circuit. According to conventional integrated circuit manufacture, the lead frame, and, in particular, the tie bars of the lead frame, often bows or becomes distorted during the die attachment and encapsulation process. The result is an improper spatial relationship of the die attach pad relative to the integrated circuit package. Such displacement causes mechanical and electrical failure within the integrated circuit package and results in loss of system integrity and quality. Accordingly, a need exists for a lead frame design that effectively reduces bowing and distortion of the lead frame during integrated circuit packaging operations. BRIEF SUMMARY OF THE INVENTION This need is met by the present invention wherein a lead frame is provided and includes an angle iron tie bar with lateral reinforcement portions. In accordance with one embodiment of the present invention, a lead frame is provided comprising a die mounting portion and at least one tie bar mechanically coupled to the die mounting portion. The tie bar includes a longitudinal reinforcement crease defined along at least a portion of the tie bar and a tie bar flap formed along the reinforcement crease. The lead frame may further comprise electrically conductive leads mechanically coupled to the tie bar. The tie bar may further comprise a tie bar span defined by the longitudinal reinforcement crease. The tie bar flap may be substantially planar. The tie bar flap and the tie bar span are preferably offset by an angle selected so as to enable closely packed lead frame stacking, e.g., less than 90 DEG . The die mounting portion and the tie bar preferably form a one-piece, integrally constructed, lead frame. In accordance with another embodiment of the present invention, a lead frame is provided comprising a die paddle and an angle iron tie bar mechanically coupled to the die paddle. The angle iron tie bar is preferably characterized by a bend angle of less than 90 DEG . The angle iron tie bar preferably includes an extension portion, a tie bar span defined by a longitudinal reinforcement crease, and a tie bar flap formed along the reinforcement crease. The tie bar span may be mechanically coupled to the die paddle via the extension portion. In accordance with yet another embodiment of the present invention, a lead frame is provided comprising a die mounting portion and at least one tie bar. The tie bar includes a tie bar span mechanically coupled to the die mounting portion and a substantially planar tie bar flap connected to the tie bar span. The tie bar span and the tie bar flap may lie in intersecting planes. The angle iron tie bar may include an extension portion and the tie bar span may be mechanically coupled to the die paddle via the extension portion. The tie bar flap and the tie bar span may be connected along a longitudinal reinforcement crease formed by a bend in the lead frame. In accordance with yet another embodiment of the present invention, a lead frame is provided comprising a die mounting portion and at least one tie bar mechanically coupled to the die mounting portion. The tie bar includes a longitudinal reinforcement crease defined along at least a portion of the tie bar. 
The tie bar includes a tie bar span and a tie bar flap and the reinforcement crease is formed by bending the tie bar flap relative to the tie bar span. In accordance with yet another embodiment of the present invention, a method of forming a lead frame is provided comprising the steps of: providing a die mounting portion and at least one tie bar, wherein the tie bar includes a longitudinal reinforcement crease defined along at least a portion of the tie bar, a tie bar flap formed along the reinforcement crease, and a lateral reinforcement portion extending from the tie bar to the die mounting portion. The lateral reinforcement portion extends in a direction perpendicular to a direction of the longitudinal reinforcement crease and may comprise a chamfered span. In accordance with yet another embodiment of the present invention, a method of forming a lead frame is provided comprising the steps of: providing a die mounting portion and at least one tie bar, wherein the tie bar includes a tie bar span mechanically coupled to the die mounting portion; and bending a portion of the tie bar span along a longitudinal reinforcement crease defined along at least a portion of the tie bar so as to form a tie bar flap connected to the tie bar span along the reinforcement crease. In accordance with yet another embodiment of the present invention, a method of forming a lead frame is provided comprising the steps of: providing a die mounting portion and at least one tie bar, wherein the tie bar is mechanically coupled to the die mounting portion; and forming a longitudinal reinforcement crease along at least a portion of the tie bar. In accordance with yet another embodiment of the present invention, a method of forming a lead frame is provided comprising the steps of: providing a die mounting portion and at least one tie bar, wherein the tie bar includes a longitudinal tie bar span mechanically coupled to the die mounting portion; and connecting a tie bar flap to the tie bar span. In accordance with yet another embodiment of the present invention, a mounted die arrangement is provided comprising a lead frame and an integrated circuit die. The lead frame includes a lead frame body including a plurality of electrically conductive leads, a die mounting portion, and at least one tie bar extending from the lead frame body to the die mounting portion, wherein the at least one tie bar includes a longitudinal reinforcement crease defined along at least a portion of the tie bar and a tie bar flap formed along the reinforcement crease. The integrated circuit die is mounted on the die mounting portion and includes electrical connections conductively coupled to the electrically conductive leads. The integrated circuit die may be characterized by physical characteristics indicative of formation from a wafer including a plurality of similar integrated circuit dies. In accordance with yet another embodiment of the present invention, an encapsulated integrated circuit is provided comprising: a plurality of electrically conductive leads; a die mounting portion; an integrated circuit die, at least one tie bar, and an encapsulating material. The integrated circuit die is mounted on the die mounting portion and includes electrical connections conductively coupled to the electrically conductive leads. The tie bar is mechanically coupled to the die mounting portion and includes a longitudinal reinforcement crease defined along at least a portion of the tie bar and a tie bar flap formed along the reinforcement crease. 
The encapsulating material surrounds the tie bar, the integrated circuit die, and portions of the electrically conductive leads to form a solid state encapsulated integrated circuit. In accordance with yet another embodiment of the present invention, a method of encapsulating an integrated circuit is provided comprising the steps of: providing a plurality of electrically conductive leads, a die mounting portion, and at least one tie bar mechanically coupled to the die mounting portion; mounting an integrated circuit die on the die mounting portion, the integrated circuit die including electrical connections; conductively coupling the electrically conductive leads to the electrical connections; reinforcing the tie bar by forming a longitudinal reinforcement crease along at least a portion of the tie bar; and encapsulating the integrated circuit die, at least a portion of the tie bar, and portions of the electrically conductive leads. The tie bar may include a tie bar flap and a tie bar span and the longitudinal reinforcement crease may be formed by bending the tie bar flap relative to the tie bar span along the longitudinal reinforcement crease. In accordance with yet another embodiment of the present invention, a lead frame is provided comprising a die mounting portion, first and second pairs of tie bars, and first and second tie bar bridges extending between respective second extension portions of the tie bar pairs. Each of the tie bars includes a first extension portion, a second extension portion, a tie bar span mechanically coupled to the first end of the die mounting portion via the first extension portion, and a tie bar flap formed along a longitudinal reinforcement crease, wherein the tie bar flap and the tie bar span lie in intersecting planes and are connected along the longitudinal reinforcement crease between the first extension portion and the second extension portion. A lateral reinforcement portion extends from the first extension portion to the die mounting portion in a direction perpendicular to a direction of the longitudinal reinforcement crease. Accordingly, it is an object of the present invention to provide a lead frame resistant to bowing and distortion during integrated circuit encapsulation. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS FIG. 1 is an isometric illustration of a lead frame including angle iron tie bar according to the present invention; FIG. 2 is a cross sectional view of a portion of an angle iron tie bar taken along line 2--2 in FIG. 1; FIG. 3 is an isometric illustration of an angle iron tie bar provided with lateral reinforcement portions according to the present invention; FIG. 4 is an isometric view, partially broken away, of an encapsulated integrated circuit incorporating an angle iron tie bar according to the present invention; and FIG. 5 is a plan view of a lead frame including an angle iron tie bar according to the present invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Referring to FIGS. 1 and 2, where like structure is represented by like reference numerals, a lead frame 10 according to the present invention is illustrated in detail. The lead frame 10 comprises a die mounting portion or die paddle 12 having a first end 13A and a second end 13B opposite the first end 13A. As is clearly illustrated in FIGS. 1 and 5, the die mounting portion 12 defines a substantially planar mounting surface portion. 
Further, the die mounting portion 12, as a whole, defines a substantially continuous surface with a substantially uniform cross section. The lead frame 10 also comprises first and second pairs of angle iron tie bars 16 mechanically coupled to the die mounting portion 12. Specifically, a first pair of tie bars 16 is mechanically coupled to the first end 13A of the die mounting portion 12, and a second pair of tie bars 16 is mechanically coupled to the second end 13B of the die mounting portion 12. An integrated circuit die 14 is mounted on the die mounting portion 12 to define a mounted die arrangement. Each tie bar 16 includes a first extension portion 22, a second extension portion 23, and a tie bar span 17 which is mechanically coupled to respective first and second ends 13A, 13B of the die mounting portion 12 via the first extension portion 22. The tie bar span 17 is defined by a longitudinal reinforcement crease 18. The longitudinal reinforcement crease 18 is defined along at least a portion of the tie bar 16. The tie bar 16 further includes a substantially planar tie bar flap 20 formed along the reinforcement crease 18 and connected to the tie bar span 17 between the first extension portion 22 and the second extension portion 23. The tie bar span 17 and the tie bar flap 20 lie in intersecting planes. Preferably, the die mounting portion 12 and the tie bars 16 form a one-piece, integrally constructed, lead frame 10. A first tie bar bridge 32 extends between respective second extension portions 23 of a first pair of tie bars 16. Similarly, a second tie bar bridge 34 extends between respective second extension portions of the second pair of tie bars 16. Preferably, the tie bar flap 20 and the corresponding tie bar span 17 and longitudinal reinforcement crease 18 are formed by a bend in the lead frame 10. Specifically, the reinforcement crease 18 is formed by bending the tie bar flap 20 relative to the tie bar span 17. It is contemplated by the present invention, however, that the tie bar flap 20 and the tie bar span 17 may be provided in a manner other than bending. For example, the tie bar flap 20 may be welded to the tie bar span 17. Further, it is contemplated by the present invention that the tie bar flap 20 may be connected to the tie bar span 17 at a location other than the edge of the tie bar span 17. Each tie bar flap 20 and corresponding tie bar span 17 are preferably offset by a bend angle .theta. of less than 90 DEG to enable closely packed stacking of a plurality of lead frames 10, see FIG. 2. Specifically, closely packed stacking is achievable at bend angles .theta. less than 90 DEG because the base edges 21 of the tie bar flaps 20 fit within the tie bar separation space 19 when a lead frame 10 is stacked upon another lead frame 10. At bend angles close to or above 90 DEG , the base edges 21 of the tie bar flaps 20 abut the tie bar span 17 of the other lead frame 10 and closely packed spacing is not achievable without thick and expensive lead frame separation material. Referring to FIGS. 4 and 5, a lead frame 100 comprises a lead frame body 110 including a plurality of electrically conductive leads 26, the die mounting portion 12, and the tie bars 16. The integrated circuit die 14 includes electrical connections 15 conductively coupled to the leads 26. Each tie bar 16 extends from the lead frame body 110 to the die mounting portion 12. As is noted above with reference to FIGS. 
1 and 2, each tie bar 16 includes the longitudinal reinforcement crease 18 defined along at least a portion of the tie bar 16 and a tie bar flap 20 formed along the reinforcement crease 18. The leads 26 are mechanically coupled to the tie bar 16 via the lead frame body 110. Portions of the lead frame body 110 are removed, as indicated by the hatched line 25, after encapsulation of the lead frame 100 and the die 14. An encapsulated integrated circuit 28 is illustrated in FIG. 4 and comprises an encapsulating material 29 surrounding the tie bars 16, the integrated circuit die 14, and portions of the electrically conductive leads 26. The encapsulating material 29 physically binds the integrated circuit die 14 and forms a solid state encapsulated integrated circuit 28. For the purposes of describing and defining the present invention, it should be appreciated that the integrated circuit die 14 typically comprises a patterned-substrate integrated circuit cut from a semiconductor wafer including a plurality of similar integrated circuits. However, it should also be appreciated that an integrated circuit die, as utilized herein, is not limited to integrated circuits formed from a wafer of dies. Rather, the integrated circuit die 14 comprises an integrated circuit formed on a substrate. It is contemplated by the present invention that a "lead frame," as referred to in the present description and claims, does not necessarily incorporate electrically conductive leads. Rather, the lead frame according to the present invention may merely serve to support a die or a die paddle mechanically. Referring now to FIG. 3, an alternative embodiment of the present invention, including a lead frame 10' having angle iron tie bars provided with lateral reinforcement portions 30, is illustrated. The lead frame 10' comprises the die mounting portion or die paddle 12 and respective pairs of tie bars 16' mechanically coupled to opposite ends of the die mounting portion 12. Each tie bar 16' includes the longitudinal reinforcement crease 18 and the tie bar flap 20. In addition, lateral reinforcement portions 30 extend from each tie bar 16' to the die mounting portion 12 in a direction perpendicular to a direction of the longitudinal reinforcement crease 18. The lateral reinforcement portion 30 comprises a chamfered span, i.e., a lead frame portion bounded by a diagonal projection from the tie bar 16 to the die mounting portion 12. The lateral reinforcement portion 30 is operative to further reduce bowing and distortion of the tie bar 16 during encapsulation. Having described the invention in detail and by reference to preferred embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims.
Embodiments include computing devices, systems, and methods for task signaling on a computing device. Execution of a task by an initial thread on a critical path of execution may be interrupted to create at least one parallel task by the initial thread that can be executed in parallel with the task executed by the initial thread. An initial signal indicating the creation of the at least one parallel task may be sent by the initial thread to a relay thread. Execution of the task by the initial thread may resume before an acquisition of the at least one parallel task.
CLAIMS What is claimed is: 1. A method of task signaling on a computing device, comprising: interrupting execution of a task by an initial thread on a critical path of execution; creating at least one parallel task by the initial thread that can be executed in parallel with the task executed by the initial thread; sending an initial signal indicating creation of the at least one parallel task to a relay thread by the initial thread; and resuming execution of the task by the initial thread before an acquisition of the at least one parallel task. 2. The method of claim 1, further comprising: receiving the initial signal by the relay thread; and changing the relay thread to an active state in response to receiving the initial signal when the relay thread is in a wait state. 3. The method of claim 2, wherein the initial signal is a direct initial signal, and wherein receiving the initial signal by the relay thread comprises receiving the direct initial signal via a connection with the initial thread. 4. The method of claim 2, wherein the initial signal is an indirect initial signal, the method further comprising: modifying data at a location of a memory device indicating the creation of the at least one parallel task, wherein receiving the initial signal by the relay thread comprises retrieving the modified data from the location of the memory device. 5. The method of claim 2, further comprising sending a relay signal indicating the creation of the at least one parallel task to at least one work thread. 6. The method of claim 5, further comprising: receiving the relay signal by the at least one work thread; changing the at least one work thread to an active state in response to receiving the relay signal when the at least one work thread is in a wait state; acquiring the at least one parallel task by the at least one work thread; and executing the at least one parallel task by the at least one work thread in parallel with the execution of the task by the initial thread. 7. The method of claim 2, further comprising: determining whether another parallel task remains by the relay thread; acquiring the another parallel task by the relay thread; and executing the another parallel task by the relay thread in parallel with the execution of the task by the initial thread. 8. The method of claim 1, further comprising: determining whether a state change threshold for the relay thread is surpassed; and changing a state of the relay thread from an active state to a wait state or from a level of the wait state to a lower level of the wait state in response to determining that the state change threshold for the relay thread is surpassed. 9. 
A computing device, comprising: a plurality of processor cores communicatively connected to each other, wherein the plurality of processor cores includes a first processor core configured to execute an initial thread, a second processor core configured to execute a relay thread, and a third processor core configured to execute a work thread, and wherein the first processor core is configured with processor-executable instructions to perform operations comprising: interrupting execution of a task by the initial thread on a critical path of execution; creating at least one parallel task by the initial thread that can be executed in parallel with the task executed by the initial thread; sending an initial signal indicating creation of the at least one parallel task to the relay thread by the initial thread; and resuming execution of the task by the initial thread before an acquisition of the at least one parallel task. 10. The computing device of claim 9, wherein the second processor core is configured with processor-executable instructions to perform operations comprising: receiving the initial signal by the relay thread; and changing the relay thread to an active state in response to receiving the initial signal when the relay thread is in a wait state. 11. The computing device of claim 10, wherein the initial signal is a direct initial signal, and wherein the second processor core is configured with processor-executable instructions to perform operations such that receiving the initial signal by the relay thread comprises receiving the direct initial signal via a connection with the first processor core executing the initial thread. 12. The computing device of claim 10, further comprising a memory device communicatively connected to the first processor core and the second processor core, wherein the initial signal is an indirect initial signal, wherein the first processor core is configured with processor-executable instructions to perform operations further comprising modifying data at a location of the memory device indicating the creation of the at least one parallel task, and wherein the second processor core is configured with processor-executable instructions to perform operations such that receiving the initial signal by the relay thread comprises retrieving the modified data from the location of the memory device. 13. The computing device of claim 10, wherein the second processor core is configured with processor-executable instructions to perform operations further comprising: sending a relay signal indicating the creation of the at least one parallel task to at least one work thread. 14. The computing device of claim 13, wherein the third processor core is configured with processor-executable instructions to perform operations comprising: receiving the relay signal by the work thread; changing the work thread to an active state in response to receiving the relay signal when the work thread is in a wait state; acquiring the at least one parallel task by the work thread; and executing the at least one parallel task by the work thread in parallel with the execution of the task by the initial thread. 15. 
The computing device of claim 10, wherein the second processor core is configured with processor-executable instructions to perform operations further comprising: determining whether another parallel task remains by the relay thread; acquiring the another parallel task by the relay thread; and executing the another parallel task by the relay thread in parallel with the execution of the task by the initial thread. 16. The computing device of claim 9, wherein the second processor core is configured with processor-executable instructions to perform operations further comprising: determining whether a state change threshold for the relay thread is surpassed; and changing a state of the relay thread from an active state to a wait state or from a level of the wait state to a lower level of the wait state in response to determining that the state change threshold for the relay thread is surpassed. 17. A computing device, comprising: means for interrupting execution of a task by an initial thread on a critical path of execution; means for creating at least one parallel task by the initial thread that can be executed in parallel with the task executed by the initial thread; means for sending an initial signal indicating creation of the at least one parallel task to a relay thread by the initial thread; and means for resuming execution of the task by the initial thread before an acquisition of the at least one parallel task. 18. The computing device of claim 17, further comprising: means for receiving the initial signal by the relay thread; and means for changing the relay thread to an active state in response to receiving the initial signal when the relay thread is in a wait state. 19. The computing device of claim 18, wherein the initial signal is a direct initial signal, and wherein means for receiving the initial signal by the relay thread comprises means for receiving the direct initial signal via a connection with the initial thread. 20. The computing device of claim 18, wherein the initial signal is an indirect initial signal, and wherein the computing device further comprises means for modifying data at a location of a memory device indicating the creation of the at least one parallel task, wherein means for receiving the initial signal by the relay thread comprises means for retrieving the modified data from the location of the memory device. 21. The computing device of claim 18, further comprising: means for sending a relay signal indicating the creation of the at least one parallel task to at least one work thread; means for receiving the relay signal by the at least one work thread; means for changing the at least one work thread to an active state in response to receiving the relay signal when the at least one work thread is in a wait state; means for acquiring the at least one parallel task by the at least one work thread; and means for executing the at least one parallel task by the at least one work thread in parallel with execution of the task by the initial thread. 22. The computing device of claim 18, further comprising: means for determining whether another parallel task remains by the relay thread; means for acquiring the another parallel task by the relay thread; and means for executing the another parallel task by the relay thread in parallel with execution of the task by the initial thread. 23. 
The computing device of claim 17, further comprising: means for determining whether a state change threshold for the relay thread is surpassed; and means for changing a state of the relay thread from an active state to a wait state or from a level of the wait state to a lower level of the wait state in response to determining that the state change threshold for the relay thread is surpassed. 24. A non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations comprising: interrupting execution of a task by an initial thread on a critical path of execution; creating at least one parallel task by the initial thread that can be executed in parallel with the task executed by the initial thread; sending an initial signal indicating creation of the at least one parallel task to a relay thread by the initial thread; and resuming execution of the task by the initial thread before an acquisition of the at least one parallel task. 25. The non-transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising: receiving the initial signal by the relay thread; and changing the relay thread to an active state in response to receiving the initial signal when the relay thread is in a wait state. 26. The non-transitory processor-readable storage medium of claim 25, wherein the initial signal is a direct initial signal, and wherein the stored processor-executable instructions are configured to cause the processor to perform operations such that receiving the initial signal by the relay thread comprises receiving the direct initial signal via a connection with the initial thread. 27. The non-transitory processor-readable storage medium of claim 25, wherein the initial signal is an indirect initial signal, and wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising: modifying data at a location of a memory device indicating the creation of the at least one parallel task, wherein receiving the initial signal by the relay thread comprises retrieving the modified data from the location of the memory device. 28. The non-transitory processor-readable storage medium of claim 25, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising: sending a relay signal indicating the creation of the at least one parallel task to at least one work thread; receiving the relay signal by the at least one work thread; changing the at least one work thread to an active state in response to receiving the relay signal when the at least one work thread is in a wait state; acquiring the at least one parallel task by the at least one work thread; and executing the at least one parallel task by the at least one work thread in parallel with the execution of the task by the initial thread. 29. The non-transitory processor-readable storage medium of claim 25, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising: determining whether another parallel task remains by the relay thread; acquiring the another parallel task by the relay thread; and executing the another parallel task by the relay thread in parallel with the execution of the task by the initial thread. 30. 
The non-transitory processor-readable storage medium of claim 24, wherein the stored processor-executable instructions are configured to cause the processor to perform operations further comprising: determining whether a state change threshold for the relay thread is surpassed; and changing a state of the relay thread from an active state to a wait state or from a level of the wait state to a lower level of the wait state in response to determining that the state change threshold for the relay thread is surpassed.
TITLE Task Signaling Off A Critical Path Of Execution BACKGROUND [0001] Building applications that are responsive, high-performance, and power-efficient is crucial to delivering a satisfactory user experience. To increase performance and power efficiency, parallel sections of a program can be executed by one or more threads running on one or more computing cores, on a central processing unit (CPU), graphics processing unit (GPU), or other parallel hardware. Typically, one thread, called the "main thread", enters the parallel section, creates helper tasks, and notifies the other threads to help in the execution of the parallel section. [0002] While task creation is typically inexpensive, notifying other threads can be relatively very expensive because it often involves operating system calls. For example, on a top-tier quad-core smartphone, the latency to signal threads waiting on a condition variable can be as high as 40 microseconds (approximately 90,000 CPU cycles). Each of the several parallel sections of code may take under 40 microseconds to execute, making such high signaling costs unacceptable for parallel execution. During signaling on the critical path of execution, execution of the parallel section does not begin on either the critical path of execution or another thread, initiated by the signaling, until the signaling is completed. Thus, rather than speeding up the original section of code on the critical path of execution, the parallelization slows down the execution on the critical path of execution by nearly a factor of two. Some of this latency can be recovered when the other thread executes a task in parallel with the task on the critical path of execution. SUMMARY [0003] The methods and apparatuses of various embodiments provide circuits and methods for task signaling on a computing device. 
Various embodiments may include interrupting execution of a task by an initial thread on a critical path of execution, creating at least one parallel task by the initial thread that can be executed in parallel with the task executed by the initial thread, sending an initial signal indicating the creation of the at least one parallel task to a relay thread by the initial thread, and resuming execution of the task by the initial thread before an acquisition of the at least one parallel task.[0004] Some embodiments may further include receiving the initial signal by the relay thread, and changing the relay thread to an active state in response to receiving the initial signal when the relay thread is in a wait state.[0005] In some embodiments, the initial signal may be a direct initial signal, and receiving the initial signal by the relay thread may include receiving the direct initial signal via a connection with the initial thread.[0006] In some embodiments, the initial signal may be an indirect initial signal, and the embodiments may further include modifying data at a location of a memory device indicating the creation of the at least one parallel task in which receiving the initial signal by the relay thread may include retrieving the modified data from the location of the memory device.[0007] Some embodiments may further include sending a relay signal indicating the creation of the at least one parallel task to at least one work thread.[0008] Some embodiments may further include receiving the relay signal by the at least one work thread, changing the at least one work thread to an active state in response to receiving the relay signal when the at least one work thread is in a wait state, acquiring the at least one parallel task by the at least one work thread, and executing the at least one parallel task by the at least one work thread in parallel with the execution of the task by the initial thread.[0009] Some embodiments may further include determining whether another parallel task remains by the relay thread, acquiring the another parallel task by the relay thread, and executing the another parallel task by the relay thread in parallel with the execution of the task by the initial thread.[0010] Some embodiments may further include determining whether a state change threshold for the relay thread is surpassed, and changing a state of the relay thread from an active state to a wait state or from a level of the wait state to a lower level of the wait state in response to determining that the state change threshold for the relay thread is surpassed.[0011] Various embodiments may include a computing device configured for task signaling. 
The computing device may include a plurality of processor cores communicatively connected to each other, in which the plurality of processor cores includes a first processor core configured to execute an initial thread, a second processor core configured to execute a relay thread, and a third processor core configured to execute a work thread, and in which the processor cores are configured to perform operations of one or more embodiment methods described above.[0012] Various embodiments may include a computing device configured for task signaling having means for performing functions of one or more of the aspect methods described above.[0013] Various embodiments may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform operations of the methods described above. BRIEF DESCRIPTION OF THE DRAWINGS [0014] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate example embodiments of various embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the claims. [0015] FIG. 1 is a component block diagram illustrating a computing device suitable for implementing an embodiment. [0016] FIG. 2 is a component block diagram illustrating an example multi-core processor suitable for implementing an embodiment. [0017] FIG. 3 is a process flow signaling diagram illustrating an example of task signaling off of a critical path of execution with direct initial signaling according to an embodiment. [0018] FIG. 4 is a process flow signaling diagram illustrating an example of task signaling off of a critical path of execution with indirect initial signaling according to an embodiment. [0019] FIG. 5 is a state diagram illustrating a state progression for a thread used to implement task signaling off of a critical path of execution according to an embodiment. [0020] FIG. 6 is a process flow diagram illustrating an embodiment method for initial signaling in task signaling off of a critical path of execution. [0021] FIG. 7 is a process flow diagram illustrating an embodiment method for relay signaling in task signaling off of a critical path of execution. [0022] FIG. 8 is a process flow diagram illustrating an embodiment method for task execution in task signaling off of a critical path of execution. [0023] FIG. 9 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments. [0024] FIG. 10 is a component block diagram illustrating an example mobile computing device suitable for use with the various embodiments. [0025] FIG. 11 is a component block diagram illustrating an example server suitable for use with the various embodiments. DETAILED DESCRIPTION [0026] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. 
References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the claims.[0027] The terms "computing device" and "mobile computing device" are used interchangeably herein to refer to any one or all of cellular telephones, smartphones, personal or mobile multi-media players, personal data assistants (PDAs), laptop computers, tablet computers, convertible laptops/tablets (2-in-1 computers), smartbooks, ultrabooks, netbooks, palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, mobile gaming consoles, wireless gaming controllers, and similar personal electronic devices that include a memory and a multi-core programmable processor. While the various embodiments are particularly useful for mobile computing devices, such as smartphones, which have limited memory and battery resources, the embodiments are generally useful in any electronic device that implements a plurality of memory devices and a limited power budget, in which reducing the power consumption of the processors can extend the battery-operating time of a mobile computing device. The term "computing device" may further refer to stationary computing devices including personal computers, desktop computers, all-in-one computers, workstations, super computers, mainframe computers, embedded computers, servers, home theater computers, and game consoles.[0028] Embodiments include methods, and systems and devices implementing such methods, for improving device performance by reducing the signaling burden on the critical path of execution and implementing the signaling by a dedicated thread. The embodiments include methods, systems, and devices for a hybrid signaling scheme including signaling, by an initial thread, of a parallel task for execution to a dedicated relay thread, and signaling by the relay thread of the parallel tasks for execution to one or multiple work threads.[0029] An outer process executing on a device may include parallel sections that may be executed in parallel with the outer process. The parallel sections may include any processes that may be executed in parallel with the outer process. For ease of explanation, the parallel sections are described herein using the nonlimiting example of nested loops. The nested loops may involve multiple executions of inner processes. To help execute the nested loops of the outer processes, an initial thread on a critical path of execution for an outer process creates parallel tasks to be executed by other work threads, such as threads executed on a different processor or processor core than the initial thread.[0030] The initial thread may signal a relay thread that a parallel task is created for execution by a work thread. Signaling a thread by an initial thread (e.g., direct initial signaling the thread) to notify the thread of creation of a parallel task for execution by the thread interrupts the execution of the task by the main thread in the critical path of execution.
In an embodiment, the relay thread may actively check or wait for a signal from the initial thread (e.g., direct initial signal to the relay thread or setting of a Boolean in memory monitored by the relay thread).[0031] The signal by the initial thread to the relay thread may include a signal to the relay thread configured to notify the relay thread of the creation of a parallel task for execution by a thread other than the initial thread, such as a work thread or the relay thread itself. The signal by the initial thread to the relay thread may include a direct initial signal to the relay thread, which may wake up the relay thread from an idle or inactive state. The signal by the initial thread to the relay thread may include setting a value at a location in memory, such as a Boolean flag in a register, indicating that the parallel task is created, in which case the relay thread may periodically check the location in memory for the value indicating that the parallel task is created. [0032] In either embodiment, it is only necessary for the initial thread to signal the relay thread to initiate parallel processing of the created parallel task(s). Thus, the initial thread does not have to signal other work threads, thereby reducing the amount of time and resources spent by the initial thread to signal that the parallel task is created. After sending signals to the relay thread, the initial thread returns to executing its task on the critical path of execution without assuming the overhead of having to send more signals to work threads, or in various embodiments, having to wait for the created parallel task to be acquired by a work thread or relay thread.[0033] In an embodiment, the initial thread may signal multiple relay threads by writing the value (e.g., a Boolean) to the location in memory that is accessible by the multiple relay threads, or by directly signaling the notification signal to each relay thread. The number of the multiple relay threads may be less than or equal to the number of the multiple work threads. The number of the relay threads and work threads, and an assignment of a set of work threads to a relay thread, may be configured based on performance and/or power requirements, and may be configurable for each process.[0034] In response to receiving the signal from the initial thread, the relay thread may signal a set of work threads that may be assigned to processors or processor cores in an active, idle, or inactive state while waiting for the signal by the relay thread. The signal by the relay thread may cause one or more of the work threads to wake up, retrieve the parallel task, and execute the parallel task. The signaling of the work thread may be implemented through operating system calls that can specify particular work or relay threads within a set of work or relay threads to receive the signal or broadcast the signal to the set of work threads. In an embodiment, a processor or processor core assigned to a specified signaled work thread may be pre-assigned to execute the parallel task. In an embodiment, broadcasting the signal may create a race condition among the processors or processor cores whose work threads compete to be assigned to execute the parallel task.
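As a concrete illustration of this hybrid scheme, the following is a minimal C++11 sketch, assuming a shared task queue guarded by a mutex and two condition variables; the names (create_and_signal, relay_loop, work_loop) and the queue itself are illustrative assumptions, not elements of the described embodiments. The initial thread pays for exactly one notify_one call on the critical path, while the relay thread absorbs the cost of waking the work threads:

```cpp
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

namespace {

std::mutex mtx;
std::condition_variable relay_cv;         // direct initial signal: initial -> relay
std::condition_variable work_cv;          // relay signal: relay -> work threads
std::deque<std::function<void()>> tasks;  // created parallel tasks
bool pending = false;                     // a direct initial signal is outstanding
bool stopping = false;

// Initial thread side: create a parallel task, send one cheap signal to the
// relay thread, and return to the critical path immediately.
void create_and_signal(std::function<void()> task) {
    {
        std::lock_guard<std::mutex> lk(mtx);
        tasks.push_back(std::move(task));
        pending = true;
    }
    relay_cv.notify_one();  // the only signaling cost on the critical path
}

// Relay thread: absorbs the work-thread wakeups off the critical path.
void relay_loop() {
    std::unique_lock<std::mutex> lk(mtx);
    for (;;) {
        relay_cv.wait(lk, [] { return stopping || pending; });
        if (stopping) return;
        pending = false;
        work_cv.notify_all();  // relay signal: wake the work threads
    }
}

// Work thread: wakes on the relay signal, acquires a parallel task, and
// executes it in parallel with the initial thread's task.
void work_loop() {
    std::unique_lock<std::mutex> lk(mtx);
    for (;;) {
        work_cv.wait(lk, [] { return stopping || !tasks.empty(); });
        if (tasks.empty()) return;  // only reached when stopping
        auto t = std::move(tasks.front());
        tasks.pop_front();
        lk.unlock();
        t();
        lk.lock();
    }
}

}  // namespace

int main() {
    std::thread relay(relay_loop);
    std::vector<std::thread> workers;
    for (int i = 0; i < 2; ++i) workers.emplace_back(work_loop);

    for (int i = 0; i < 4; ++i)  // the "initial thread" creating parallel tasks
        create_and_signal([i] { std::printf("parallel task %d\n", i); });

    {
        std::lock_guard<std::mutex> lk(mtx);
        stopping = true;
    }
    relay_cv.notify_all();
    work_cv.notify_all();
    relay.join();
    for (auto& w : workers) w.join();
}
```

In this sketch the direct initial signal is a condition-variable notification; an implementation could equally use the memory-location (Boolean flag) variant described above, which is sketched separately at the end of this description.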
[0035] FIG. 1 illustrates a system including a computing device 10 in communication with a remote computing device 50 suitable for use with the various embodiments. The computing device 10 may include a system-on-chip (SoC) 12 with a processor 14, a memory 16, a communication interface 18, and a storage memory interface 20. The computing device may further include a communication component 22 such as a wired or wireless modem, a storage memory 24, an antenna 26 for establishing a wireless connection 32 to a wireless network 30, and/or a network interface 28 for connecting to a wired connection 44 to the Internet 40. The processor 14 may include any of a variety of hardware cores, for example a number of processor cores.[0036] The term "system-on-chip" (SoC) is used herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including a hardware core, a memory, and a communication interface. A hardware core may include a variety of different types of processors, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), an auxiliary processor, a single-core processor, and a multi-core processor. A hardware core may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), other programmable logic device, discrete gate logic, transistor logic, performance monitoring hardware, watchdog hardware, and time references. Integrated circuits may be configured such that the components of the integrated circuit reside on a single piece of semiconductor material, such as silicon. The SoC 12 may include one or more processors 14. The computing device 10 may include more than one SoC 12, thereby increasing the number of processors 14 and processor cores. The computing device 10 may also include processors 14 that are not associated with an SoC 12. Individual processors 14 may be multi-core processors as described below with reference to FIG. 2. The processors 14 may each be configured for specific purposes that may be the same as or different from other processors 14 of the computing device 10. One or more of the processors 14 and processor cores of the same or different configurations may be grouped together. A group of processors 14 or processor cores may be referred to as a multi-processor cluster.[0037] The memory 16 of the SoC 12 may be a volatile or non-volatile memory configured for storing data and processor-executable code for access by the processor 14. The computing device 10 and/or SoC 12 may include one or more memories 16 configured for various purposes. In an embodiment, one or more memories 16 may include volatile memories such as random access memory (RAM) or main memory, or cache memory. These memories 16 may be configured to temporarily hold a limited amount of data received from a data sensor or subsystem, data and/or processor-executable code instructions that are requested from non-volatile memory, loaded to the memories 16 from non-volatile memory in anticipation of future access based on a variety of factors, and/or intermediary processing data and/or processor-executable code instructions produced by the processor 14 and temporarily stored for future quick access without being stored in non-volatile memory.[0038] The memory 16 may be configured to store data and processor-executable code, at least temporarily, that is loaded to the memory 16 from another memory device, such as another memory 16 or storage memory 24, for access by one or more of the processors 14. 
The data or processor-executable code loaded to the memory 16 may be loaded in response to execution of a function by the processor 14. Loading the data or processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to the memory 16 that is unsuccessful, or a miss, because the requested data or processor-executable code is not located in the memory 16. In response to a miss, a memory access request to another memory 16 or storage memory 24 may be made to load the requested data or processor-executable code from the other memory 16 or storage memory 24 to the memory device 16. Loading the data or processor-executable code to the memory 16 in response to execution of a function may result from a memory access request to another memory 16 or storage memory 24, and the data or processor-executable code may be loaded to the memory 16 for later access.[0039] The communication interface 18, communication component 22, antenna 26, and/or network interface 28 may work in unison to enable the computing device 10 to communicate over a wireless network 30 via a wireless connection 32, and/or a wired network 44 with the remote computing device 50. The wireless network 30 may be implemented using a variety of wireless communication technologies, including, for example, radio frequency spectrum used for wireless communications, to provide the computing device 10 with a connection to the Internet 40 by which it may exchange data with the remote computing device 50.[0040] The storage memory interface 20 and the storage memory 24 may work in unison to allow the computing device 10 to store data and processor-executable code on a non-volatile storage medium. The storage memory 24 may be configured much like an embodiment of the memory 16 in which the storage memory 24 may store the data or processor-executable code for access by one or more of the processors 14. The storage memory 24, being non-volatile, may retain the information even after the power of the computing device 10 has been shut off. When the power is turned back on and the computing device 10 reboots, the information stored on the storage memory 24 may be available to the computing device 10. The storage memory interface 20 may control access to the storage memory 24 and allow the processor 14 to read data from and write data to the storage memory 24.[0041] Some or all of the components of the computing device 10 may be differently arranged and/or combined while still serving the necessary functions. Moreover, the computing device 10 may not be limited to one of each of the components, and multiple instances of each component may be included in various configurations of the computing device 10.[0042] FIG. 2 illustrates a multi-core processor 14 suitable for implementing an embodiment. The multi-core processor 14 may have a plurality of homogeneous or heterogeneous processor cores 200, 201, 202, 203. The processor cores 200, 201, 202, 203 may be homogeneous in that the processor cores 200, 201, 202, 203 of a single processor 14 may be configured for the same purpose and have the same or similar performance characteristics. For example, the processor 14 may be a general purpose processor, and the processor cores 200, 201, 202, 203 may be homogeneous general purpose processor cores. 
Alternatively, the processor 14 may be a graphics processing unit or a digital signal processor, and the processor cores 200, 201, 202, 203 may be homogeneous graphics processor cores or digital signal processor cores, respectively. For ease of reference, the terms "processor" and "processor core" may be used interchangeably herein.[0043] The processor cores 200, 201, 202, 203 may be heterogeneous in that the processor cores 200, 201, 202, 203 of a single processor 14 may be configured for different purposes and/or have different performance characteristics. The heterogeneity of such heterogeneous processor cores may include different instruction set architectures, pipelines, operating frequencies, etc. An example of such heterogeneous processor cores may include what are known as "big.LITTLE" architectures in which slower, low-power processor cores may be coupled with more powerful and power-hungry processor cores. In similar embodiments, the SoC 12 may include a number of homogeneous or heterogeneous processors 14.[0044] In the example illustrated in FIG. 2, the multi-core processor 14 includes four processor cores 200, 201, 202, 203 (i.e., processor core 0, processor core 1, processor core 2, and processor core 3). For ease of explanation, the examples herein may refer to the four processor cores 200, 201, 202, 203 illustrated in FIG. 2. However, the four processor cores 200, 201, 202, 203 illustrated in FIG. 2 and described herein are merely provided as an example and in no way are meant to limit the various embodiments to a four-core processor system. The computing device 10, the SoC 12, or the multi-core processor 14 may individually or in combination include fewer or more than the four processor cores 200, 201, 202, 203 illustrated and described herein. [0045] FIG. 3 illustrates an example of task signaling off a critical path of execution with direct initial signaling according to an embodiment. In various embodiments, the initial thread 300, the relay threads 302a-302b, and the work threads 304a-304c may be implemented on different processor cores of a multi-core processor, multithreaded processor, or across various processors of various configurations.[0046] The initial thread 300 may execute a task 306a of a program on a critical path for execution of the program. The task may be for executing a process of the program. The process may include a parallel section, such as a loop, for which the process may become an outer process and iterations of the loop include at least one execution of an inner process. The inner process may be configured such that the inner process may be executed in parallel with the outer process and iterations of the inner process as well. For example, the initial thread 300 may execute the outer process while one or more other threads (e.g., work threads 304a-304c or relay threads 302a-302b) execute iterations of the inner process.[0047] Upon encountering the loop, the initial thread 300 may interrupt the execution of the task 306a to divide the iterations of the inner loop and create parallel tasks 308 for executing the inner loop. The number of iterations per parallel task and the number of parallel tasks created may be determined by various factors, including latency requirements for executing the parallel tasks, power or resource (e.g., processors, processor cores, memory, bandwidth, relay threads 302a-302b, and work threads 304a-304c) requirements for executing the parallel tasks, resource availability for executing the parallel tasks, and programming, as sketched below.
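To make the division of inner-loop iterations into parallel tasks concrete, here is a minimal C++ sketch, assuming a simple ceiling-division chunking policy; make_parallel_tasks and the fixed chunk count are hypothetical names and choices, since the embodiments leave the split to the latency, power, resource, and programming factors listed above:

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <vector>

// Divide the iteration space [0, total) of an inner loop into roughly equal
// chunks; each chunk becomes one parallel task that a work thread can acquire.
std::vector<std::function<void()>> make_parallel_tasks(
        int total, int chunk_count, std::function<void(int)> body) {
    std::vector<std::function<void()>> out;
    int per_task = (total + chunk_count - 1) / chunk_count;  // ceiling division
    for (int begin = 0; begin < total; begin += per_task) {
        int end = std::min(begin + per_task, total);
        out.push_back([begin, end, body] {
            for (int i = begin; i < end; ++i) body(i);  // one inner-process run per iteration
        });
    }
    return out;
}

int main() {
    // Ten iterations split across three tasks (4 + 4 + 2 iterations).
    auto tasks = make_parallel_tasks(
        10, 3, [](int i) { std::printf("iteration %d\n", i); });
    for (auto& t : tasks) t();  // run serially here; normally each task would be
                                // handed off via the relay-thread signaling above
}
```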
The parallel tasks may be configured with the same number or different numbers of iterations of the inner process. The number of parallel tasks created may be equal to, greater than, or less than the number of work threads 304a-304c and/or relay threads 302a-302b.

[0048] The initial thread 300 may send a direct initial signal 310a, 310b to at least one of the relay threads 302a-302b to wake up and/or notify the relay threads 302a-302b of the creation of a parallel task, and the initial thread 300 may return to executing the task 306b. In various embodiments, the relay threads 302a-302b may each be in one of various states, including active or wait (e.g., idle and inactive).

[0049] The state of a relay thread 302a-302b may be determined by various factors, including latency, power, or resource requirements for executing the parallel tasks, power or resource availability, and programming. For example, because latency of execution of the parallel tasks may be affected by the time it takes to move a relay thread 302a-302b from an inactive or idle state to an active state, lower latency requirements may benefit from having more relay threads 302a-302b active; more relay threads 302a-302b idle may be sufficient for middle latency requirements; and more relay threads 302a-302b inactive may be sufficient for higher latency requirements. In other examples, because of the power required to keep a relay thread 302a-302b in an active or idle state, lower power requirements may benefit from having more relay threads 302a-302b inactive; more relay threads 302a-302b idle may be sufficient for middle power requirements; and more relay threads 302a-302b active may be sufficient for higher power requirements. Depending on the state of the relay thread 302a-302b receiving the direct initial signal, the response by the relay thread 302a-302b may vary.

[0050] In the example illustrated in FIG. 3, the relay thread 302a may be in an active state and the relay thread 302b may be in an idle state or an inactive state. In response to receiving the direct initial signal from the initial thread 300, the relay thread 302a may send a relay signal 312a-312c (e.g., 312a) to at least one of the work threads 304a-304c (e.g., 304a).

[0051] In response to receiving the direct initial signal from the initial thread 300, the relay thread 302b may wake up 314 and send a relay signal 312b, 312c to at least one of the work threads 304a-304c. The relay threads 302a-302b may send the relay signal to the work threads 304a-304c to wake up and/or notify the work threads 304a-304c of the creation of a parallel task. Similar to the relay threads 302a-302b, in various embodiments, the work threads 304a-304c may each be in one of various states, including active, idle, and inactive.

[0052] The state of a work thread 304a-304c may be determined by various factors, including latency, power, or resource requirements for executing the parallel tasks, power or resource availability, and programming. For example, because latency of execution of the parallel tasks may be affected by the time it takes to move a work thread 304a-304c from an inactive or idle state to an active state, lower latency requirements may benefit from having more work threads 304a-304c active; more work threads 304a-304c idle may be sufficient for middle latency requirements; and more work threads 304a-304c inactive may be sufficient for higher latency requirements.
In other examples, because of the power required to keep a work thread 304a-304c in an active or idle state, lower power requirements may benefit from having more work threads 304a-304c inactive; more work threads 304a-304c idle may be sufficient for middle power requirements; and more work threads 304a-304c active may be sufficient for higher power requirements. Depending on the state of the work thread 304a-304c receiving the relay signal, the response by the work thread 304a-304c may vary.

[0053] In the example illustrated in FIG. 3, the work thread 304a may be in an active state, the work thread 304b may be in an idle state or an inactive state, and the work thread 304c may be in an active state. In response to receiving the relay signal from the relay thread 302a, the work thread 304a may acquire a parallel task and execute the parallel task 316a. In response to receiving the relay signal from the relay thread 302b, the work thread 304b may wake up 318, and acquire and execute a parallel task 316b. In response to receiving the relay signal from the relay thread 302b, the work thread 304c may acquire a parallel task and execute the parallel task 316c. While acquiring and executing a parallel task, the work threads 304a-304c may enter into a work state. In various embodiments, some of the relay threads 302a-302b may also attempt to acquire and execute parallel tasks, entering into the work state, while at least one relay thread 302a-302b remains in the active, idle, or inactive state so that it may be available to receive additional direct initial signals from the initial thread 300.

[0054] In various embodiments, the relay threads 302a-302b and work threads 304a-304c may be in the work state, in which the relay threads 302a-302b and work threads 304a-304c may be executing a task and may not receive or accept a direct initial signal from the initial thread 300 or a relay signal from the relay threads 302a-302b. Upon completing the task executed during the work state, the relay threads 302a-302b and work threads 304a-304c may attempt to acquire a parallel task, or may change to an active, idle, or inactive state. The state to which the relay threads 302a-302b and work threads 304a-304c change from the work state may be determined by various factors, including latency, power, or resource requirements for executing the parallel tasks, power or resource availability, programming, and availability of parallel tasks.

[0055] FIG. 4 illustrates an example of task signaling off a critical path of execution with indirect initial signaling according to an embodiment. The descriptions with reference to the example illustrated in FIG. 3 also apply to the example illustrated in FIG. 4 with the following differences. Rather than direct initial signaling between the initial thread 300 and the relay threads 302a-302b, the initial thread 300 may indirectly signal the relay threads 302a-302b by initiating a change in value of a signal representative ("rep. signal") 400. The signal representative 400 may include any hardware device to which the relay threads 302a-302b have access and in which they can detect a change. In a non-limiting example, the signal representative 400 may include a Boolean flag written to a location in a memory device (e.g., a cache memory, a random access memory, a register, or other solid-state volatile memory) that may indicate the creation of or lack of parallel tasks.
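One way to model the signal representative 400 in software is as an atomic Boolean flag, as in the minimal sketch below. The use of `std::atomic`, the global name, and the chosen memory orders are assumptions for illustration; the description treats the representative more generally as any accessible hardware location.

```cpp
#include <atomic>

// Software model of the signal representative 400, assumed here to be a
// Boolean flag in shared memory: true indicates the creation of parallel
// tasks, false indicates the lack of parallel tasks.
std::atomic<bool> gTasksCreated{false};

// Initial thread: the modification signal is modeled as a single store.
void sendModificationSignal() {
    gTasksCreated.store(true, std::memory_order_release);
}

// Relay thread: one check of the signal representative; returns true and
// clears the flag if parallel tasks were created since the last check.
bool checkSignalRepresentative() {
    return gTasksCreated.exchange(false, std::memory_order_acq_rel);
}
```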
The initial thread 300 may send a modification signal 402 to the signal representative 400 to indicate the creation of parallel tasks.

[0056] Regardless of the state of the relay threads 302a-302b, the relay threads may periodically check 404a-404e the signal representative 400 for an indication of the creation of parallel tasks. For example, a relay thread 302a in an active state may repeatedly check 404a-404c the signal representative 400 for an indication of the creation of parallel tasks until it detects the indication of the creation of parallel tasks. In response to detecting an indication of the creation of parallel tasks, the relay thread 302a may proceed in the manner described with reference to the example illustrated in FIG. 3 after the relay thread 302a received the direct initial signal from the initial thread 300. A relay thread 302b that is in an idle state or an inactive state may repeatedly wake up 406a, 406b and check 404d, 404e the signal representative 400 for an indication of the creation of parallel tasks until it detects the indication of the creation of parallel tasks. In various embodiments, a relay thread 302b in an idle state may wake up 406a, 406b and check 404d, 404e more frequently than a relay thread that is in an inactive state. In response to detecting the indication of the creation of parallel tasks, the relay thread 302b may proceed in the manner described with reference to the example illustrated in FIG. 3 after the relay thread 302b received the direct initial signal from the initial thread 300 and woke up.

[0057] FIG. 5 illustrates a state progression 500 for a thread used to implement task signaling off a critical path of execution according to an embodiment. In various embodiments, the state progression 500 may apply for any thread, including the initial thread, the relay threads, and the work threads. In determination block 502, a thread may determine whether it has acquired a parallel task to execute.

[0058] In response to determining that it has acquired a task to execute (i.e., determination block 502 = "Yes"), the thread may enter a work state 504 to execute the task and return to determine whether another task is acquired in determination block 502 in response to completing the task. In various embodiments, an initial thread executing its acquired task during the work state 504 may encounter a parallel section, such as a loop, and enter a spawn state 506 to create parallel tasks and to send either direct or indirect initial signals 508 to notify a relay thread of the creation of the parallel tasks. Upon completion of the creation of parallel tasks and signaling the relay threads, the initial thread may return to the work state 504.

[0059] In response to determining that it has not acquired tasks to execute (i.e., determination block 502 = "No"), the thread may determine whether to enter an active state 512 in determination block 510. Whether to enter an active state may be determined by various factors, including latency, power, or resource requirements for executing the parallel tasks, power or resource availability, programming, and availability of parallel tasks.

[0060] In response to determining to enter the active state 512 (i.e., determination block 510 = "Yes"), the thread may enter into and remain in the active state 512 checking for a direct or indirect initial signal 508 indicating the creation of a parallel task.
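The checking behavior and the stepped wait states of FIG. 5 can be sketched as a polling loop that downgrades through progressively deeper wait levels each time a state change threshold is surpassed. The wait durations, the check-count threshold, and the reuse of the atomic flag from the earlier sketch are illustrative assumptions only.

```cpp
#include <atomic>
#include <chrono>
#include <cstddef>
#include <iterator>
#include <thread>

extern std::atomic<bool> gTasksCreated;  // the signal representative above

// Assumed wait levels for the progression of FIG. 5: a larger index is a
// lower wait state, i.e. a higher latency for returning to an active or
// work state.
constexpr std::chrono::milliseconds kWaitLevels[] = {
    std::chrono::milliseconds(0),    // active state: busy checking
    std::chrono::milliseconds(1),    // highest wait state (idle)
    std::chrono::milliseconds(10),   // lower wait state
    std::chrono::milliseconds(100),  // lowest wait state (inactive)
};
constexpr int kChecksPerLevel = 1000;  // assumed state change threshold

// Check for an initial signal, stepping down one wait level each time the
// state change threshold is surpassed without a signal being detected.
void awaitInitialSignal() {
    for (std::size_t level = 0;;) {
        for (int i = 0; i < kChecksPerLevel; ++i) {
            if (gTasksCreated.exchange(false)) return;  // signal received
            std::this_thread::sleep_for(kWaitLevels[level]);
        }
        if (level + 1 < std::size(kWaitLevels)) ++level;  // downgrade state
    }
}
```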
In response to receiving a direct or indirect initial signal 508, the thread may enter into a signal state 514 and send a relay signal 516a-516c to other threads in wait states 518a-518c. The thread in the signal state 514 may determine whether it has acquired a parallel task to execute in determination block 502.

[0061] In response to determining not to enter the active state 512 (i.e., determination block 510 = "No"), the thread may enter into and remain in a wait state 518a-518c (e.g., varying levels of idle and inactive) until either receiving a relay signal 516a-516c or surpassing a state change threshold triggering a change to a lower wait state 518a-518c.

[0062] In response to receiving a relay signal 516a-516c, the threads in the wait state 518a-518c may each determine whether it has acquired a parallel task to execute in determination block 502. In various embodiments, the state change threshold triggering a change to a lower wait state 518a-518c may include various threshold values corresponding with latency, power, or resource requirements for executing the parallel tasks, power or resource availability, programming, availability of parallel tasks, and time. The thread entering a wait state may enter directly into any one of the wait states 518a-518c based on these factors, for example. In an embodiment, the thread may enter into wait state 518a, the highest wait state (i.e., having the lowest latency for switching to an active or work state from among the wait states 518a-518c).

[0063] In response to surpassing the state change threshold, the thread may enter a next lower wait state 518b (i.e., having a higher latency for switching to an active or work state compared to the highest wait state 518a). This step down in wait state level may continue each time another state change threshold is surpassed until reaching a lowest wait state 518c (i.e., having the highest latency for switching to an active or work state from among the wait states 518a-518c).

[0064] FIG. 6 illustrates an embodiment method 600 for initial signaling in task signaling off a critical path of execution. The method 600 may be executed in a computing device using software, and/or general purpose or dedicated hardware, such as the processor.

[0065] In block 602, the computing device may execute a task of a process of a program using an initial thread. In block 604, the computing device may encounter a parallel section, such as a loop, during the execution of the task. The process may include a loop, for which the process becomes an outer process and iterations of the loop include at least one execution of an inner process. The inner process may be configured so that it may be executed in parallel with the outer process, and so that its iterations may be executed in parallel with one another. For example, the initial thread may execute the outer process while one or more other threads (e.g., work threads or relay threads) execute iterations of the inner process.

[0066] In block 606, the computing device may interrupt the execution of the task. In block 608, the computing device may create parallel tasks for executing the inner loop. The parallel tasks may be configured so that together all of the parallel tasks include all of the iterations of the inner loop; however, it is not necessary for each parallel task to be configured to execute the same number of iterations of the inner loop.

[0067] In block 610, the computing device may send an initial signal.
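Where direct signaling is used, block 610 can be modeled in software with a condition variable that wakes a waiting relay thread. This is a sketch under that assumption: the `DirectSignal` type is a hypothetical stand-in for the dedicated signaling lines or on-chip networks described next, not the hardware mechanism itself.

```cpp
#include <condition_variable>
#include <mutex>

// Hypothetical direct-signal channel between an initial thread and its
// associated relay threads.
struct DirectSignal {
    std::mutex m;
    std::condition_variable cv;
    bool pending = false;

    // Initial thread (block 610): notify the relay threads that parallel
    // tasks have been created, then return to the interrupted task.
    void send() {
        {
            std::lock_guard<std::mutex> lock(m);
            pending = true;
        }
        cv.notify_all();
    }

    // Relay thread: block in a wait state until an initial signal arrives.
    void receive() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return pending; });
        pending = false;
    }
};
```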
In various embodiments, depending on the configuration of the computing device, the initial signal may be either a direct initial signal or an indirect initial signal. A direct initial signal may be sent by the computing device between processors or processor cores of the computing device via communication networks within the computing device, such as shared or common busses or a network on chip, or via dedicated direct signaling lines or wires connecting the processors or processor cores. The direct initial signal may be sent from the processor or processor core executing the initial thread to a processor or processor core executing a relay thread. In various embodiments, a group of all relay threads in active or wait states may be available to receive a direct initial signal from any initial thread. In various embodiments, sets of relay threads, smaller than a set of all relay threads of the computing device, may be designated to receive direct initial signals from a particular initial thread. In various embodiments, the initial thread may be configured to broadcast the direct initial signal, which may be ignored by relay threads that are not associated with the initial thread, or to send the direct initial signal to associated relay threads.

[0068] As discussed herein, the relay threads may be in various execution states, including active and wait states (e.g., idle and inactive). The direct initial signal may be configured to trigger a relay thread in a wait state to wake up and to indicate to the relay thread that parallel tasks have been created for execution by threads other than the initial thread.

[0069] An indirect initial signal may be sent by the computing device between the processor or processor core of the computing device executing the initial thread and a memory device (e.g., a cache memory, a random access memory, a register, or other solid-state volatile memory) via communication networks within the computing device, such as shared or common busses or a network on chip. The indirect initial signal may be configured to modify the memory device such that it may indicate the creation of or lack of parallel tasks. For example, a designated location in the memory device may be accessible by at least one relay thread to determine whether there are any parallel tasks created. The indirect initial signal may trigger a change in the data stored at the memory location such that the change may indicate to the relay thread that a parallel task has been created. Various examples may include a Boolean flag written to the memory location, the Boolean flag having two values representing either a lack of parallel tasks or the creation of parallel tasks. In other examples, the value of the Boolean flag may not specifically represent either the lack or the creation of parallel tasks; rather, a change in the Boolean flag value may indicate the creation of parallel tasks. In various embodiments, the relay thread may need to be at least temporarily in an active state to check whether the indirect initial signal was sent to the memory device.

[0070] In block 612, the computing device may resume executing the interrupted task.

[0071] FIG. 7 illustrates an embodiment method 700 for relay signaling in task signaling off a critical path of execution. The method 700 may be executed in a computing device using software, and/or general purpose or dedicated hardware, such as the processor.

[0072] In block 702, the computing device may wait for a direct or indirect initial signal from the initial thread.
Regardless of the state of a relay thread, there may be at least a momentary wait between checks for a direct or indirect initial signal, even if only during the time the relay thread waits for an issued check for a direct or indirect initial signal to return.

[0073] In optional block 704, the computing device may check for a direct or indirect initial signal. In various embodiments, the relay thread may check for a direct or indirect initial signal from an active state. The relay thread may be persistently in an active state, or may transition between a wait state (e.g., idle or inactive) and an active state to check for a direct or indirect initial signal. In various embodiments, a relay thread may skip checking for a direct or indirect initial signal when in a wait state, as checking may consume resources. As discussed herein, the relay thread may check a designated location of a memory device to detect an indirect initial signal.

[0074] Concurrently with various blocks of the method 700 (e.g., concurrent with one or more of block 702 and optional block 704), in optional determination block 706, the computing device may determine whether a state change threshold is surpassed. In various embodiments, the state change threshold may include various threshold values corresponding with latency, power, or resource requirements for executing the parallel tasks, power or resource availability, programming, availability of parallel tasks, and time.

[0075] In response to determining that a state change threshold is not surpassed (i.e., optional determination block 706 = "No"), the computing device may return to waiting for a direct or indirect initial signal from the initial thread in block 702.

[0076] In response to determining that a state change threshold is surpassed (i.e., optional determination block 706 = "Yes"), the computing device may change the state of the relay thread in optional block 722. In various embodiments, the relay thread may be downgraded from an active state to one of a number of levels of wait states (e.g., idle or inactive), downgraded from one level of wait state to a lower level of wait state, upgraded from one level of wait state to a higher level of wait state, or upgraded from a level of wait state to an active state.

[0077] In block 702, the computing device may wait for a direct or indirect initial signal from the initial thread. In block 708, the computing device may receive a direct or indirect initial signal from the initial thread. In various embodiments, receiving a direct initial signal may include receiving the signal from the initial thread via communication networks within the computing device.

[0078] In various embodiments, receiving an indirect initial signal may include retrieving data from the designated location of the memory device indicating the creation of parallel tasks during the check for the indirect initial signal in optional block 704. In various embodiments, the relay thread may receive a direct or indirect initial signal from an initial thread with which it is not associated and may ignore the direct or indirect initial signal.

[0079] In optional block 710, the computing device may wake up the relay thread from a wait state. Optional block 710 may not be implemented for a relay thread in an active state.

[0080] In block 712, the computing device may send a relay signal to at least one work thread. As with the various embodiments of the initial thread discussed with reference to FIG.
6, a group of all work threads in active or wait states may be available to receive a relay signal from any relay thread. In various embodiments, sets of work threads, smaller than a set of all work threads of the computing device, may be designated to receive relay signals from a particular relay thread.

[0081] In various embodiments, the relay thread may be configured to broadcast the relay signal, which may be ignored by work threads that are not associated with the relay thread, or to send the relay signal to associated work threads. The work threads may be in various execution states, including active and wait states (e.g., idle and inactive). The relay signal may be configured to trigger a work thread in a wait state to wake up and to indicate to the work thread that parallel tasks have been created for execution by threads other than the initial thread.

[0082] The signaling overhead for signaling the work threads is absorbed by the relay thread rather than the initial thread, so that the initial thread may return to executing its task without having to wait for the acquisition of the parallel tasks it created.

[0083] In optional block 714, the computing device may wait for the at least one work thread to acquire a parallel task. This waiting overhead is absorbed by the relay thread rather than the initial thread, so that the initial thread may return to executing its task without having to wait for the acquisition of the parallel tasks it created.

[0084] In optional determination block 716, the computing device may determine whether any parallel tasks remain. In response to determining that parallel tasks remain (i.e., optional determination block 716 = "Yes"), the computing device may acquire a remaining parallel task for the relay thread in optional block 718. In various embodiments, the relay thread may aid in executing the parallel tasks. However, there may be limits on how many relay threads may execute parallel tasks, so that there are sufficient relay threads available to handle relay signaling for any subsequent parallel tasks. In various embodiments, work threads that finish executing a parallel task before a relay thread finishes executing a parallel task may be reassigned as relay threads to make up for a dearth of relay threads. Similarly, an excess of relay threads may result in a relay thread being reassigned as a work thread.

[0085] In optional block 720, the computing device may use the relay thread to execute the acquired parallel task. Upon completing the execution of the acquired parallel task, the computing device may return to determining whether parallel tasks remain in optional determination block 716. In response to determining that no parallel tasks remain (i.e., optional determination block 716 = "No"), the computing device may return to waiting for a direct or indirect initial signal from the initial thread in block 702.

[0086] FIG. 8 illustrates an embodiment method 800 for task execution in task signaling off a critical path of execution. The method 800 may be executed in a computing device using software, and/or general purpose or dedicated hardware, such as the processor.

[0087] In block 802, the computing device may wait for a relay signal from the relay thread.
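Pausing the walk-through of method 800 briefly: the relay-thread behavior of method 700 just described can be collected into a single loop. This is a minimal sketch assuming the hypothetical `DirectSignal` and `ParallelTask` types from the earlier sketches; the global names, the helper `acquireTask`, and the mapping of code lines to block numbers are illustrative only.

```cpp
#include <cstddef>
#include <deque>
#include <mutex>

DirectSignal gInitialSignal;          // initial thread -> relay threads
DirectSignal gRelaySignal;            // relay threads -> work threads
std::deque<ParallelTask> gTaskQueue;  // tasks created in block 608
std::mutex gQueueMutex;

// Pop one remaining parallel task, if any (blocks 716/718 and 812).
bool acquireTask(ParallelTask& out) {
    std::lock_guard<std::mutex> lock(gQueueMutex);
    if (gTaskQueue.empty()) return false;
    out = gTaskQueue.front();
    gTaskQueue.pop_front();
    return true;
}

// Relay thread: wait for the initial signal (blocks 702/708), relay it to
// the work threads (block 712), then optionally help execute remaining
// tasks (blocks 716-720). The work-thread loop of method 800, described
// next, is analogous but waits on gRelaySignal instead and always
// attempts to acquire a task.
void relayThreadLoop() {
    for (;;) {
        gInitialSignal.receive();  // wake-up cost kept off the critical path
        gRelaySignal.send();       // signaling overhead absorbed by the relay
        ParallelTask task;
        while (acquireTask(task)) {
            for (std::size_t i = task.begin; i < task.end; ++i) task.body(i);
        }
    }
}
```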
Regardless of the state of a work thread, there may be at least a momentary wait between checks for a relay signal, even if only during the time the work thread waits for an issued check for a relay signal to return.

[0088] In optional block 804, the computing device may check for a relay signal. In various embodiments, the work thread may check for a relay signal from an active state. The work thread may be persistently in an active state, or may transition between a wait state (e.g., idle or inactive) and an active state to check for a relay signal. In various embodiments, a work thread may skip checking for a relay signal when in a wait state, as checking may consume resources.

[0089] Concurrently with various blocks of the method 800 (e.g., concurrent with one or more of block 802 and optional block 804), the computing device may determine whether a state change threshold is surpassed in optional determination block 806. In various embodiments, the state change threshold may include various threshold values corresponding with latency, power, or resource requirements for executing the parallel tasks, power or resource availability, programming, availability of parallel tasks, and time.

[0090] In response to determining that a state change threshold is not surpassed (i.e., optional determination block 806 = "No"), the computing device may return to waiting for a relay signal from the relay thread in block 802.

[0091] In response to determining that a state change threshold is surpassed (i.e., optional determination block 806 = "Yes"), the computing device may change the state of the work thread in optional block 818. In various embodiments, the work thread may be downgraded from an active state to one of a number of levels of wait states (e.g., idle or inactive), downgraded from one level of wait state to a lower level of wait state, upgraded from one level of wait state to a higher level of wait state, or upgraded from a level of wait state to an active state.

[0092] In block 802, the computing device may wait for a relay signal from the relay thread. In block 808, the computing device may receive a relay signal from the relay thread. In various embodiments, receiving a relay signal may include receiving the signal from the relay thread via communication networks within the computing device. In various embodiments, the work thread may receive a relay signal from a relay thread with which it is not associated and may ignore the relay signal.

[0093] In optional block 810, the computing device may wake up the work thread from a wait state. Optional block 810 may not be implemented for a work thread in an active state.

[0094] In block 812, the computing device may acquire a parallel task for the work thread. In block 814, the computing device may use the work thread to execute the acquired parallel task.

[0095] Upon completing the execution of the acquired parallel task, in determination block 816, the computing device may determine whether parallel tasks remain. In response to determining that parallel tasks remain (i.e., determination block 816 = "Yes"), the computing device may acquire a remaining parallel task for the work thread in block 812. In response to determining that no parallel tasks remain (i.e., determination block 816 = "No"), the computing device may return to waiting for a relay signal from the relay thread in block 802.

[0096] The various embodiments (including, but not limited to, embodiments discussed above with reference to FIGs.
1-8) may be implemented in a wide variety of computing systems, such as the example mobile computing device illustrated in FIG. 9, which is suitable for use with the various embodiments. The mobile computing device 900 may include a processor 902 coupled to a touchscreen controller 904 and an internal memory 906. The processor 902 may be one or more multi-core integrated circuits designated for general or specific processing tasks. The internal memory 906 may be volatile or non-volatile memory, and may also be secure and/or encrypted memory, or unsecure and/or unencrypted memory, or any combination thereof. Examples of memory types that can be leveraged include but are not limited to DDR, LPDDR, GDDR, WIDEIO, RAM, SRAM, DRAM, P-RAM, R-RAM, M-RAM, STT-RAM, and embedded dynamic random access memory. The touchscreen controller 904 and the processor 902 may also be coupled to a touchscreen panel 912, such as a resistive-sensing touchscreen, capacitive-sensing touchscreen, infrared sensing touchscreen, etc. Additionally, the display of the computing device 900 need not have touch screen capability.

[0097] The mobile computing device 900 may have one or more radio signal transceivers 908 (e.g., Peanut, Bluetooth, ZigBee, Wi-Fi, RF radio) and antennae 910, for sending and receiving communications, coupled to each other and/or to the processor 902. The transceivers 908 and antennae 910 may be used with the above-mentioned circuitry to implement the various wireless transmission protocol stacks and interfaces. The mobile computing device 900 may include a cellular network wireless modem chip 916 that enables communication via a cellular network and is coupled to the processor.

[0098] The mobile computing device 900 may include a peripheral device connection interface 918 coupled to the processor 902. The peripheral device connection interface 918 may be singularly configured to accept one type of connection, or may be configured to accept various types of physical and communication connections, common or proprietary, such as USB, FireWire, Thunderbolt, or PCIe. The peripheral device connection interface 918 may also be coupled to a similarly configured peripheral device connection port (not shown).

[0099] The mobile computing device 900 may also include speakers 914 for providing audio outputs. The mobile computing device 900 may also include a housing 920, constructed of plastic, metal, or a combination of materials, for containing all or some of the components discussed herein. The mobile computing device 900 may include a power source 922 coupled to the processor 902, such as a disposable or rechargeable battery. The rechargeable battery may also be coupled to the peripheral device connection port to receive a charging current from a source external to the mobile computing device 900. The mobile computing device 900 may also include a physical button 924 for receiving user inputs. The mobile computing device 900 may also include a power button 926 for turning the mobile computing device 900 on and off.

[0100] The various embodiments (including, but not limited to, embodiments discussed above with reference to FIGs. 1-8) may be implemented in a wide variety of computing systems, which may include a variety of mobile computing devices, such as a laptop computer 1000 illustrated in FIG. 10.
Many laptop computers include a touchpad touch surface 1017 that serves as the computer's pointing device, and thus may receive drag, scroll, and flick gestures similar to those implemented on computing devices equipped with a touch screen display and described above. A laptop computer 1000 will typically include a processor 1011 coupled to volatile memory 1012 and a large capacity nonvolatile memory, such as a disk drive 1013 or Flash memory. Additionally, the computer 1000 may have one or more antennae 1008 for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 1016 coupled to the processor 1011. The computer 1000 may also include a floppy disc drive 1014 and a compact disc (CD) drive 1015 coupled to the processor 1011. In a notebook configuration, the computer housing includes the touchpad 1017, the keyboard 1018, and the display 1019 all coupled to the processor 1011. Other configurations of the computing device may include a computer mouse or trackball coupled to the processor (e.g., via a universal serial bus (USB) input) as are well known, which may also be used in conjunction with the various embodiments.

[0101] The various embodiments (including, but not limited to, embodiments discussed above with reference to FIGs. 1-8) may be implemented in a wide variety of computing systems, which may include any of a variety of commercially available servers. An example server 1100 is illustrated in FIG. 11. Such a server 1100 typically includes one or more multi-core processor assemblies 1101 coupled to volatile memory 1102 and a large capacity nonvolatile memory, such as a disk drive 1104. As illustrated in FIG. 11, multi-core processor assemblies 1101 may be added to the server 1100 by inserting them into the racks of the assembly. The server 1100 may also include a floppy disc drive, compact disc (CD) or DVD disc drive 1106 coupled to the processor 1101. The server 1100 may also include network access ports 1103 coupled to the multi-core processor assemblies 1101 for establishing network interface connections with a network 1105, such as a local area network coupled to other broadcast system computers and servers, the Internet, the public switched telephone network, and/or a cellular data network (e.g., CDMA, TDMA, GSM, PCS, 3G, 4G, LTE, or any other type of cellular data network).

[0102] Computer program code or "program code" for execution on a programmable processor for carrying out operations of the various embodiments may be written in a high-level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a Structured Query Language (e.g., Transact-SQL), Perl, or in various other programming languages. Program code or programs stored on a computer readable storage medium as used in this application may refer to machine language code (such as object code) whose format is understandable by a processor.

[0103] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the operations of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the operations in the foregoing embodiments may be performed in any order. Words such as "thereafter," "then," "next," etc.
are not intended to limit the order of the operations; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the" is not to be construed as limiting the element to the singular.

[0104] The various illustrative logical blocks, modules, circuits, and algorithm operations described in connection with the various embodiments may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and operations have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the claims.

[0105] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.

[0106] In one or more embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or a non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module that may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

[0107] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the claims. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
Methods and apparatus are provided for operating an embedded processor system that includes a processor and a cache memory. The method includes filling one or more lines in the cache memory with data associated with a first task, executing the first task, and, in response to a cache miss during execution of the first task, performing a cache line fill operation and, during the cache line fill operation, executing a second task. The cache memory may notify the processor of the line fill operation by generating a processor interrupt or by notifying a task scheduler running on the processor.
CLAIMS

1. A method for operating an embedded processor system that includes a processor and a cache memory, comprising: filling one or more lines of the cache memory with data associated with a first task; executing the first task including accessing data in the cache memory; and in response to a cache miss during execution of the first task, performing a cache line fill operation and, during the cache line fill operation, executing a second task.

2. A method as defined in claim 1, wherein the data comprises one or more operands associated with the first task.

3. A method as defined in claim 1, wherein the data comprises one or more instructions associated with the first task.

4. A method as defined in claim 1, further comprising notifying the processor of the cache line fill operation.

5. A method as defined in claim 4, wherein the step of notifying the processor of the cache line fill operation comprises generating a processor interrupt.

6. A method as defined in claim 4, wherein the step of notifying the processor of the cache line fill operation comprises notifying a task scheduler running on the processor.

7. A method as defined in claim 1, wherein the step of executing a second task comprises executing a task of higher priority than the first task.

8. A method as defined in claim 1, wherein the first and second tasks are executed on a single processor.

9. A method as defined in claim 1, wherein the first and second tasks are executed on first and second processors, respectively.

10. A method as defined in claim 1, further comprising comparing an address associated with the cache line fill operation to a specified address range and notifying the processor of the cache line fill operation only if the result of the address range comparison meets a predetermined criterion.

11. A method as defined in claim 10, wherein the address range used in the address range comparison is programmable.

12. A method as defined in claim 1, further comprising accessing data associated with the second task in the cache memory.

13. A method as defined in claim 1, wherein the step of executing the second task comprises fetching instructions and operands for the second task.

14. A method as defined in claim 1, further comprising resuming the first task when the cache line fill operation has completed.

15. An embedded processor system comprising: a cache memory for storing data associated with a first task, said cache memory including a cache controller for detecting a cache miss, for performing a cache line fill operation in response to the cache miss, and for generating a cache miss notification; and a processor for executing the first task and, in response to a cache miss notification during execution of the first task, executing a second task during the cache line fill operation.

16. An embedded processor system as defined in claim 15, wherein the data comprises one or more operands associated with the first task.

17. An embedded processor system as defined in claim 15, wherein the data comprises one or more instructions associated with the first task.

18. An embedded processor system as defined in claim 15, wherein the cache miss notification comprises a processor interrupt.

19. An embedded processor system as defined in claim 15, wherein the cache miss notification comprises a notification to a task scheduler running on said processor.

20.
An embedded processor system as defined in claim 15, wherein the second task has higher priority than the first task.

21. An embedded processor system as defined in claim 15, wherein the first and second tasks are executed on a single processor.

22. An embedded processor system as defined in claim 15, wherein said processor comprises first and second processors and wherein said first and second tasks are executed on said first and second processors, respectively.

23. An embedded processor system as defined in claim 15, wherein said cache memory further comprises an address range comparison circuit for comparing a memory address associated with the cache miss to a specified address range and for enabling generation of the cache miss notification only when the result of the address range comparison meets a predetermined criterion.

24. An embedded processor system as defined in claim 23, wherein the specified address range is programmable.

25. An embedded processor system as defined in claim 15, wherein said cache memory is configured for storing data associated with the second task.

26. An embedded processor system as defined in claim 15, wherein said processor includes means for fetching instructions and operands for executing the second task.

27. An embedded processor system as defined in claim 15, wherein said processor further includes means for resuming execution of the first task when the cache line fill operation has completed.

28. An embedded processor system as defined in claim 15, wherein said cache memory includes two or more line fill buffers.

29. An embedded processor system as defined in claim 15, wherein said cache memory includes two or more copyback buffers.
METHODS AND APPARATUS FOR IMPROVING THROUGHPUT OF CACHE-BASED EMBEDDED PROCESSORS BY SWITCHING TASKS IN RESPONSE TO A CACHE MISS

CROSS REFERENCE TO RELATED APPLICATION

This application claims the benefit of provisional application Serial No. 60/315,655, filed August 29, 2001, which is hereby incorporated by reference in its entirety.

FIELD OF THE INVENTION

The present invention relates to digital processing systems and, more particularly, to methods and apparatus for improving processor performance by switching tasks in response to a cache miss.

BACKGROUND OF THE INVENTION

Embedded processors, such as those used in wireless applications, may include a digital signal processor, a microcontroller and memory on a single chip. In wireless applications, processing speed is critical because of the need to maintain synchronization with the timing of the wireless system. Low-cost embedded processor systems face unique performance challenges, one of which is the constraint to use low-cost, slow memory while maintaining high throughput. In the example of wireless applications, a digital signal processor (DSP) is often employed for computation-intensive tasks. In this system, low-cost, off-chip flash memory forms the bulk storage capacity of the system. However, the flash memory access time is much longer than the minimum cycle time of the digital signal processor. To achieve high performance on the DSP, it should execute from local memory, which is much faster than the off-chip flash memory. Embedded processor systems may implement the local memory with some form of fill-on-demand cache memory control instead of or in addition to simple RAM, which requires another processor or a direct memory access (DMA) controller to load code and/or data into the local memory prior to or after the processor requires the code and/or data. When the DSP encounters a cache miss, the cache hardware must fill a cache line from the slower memory in the memory hierarchy. This fill-on-demand aspect of the cache often means that the DSP is stalled while all or part of the cache line is filled. Accordingly, there is a need for methods and apparatus for improving the throughput of cache-based embedded processors.

SUMMARY OF THE INVENTION

According to a first aspect of the invention, a method is provided for operating an embedded processor system that includes a processor and a cache memory. The method comprises filling one or more lines of the cache memory with data associated with a first task, executing the first task, and, in response to a cache miss during execution of the first task, performing a cache line fill operation and, during the cache line fill operation, executing a second task.

According to another aspect of the invention, an embedded processor system comprises a cache memory for storing data associated with a first task, and a processor for executing the first task. The cache memory includes a cache controller for detecting a cache miss, for performing a cache fill operation in response to the cache miss, and for generating a cache miss notification. The processor, in response to a cache miss notification during execution of the first task, executes a second task during the cache fill operation.

BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the present invention, reference is made to the accompanying drawings, which are incorporated herein by reference and in which: FIG.
1 is a simplified block diagram of a prior art embedded processor system; FIG. 2 is a simplified block diagram of an embedded processor system in accordance with an embodiment of the invention; FIG. 3 is a block diagram of an embodiment of the cache memory shown in FIG. 2; and FIG. 4 is a flow diagram of a routine implemented by the cache controller in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

A block diagram of a prior art digital processing system is shown in FIG. 1. A processor such as a digital signal processor (DSP) 10 and a cache memory 12 are located on a single processing chip 14. Cache memory 12 may be an instruction cache or a data cache. Some systems may include a data cache and an instruction cache. An off-chip flash memory 20 is coupled to cache memory 12. Processing chip 14 may include other components, such as an on-chip memory, a microcontroller for executing microcontroller instructions, a direct memory access (DMA) controller and various interfaces to off-chip devices. The cache memory 12 and the flash memory 20 form a memory hierarchy in which cache memory 12 has relatively low latency and relatively low capacity, and flash memory 20 has relatively high latency and relatively high capacity. In operation, DSP 10 executes instructions and accesses data and/or instructions in cache memory 12. The low latency cache memory 12 provides high performance except when a cache miss occurs. In the case of a cache miss, a cache line fill operation is required to load the requested data from flash memory 20. The time required to load a cache line from flash memory 20 may be several hundred clock cycles of DSP 10. During the line fill operation, the DSP 10 is stalled, thereby degrading performance.

A simplified block diagram of a digital processing system in accordance with an embodiment of the invention is shown in FIG. 2. Like elements in FIGS. 1 and 2 have the same reference numerals. An example of a suitable DSP is disclosed in PCT Publication No. WO 00/687783, published November 16, 2000. However, the invention is not limited to any particular digital signal processor. Further, the DSP 10 may be replaced by a microcontroller, a general purpose microcomputer or any other processor.

According to a feature of the invention, instead of stalling the DSP 10 for the duration of the cache line fill operation, the DSP 10 is redirected to execute an alternative software task, such as an interrupt service routine (ISR). Processing of the first software task can resume at a later time, when the cache line fill operation has completed. Referring to FIG. 2, a cache miss interrupt generator 30 detects a cache line fill operation, wherein cache memory 12 performs a cache line fill operation from flash memory 20, and generates an interrupt to DSP 10. In response, DSP 10 executes a second software task during the cache line fill operation. The disclosed method enhances performance by utilizing processor time in which the processor would otherwise be stalled waiting for completion of the cache line fill operation.

A software organization wherein the software is organized as multiple independent threads, which are managed by an operating system (OS) scheduler, can also take advantage of this approach. In this case, a new software thread may be started during the cache line fill operation. The multithreaded software organization can be viewed as a more general superset of the main routine/interrupt service routine model.
The main/ISR model effectively includes two software threads, and the processor interrupt hardware functions as the task scheduler. The elements of a system employing this approach are: (1) a processor with a much faster cycle time than the memory subsystems it accesses; (2) a processor sequencer organization which, upon recognizing an interrupt assertion of higher priority than the current task, aborts the instructions which have already entered the instruction pipeline and redirects instructions fetched to the new task. This functionality allows a load operation to start and to generate a memory access, but then be aborted, allowing another task to start; (3) code and/or data caches between the processor and the slower memory subsystems; and (4) software modularity such that independent tasks (e.g., interrupt processing or multiple threads) are available to run on the processor at any time.

The system may optionally include circuitry to signal the operating system that a cache miss has occurred, allowing the operating system to start the next pending software task/thread. Without this circuitry, the processor stalls on a cache miss in the conventional way, unless an unrelated interrupt occurs while the processor is stalled. With the additional circuitry, the system can guarantee that the interrupt will always be taken on a cache miss. Another option is to include address range checking circuitry, such that the interrupt on a cache miss is generated only if the memory address associated with the cache miss is within a specified address range. The address range may be fixed or programmable.

As an optional enhancement in embedded systems with multiple memory subsystems with different access latencies (e.g., off-chip flash memory and on-chip SRAM memory), the cache can employ multiple line fill and copyback buffers to further enhance overall throughput. This enhancement also requires either separate buses between the cache controller and each of the memory systems, or a common bus employing out-of-order line fill protocols (e.g., bus data tagging).

Referring again to FIG. 2, when the DSP 10 generates a memory access which misses the cache memory 12, but is cacheable, the cache controller generates a cache line fill operation to the off-chip flash memory 20. The access time to fetch the entire cache line from flash memory can be hundreds of processor cycles. The cache miss interrupt generator 30 determines that a cache line fill operation has been requested by the cache controller and generates an interrupt to DSP 10. Since the DSP 10 aborts the instructions in the pipeline upon detection of an interrupt, it aborts the instruction which generated the cache line miss and begins execution of the interrupt service routine. The interrupt service routine determines the next appropriate step. For example, the ISR may determine that a high priority task, which is resident in the local memory system, is available to run. As long as the ISR hits in the local cache (or, as is often the case, the ISR executes out of local RAM, which is accessed in parallel with the local cache), the DSP 10 is not stalled for the lengthy time required to complete the cache line fill operation. When the ISR has run to completion, execution returns to the lower priority task which generated the cache miss.
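The ISR-on-cache-miss flow can be caricatured in software as follows. This is purely an illustrative sketch: the `Task` descriptor, the resident task table, and the ISR entry point are hypothetical, and on real hardware the redirect and the return to the interrupted task are performed by the processor's interrupt logic rather than by portable C++.

```cpp
#include <cstdio>

// Hypothetical descriptor of a software task that is resident in local
// memory and therefore safe to run while a cache line fill is in progress.
struct Task {
    void (*run)();
    bool ready;
};

void highPriorityWork() { std::puts("running resident task"); }
Task residentTasks[] = {{highPriorityWork, true}};

// Sketch of the ISR invoked via the cache miss interrupt generator 30:
// run a ready resident task instead of stalling while the cache
// controller fills the line from flash memory 20. Returning from the
// ISR resumes the lower priority task that generated the miss.
extern "C" void cacheMissIsr() {
    for (Task& t : residentTasks) {
        if (t.ready) {
            t.ready = false;
            t.run();  // useful work performed during the line fill
            break;
        }
    }
}
```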
In the more general multithreaded software model, the interrupt invokes the operating system scheduler, which then passes execution to the current highest priority software thread which can run in the available local memory resources. That software thread either (a) runs to completion, or (b) is preempted by the scheduler at some point, such that another thread can run, such as the thread that was preempted on the cache miss, assuming that the cache line fill operation has now been completed.

A block diagram of an embodiment of cache memory for implementing the present invention is shown in FIG. 3. The cache memory of FIG. 3 corresponds to the cache memory 12 and the cache miss interrupt generator 30 of FIG. 2. As is conventional, the cache memory includes a tag array 100, a data array 102, hit/miss logic 104, a store buffer 106 and a write buffer 108. The cache memory further includes a cache controller 110 having circuitry for generating a cache miss signal, one or more line fill buffers 112A and 112B and one or more copyback buffers 114A and 114B. The cache memory may further include an address range compare circuit 120.

When a read access is generated by DSP 10 during execution of a first task or thread, the read address is supplied to hit/miss logic 104. The tag array 100 stores upper address bits to identify the specific address source in memory that the cached line represents. The tags are compared with the read address to determine whether the requested data is in the cache. In the case of a hit, the read data is supplied to the DSP 10. In the case of a miss, a miss signal is supplied to cache controller 110 and a cache line fill operation is initiated. In the cache line fill operation, a cache line containing the requested data is read from flash memory 20. The cache line is loaded into tag array 100 and data array 102 through line fill buffer 112A, 112B and is available for use by DSP 10. In the case of a cache miss, cache controller 110 supplies a cache miss signal to DSP 10 to initiate execution of a second task or thread by DSP 10. In the case of a cache miss, the cache line that is replaced may be copied to flash memory 20 through copyback buffer 114A, 114B. Optionally, the cache memory may include two or more line fill buffers 112A, 112B and two or more copyback buffers 114A, 114B for enhanced performance in executing a second software task during the cache line fill operation.

Address range compare circuit 120 may optionally be provided to limit the address range over which a second task is executed during the cache line fill operation. In particular, the address range compare circuit 120 receives an upper address limit and a lower address limit, which may be fixed or programmable. Address range compare circuit 120 also receives the memory load address supplied to flash memory 20 in the case of a cache line fill operation. The address range compare circuit 120 may be configured to determine if the memory load address is between the upper address limit and the lower address limit, either inclusively or exclusively. In another approach, address range compare circuit 120 may determine if the memory load address is outside the range between the upper address limit and the lower address limit. In any case, if a specified comparison criterion is satisfied, a signal is supplied to cache controller 110 to enable the cache miss signal to be supplied to DSP 10.
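The address range compare circuit 120 admits a compact software analogue. The function below mirrors the inclusive/exclusive and inside/outside alternatives just described; the 32-bit address width, the enum, and the function name are assumptions for illustration.

```cpp
#include <cstdint>

// Software analogue of address range compare circuit 120. Returns true
// when the memory load address of a cache line fill meets the selected
// comparison criterion, enabling the cache miss signal to the DSP.
enum class RangeMode { InsideInclusive, InsideExclusive, Outside };

bool addressMeetsCriterion(std::uint32_t loadAddr,
                           std::uint32_t lowerLimit,  // fixed or programmable
                           std::uint32_t upperLimit,  // fixed or programmable
                           RangeMode mode) {
    switch (mode) {
        case RangeMode::InsideInclusive:
            return loadAddr >= lowerLimit && loadAddr <= upperLimit;
        case RangeMode::InsideExclusive:
            return loadAddr > lowerLimit && loadAddr < upperLimit;
        case RangeMode::Outside:
            return loadAddr < lowerLimit || loadAddr > upperLimit;
    }
    return false;
}
```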
A flow chart of a routine for improving processor performance by switching tasks in response to a cache miss operation is shown in FIG. 4. In step 200, the processor (DSP 10) executes task A by referencing operands and/or instructions in cache memory 12. In step 202, cache memory 12 determines if a cache miss has occurred. If a cache miss has not occurred, the processor continues to execute task A in step 200. In the case of a cache miss, cache memory 12 begins a cache line fill operation in step 204. The cache line fill operation loads a cache line containing the requested data from the flash memory 20 into cache memory 12. In step 206, the address range compare circuit 120 in cache memory 12 compares the cache miss address to a selected address range as described above. In step 208, a determination is made as to whether the cache miss address meets a specified address range comparison criterion. If the cache miss address does not meet the address range comparison criterion, the processor waits for the cache line fill operation to complete in step 210 and returns to execution of task A in step 200. If the cache miss address meets the address range comparison criterion, the processor is notified to change tasks in step 212. With reference to FIG. 3, cache controller 110 sends a cache miss signal to DSP 10. The processor then executes task B in step 214 during the cache line fill operation. It will be understood that steps 206, 208 and 210 associated with address range comparison are optional in the process of FIG. 4. Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description and drawings are by way of example only. What is claimed is:
A pipeline of memory banks is configured to store and retrieve a forwarding address by distributing portions of the address across the memory banks and subsequently searching for the distributed values. A first value of the address is recoverable by searching for a value, stored by a first memory bank, by consuming a predetermined number of bits of a data unit from a data packet. If present, a subsequent value of the address is recoverable by searching another memory bank of the pipeline for a value of the address contained by a node of a linked list. The pipeline recovers the address by combining the value found at the first memory bank with the value found at the node of the linked list at the other memory bank.
What is Claimed is: 1. A network device including a search engine, the search engine comprising: a pipeline of multiple memory banks including a first memory bank and at least one subsequent memory bank, the first memory bank partitioned to store values of a data structure corresponding to a portion of a network address, and the at least one subsequent memory bank partitioned to store any of other values corresponding to the portion of the network address and a node of a linked list, the node of the linked list comprising an element corresponding to the portion of the network address and a pointer to another node of the linked list of a third subsequent memory bank; one or more memory controllers configured to perform a first search, of a series of searches, of the values stored by the first memory bank and, responsively to the first search, to perform a second search, of the series of searches, by searching either the other values stored in the at least one subsequent memory bank or the node of the linked list. 2. The search engine of claim 1, wherein the one or more memory controllers are further configured to consume a first portion of a data unit during the first search and to consume, when the one or more memory controllers are to perform the second search of the other values stored in the subsequent memory bank, a second portion of the data unit during the second search based on results of the first search. 3. The search engine of claim 1, wherein the one or more memory controllers are further configured to perform a subsequent search, of another series of searches, of the values stored by the first memory bank during the same clock cycle as the second search. 4. The search engine of claim 3, wherein the one or more memory controllers are further configured to perform, during the same clock cycle, the second search of the other values and a traversal of the linked list in response to a result of the subsequent search. 5. The search engine of claim 1, wherein a most subsequent one of the one or more memory controllers is further configured to combine the result of the first search and a result of the second search into the network address. 6. A match engine device method, comprising: performing a series of partial searches, in a search pipeline, on a packet of a stream of packets, each of the series of partial searches occurring in at least one of multiple memory banks arranged in the search pipeline; and completing a search when a last resource, subsequent to all other resources, of the search pipeline is searched and a next address has been determined by which to forward the packet. 7. The match engine device method of claim 6, further comprising: receiving a data unit of the packet at a first memory bank; extracting a first portion of the data unit; performing a first partial search, of the series of partial searches, at the first memory bank by using the first portion of the data unit; outputting an indication, of the first partial search of the first memory bank, and the data unit to a subsequent one of the memory banks; performing a second partial search at the subsequent one of the memory banks; determining that the first partial search points to a linked list distributed among subsequent ones of the multiple resources; and completing a search by traversing the linked list. 8.
The match engine device method of claim 7, further comprising performing another first partial search, of another series of partial searches, for another packet of the series of packets at the first resource during a same clock cycle as the second partial search. 9. The match engine device method of claim 7, wherein the extracting the first portion of the data unit comprises extracting a predetermined number of bits of the data unit. 10. The match engine device method of claim 6, further comprising: receiving a plurality of subsequent packets supplied to the search pipeline, each subsequent packet being respectively supplied at a next clock cycle; and writing to at least one of the multiple resources in response to at least one of the subsequent packets. 11. The match engine device method of claim 6, further comprising: receiving, by at least one of the resources, a request to re-configure a plurality of memory positions of a plurality of stored values; and re-configuring the plurality of memory positions. 12. A match engine device, comprising: a memory space including a communicatively coupled series of separate memory devices, each memory device of the separate memory devices storing a multiplicity of values; a memory space front end configured to receive a data unit corresponding to a packet received from a network, to generate a first value corresponding to the data unit, to search a first separate memory device of the series of separate memory devices to determine whether the first value corresponds to a value stored therein, and to output an indication of a result of the search; a transfer logic configured to generate a respective different value corresponding to the data unit for respective ones of the subsequent separate memory devices based on results of searching at one or more previous memory devices, to search the subsequent separate memory device to determine whether the respective different value corresponds to a stored value in the respective subsequent separate memory device, and to output an indication of a result of the search. 13. The match engine device of claim 12, wherein, when the transfer logic determines that a position indicated by the first value matches a position of the stored value of the first separate memory device, the stored value is redirected to another position within the first separate memory device. 14. The match engine device of claim 12, wherein, in response to determining that a position indicated by the respective different value matches a position of the stored value of the respective subsequent physically separate memory device, the stored value is redirected to another position to be stored in at least one of the plurality of separate memory devices. 15. The match engine device of claim 12, wherein one of the plurality of separate memory devices comprises a node of a linked list representing both a value of a forwarding address and a pointer to another node of the linked list stored by another one of the plurality of separate memory devices. 16. The match engine device of claim 12, wherein each of the plurality of separate memory devices is physically separated from each of the other memory devices.
INTERNAL SEARCH ENGINE ARCHITECTURE CROSS-REFERENCE TO RELATED APPLICATION [1] This application claims priority to and the benefit of both U.S. Provisional Patent Application No. 61/830,786, filed June 4, 2013, and U.S. Provisional Patent Application No. 61/917,215, filed December 17, 2013, the disclosures of which are incorporated by reference herein in their entirety. BACKGROUND [2] The present disclosure relates to a network device that processes packets. [3] The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventors, to the extent it is described in the background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure. [4] As part of their packet processing, network devices perform various search operations including exact match (EM) searching and longest prefix matching (LPM) on packet data. Conventionally, memory space for these searching and matching operations is comparatively poorly utilized. Moreover, the efficient management of the memory space for EM and LPM searching can be challenging. Conventional methodologies for classification typically require elaborate and expensive memory components and can be overly time consuming for some applications. SUMMARY [5] One or more example embodiments of the disclosure generally relate to storing and retrieving network forwarding information, such as forwarding of OSI Layer 2 MAC addresses and Layer 3 IP routing information. A pipeline of memory banks stores the network addresses as a pipelined data structure. The pipelined data structure is distributed as values of a data structure, separately searchable, at each of the memory banks. [6] A first memory bank of the pipeline stores values of the data structure, and each value stored at the first memory bank points to a next memory bank containing a next value of the data structure. BRIEF DESCRIPTION OF THE DRAWINGS [7] Fig. 1 shows a pipeline of memory banks according to example embodiments. [8] Fig. 2 shows a data value distributed among memory banks of a pipeline according to example embodiments. [9] Fig. 3 shows a pipeline of memory banks containing EM tables and buckets according to example embodiments. [10] Fig. 4 shows a flow diagram of an example method according to example embodiments. [11] Fig. 5 shows a pipeline of memory banks utilizing a combination of EM engine accessible and LPM engine accessible tables according to example embodiments. [12] Fig. 6 is a flow diagram of an example method according to example embodiments. DETAILED DESCRIPTION [13] In the following discussion, descriptions of well-known functions and constructions are omitted for increased clarity and conciseness. [14] Fig. 1 shows a network 100, a network device 110, and a pipeline 200. The network device 110 is configured to buffer any received packet and to send the packets to the pipeline 200. The pipeline 200 includes a processor core 201, a memory bank 211, a memory bank 221, and a memory bank 231. [15] The network device 110 is configured to request the pipeline 200 to perform resource intensive lookup operations, such as a plurality of searches for values of one or more packet forwarding addresses, in response to receiving a packet, such as packet 101, from a network 100.
Further, the network device 110 is configured to receive a serial stream of packets (not shown), where at least a portion of one or more packets of the stream, after a processing operation at the network device 110, is used by the network device 110 to perform lookup operations on the received serial stream of packets. [16] The pipeline 200 comprises a plurality of memory banks, memory bank 211, memory bank 221, and memory bank 231, each of the memory banks being configured to perform at least one lookup operation per clock cycle, in an embodiment. Accordingly, a total number of lookup operations during a single clock cycle is directly proportional to the total number of memory banks, in an embodiment. [17] Further, each memory bank of the pipeline 200 is configured to perform a respective lookup operation for a respective packet of the serial stream of packets and to both pass an indication of the result to a next memory bank and also receive a next packet of the serial stream of packets. Accordingly, a total number of packets, or portions of packets, for which the pipeline 200 is configured to search at a distinct clock cycle is directly proportional to the total number of memory banks, in an embodiment. [18] According to an example embodiment, the processor core 201 is configured to process data packets received from the network, to selectively send data to the controller 210 of pipeline 200 and to receive data from controller 230. The received data from controller 230 indicates a result of a plurality of lookup operations, each one of the lookup operations being performed at a respective one of the memory banks. [19] The network device 110 includes a packet processor (not shown) which is configured to send a data unit to the pipeline 200 for an inquiry regarding a network packet forwarding operation such as finding a next location for forwarding and/or routing. The pipeline 200 is a resource external to the packet processing engine of the network device 110 and is configured to perform resource intensive processing operations, such as an address lookup, as part of packet processing. [20] According to example embodiments, processing core 201 includes a pipeline of programmable processors, multiple processing engines that are arranged as an ASIC pipeline, or a multiplicity of non-pipelined run-to-completion processors. [21] The pipeline 200 is typically part of the network device and functions as a resource that is used by the packet processor for various address lookup operations. In accordance with an embodiment, although part of the network device, pipeline 200 is external to the packet processor. [22] The pipeline 200 allows for both storage and retrieval of a plurality of values of a network forwarding and/or routing address stored as a data structure which is distributed among memory banks of a pipeline. As used herein, the terms forwarding address and routing address are used interchangeably. The pipeline processor is configured to store, to search for, and to re-combine the distributed values found by searching each component of the data structure, in an embodiment. [23] According to an example embodiment, the pipeline 200 forms a data structure corresponding to a respective one of the forwarding addresses and each memory bank stores a portion of the data structure. As such, a first portion of the data structure is stored at a first memory bank, and subsequent memory banks each respectively store subsequent portions of the data structure.
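As a rough illustration of this distribution, the following C sketch models one portion of the data structure as held by a single bank, together with the re-assembly step; all field names, widths, and the 32-bit address assumption are illustrative, not taken from the disclosure.

```c
/* Minimal sketch of a forwarding address distributed across the banks
 * of the pipeline; every name and width here is an illustrative
 * assumption (a 32-bit address is assumed for brevity). */
#include <stdint.h>

enum next_kind { NEXT_STRIDE, NEXT_PREFIX, NEXT_DONE };

/* One portion of the distributed data structure, as stored by a single
 * memory bank: the bits recovered at this bank plus an indication of
 * where the next portion is to be searched. */
typedef struct {
    uint32_t       value;      /* forwarding-address bits held at this bank */
    uint8_t        value_len;  /* number of address bits in 'value' */
    enum next_kind next;       /* search a stride or prefix table next, or stop */
    uint16_t       next_index; /* position to search in the next bank */
} bank_portion;

/* Re-assembling the address visits one portion per bank, in order. */
uint32_t reassemble(const bank_portion *portions, int nbanks)
{
    uint32_t addr = 0;
    for (int i = 0; i < nbanks; i++) {
        addr = (addr << portions[i].value_len) | portions[i].value;
        if (portions[i].next == NEXT_DONE)
            break;
    }
    return addr;
}
```

Because each bank holds only one such portion, a lookup touches each bank once and then releases it to the next search, which is what permits the parallel partial searches described in the text.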
Retrieving the data structure requires traversing the ones of the memory banks containing the portions. [24] Both memory compactness and efficient retrieval are achieved by distributing the portions of the data structure among the memory banks through balancing stride tables and prefix list tables by where they are most effective. Stride tables are better where stored data has high entropy, and prefix list tables are better where stored data has low entropy. [25] The decreased searching complexity is directly related to splitting the forwarding address into a distributed data structure. Each portion of the data structure requires less space and less computational complexity for retrieval than would a non-split forwarding address or a non-pipelined data structure, because each portion of the data structure is stored at a respective partitioned area of a memory bank. [26] According to an example embodiment, each partitioned area of the memory bank is a physically separate memory bank having a separate input/output bus. According to an example embodiment, the respective partitioned areas are logical partitions of a same physical memory bank. [27] The partitioned area is addressable and decreases the computational complexity which would be expected if all available memory positions of a non-partitioned memory bank were available for storing the portion of the data structure. Furthermore, an operation of retrieving each portion of the data structure is simplified since each memory bank contains only a portion of the data structure, thereby allowing a first operation, regarding a first forwarding address, to search the memory bank 211 for the partial component of the data structure and then move on to the next memory bank 221 while a second operation, regarding some other search, is allowed access to the first memory bank 211. Accordingly, the pipeline 200 allows for a plurality of partial searches, each for a different forwarding address, to be performed in parallel. [28] Fig. 1 shows the lookup pipeline 200 to include a controller 210, a controller 220, and a controller 230 communicatively coupled to the processor core 201 and also communicatively coupled to each of memory bank 211, memory bank 221, and memory bank 231, respectively. Furthermore, there are a plurality of memory banks and respective controllers (not shown) between memory bank 221 and memory bank 231, according to example embodiments. [29] The memory bank 211 includes at least one partitioned area, such as a stride table 215. A stride table includes at least one function to be combined with a portion of a data unit. When the function and data unit are combined, the portion of the data unit is consumed. Consumption of the data unit is subsequently used to determine a value stored by the stride table. In other words, the stride table 215 is configured to reveal a value of a data structure when its local function, a key function, is combined with a data unit, thereby completing the key. [30] According to an example embodiment, a stride table comprises 16 stride value entries. A stride table comprises a block address which points to a prefix list from the current stride table. The most significant bits of a block address which points to a prefix list are shared among all stride value entries pointing to a prefix list from the current stride table. [31] A stride table also comprises a block address which points to another stride table from the current stride table.
The most significant bits of a block address which points to another stride table are shared among all stride value entries pointing to a stride table. [32] A stride table also comprises a 20-bit next hop pointer belonging to a previous stride value entry. According to an example embodiment, when the next hop pointer is 0xFFFFF, the next hop pointer is invalid and the next hop pointer from a previous stride is used. [33] According to an example embodiment, a local function of the stride table includes a number of bits of an incomplete value. The portion of the data unit, another number of bits, is combined with the local function to indicate a position within the stride table, the position containing a value of the data structure. Further, when another portion of the data unit or a portion of another data unit combines a number of its bits with those of the local function of the stride table, a different position within the stride table is indicated. [34] The memory bank 221 includes at least two logical partitions respectively including a stride table 225 and a prefix table 226. [35] The stride table 225 includes a second local function to be combined with a data unit to reveal a second value of a data structure. This data unit is a non-consumed part of the data unit used by stride table 215. This second value of the data structure refers to a second portion of a forwarding address subsequent to a first portion of a forwarding address. In an embodiment, the forwarding address is a layer 3 routing address; however, in other embodiments the forwarding address is a layer 2 address. [36] The prefix table 226 contains a plurality of nodes of a plurality of linked lists. Each node contains both an element corresponding to one or more respective forwarding addresses as a portion of a data structure, and a pointer to a node contained by a prefix table of another memory bank. One of the nodes, when searched, reveals a second portion of the data structure corresponding to a forwarding address portion subsequent to the first portion of the forwarding address, where the first portion of the forwarding address is found at stride table 215. [37] The memory bank 231 includes two partitioned areas respectively including a stride table 235 and a prefix table 236. [38] The stride table 235 includes a third local function to be combined with a data unit to reveal a third value of a data structure. This third value of the data structure refers to a third portion of a forwarding address subsequent to the second portion of the forwarding address, where the second portion of the forwarding address is found at stride table 225. [39] The prefix table 236 contains a plurality of nodes of a plurality of linked lists. One of the nodes, when searched, reveals a third portion of the data structure corresponding to a forwarding address portion subsequent to a second portion of the forwarding address, where the second portion of the forwarding address is found at the stride table 225. [40] According to an example embodiment, a portion of a data unit 111, such as a portion of a packet 101 received by a network device 110 from a network 100, is used to activate a function of stride table 215. A portion of the data unit 111 combines with a function, such as a key, stored by the stride table 215 to determine a value of the data structure corresponding to a forwarding address. Hereinafter, combining a portion of the data unit 111 with a function of a memory bank is referred to as "the portion."
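The stride table layout just described (16 stride value entries, block-address most significant bits shared among the entries, and a 20-bit next hop pointer with 0xFFFFF denoting an invalid pointer) can be sketched as follows; the exact field packing is an illustrative assumption rather than the disclosed hardware format.

```c
/* Sketch of one stride table following the layout in the text; field
 * packing and names are illustrative assumptions. */
#include <stdint.h>

#define STRIDE_ENTRIES   16
#define NEXT_HOP_INVALID 0xFFFFFu  /* fall back to the previous stride's pointer */

typedef struct {
    uint8_t  points_to_prefix;  /* nonzero: entry points into a prefix list */
    uint16_t block_offset;      /* low bits of the target block address */
    uint32_t next_hop;          /* 20-bit next hop pointer of the previous entry */
} stride_entry;

typedef struct {
    uint32_t     prefix_block_msb;  /* MSBs shared by entries pointing to prefix lists */
    uint32_t     stride_block_msb;  /* MSBs shared by entries pointing to stride tables */
    stride_entry entry[STRIDE_ENTRIES];
} stride_table;

/* Consuming four bits of the data unit selects one of the 16 entries. */
static inline const stride_entry *stride_lookup(const stride_table *t,
                                                uint8_t nibble)
{
    return &t->entry[nibble & 0xF];
}
```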
Each entry in a prefix list contains all remaining portions of the forwarding address, after initial portions have been consumed by stride tables. [41] According to an example embodiment, either a next portion of the data unit 111 is consumed by the stride table 225 of memory bank 221, or a node of a linked list stored by a prefix table 226 is searched. Hereinafter, it is noted that a first portion of the data structure corresponding to a forwarding address is stored as a first type of data, such as that stored by a stride table or an exact match table, and a second portion of the same data structure is stored by a prefix table. [42] Therefore, an operational complexity of retrieving a network address from a pipelined data structure, as herein described, is decreased by utilizing and mixing attributes of searching stride tables, exact match tables, and prefix tables. The operational complexity refers to finding some value within a predetermined number of operations. Accordingly, as the data structure is distributed among respective partitions of a plurality of memory banks, each partition becomes more compact and easier to maintain, thereby decreasing the operational complexity, as described above. [43] According to an example embodiment, the pipeline 200 is configured to store a remainder of a data structure as nodes of a linked list distributed among prefix tables when a stride table utilization level reaches a predetermined and configurable threshold. [44] According to an example embodiment, a value stored by a stride table is determined and used to direct the pipeline 200 to search another stride table or a prefix table, and a value stored by a prefix table is determined and then used to direct the pipeline 200 to search any other prefix table found at any subsequent one of the memory banks of the pipeline 200. [45] Hereinafter, the terms "previous" and "subsequent" will be used as follows: Memory bank 221 and memory bank 211 are "previous" to memory bank 231. Memory bank 211 is "previous" to memory bank 221, and memory bank 231 is "subsequent" to memory bank 221. Memory bank 221 and memory bank 231 are "subsequent" to memory bank 211. [46] Each of the controllers, controller 210, controller 220, and controller 230, is configured to control read and write access, by hardware, software, or a combination thereof, to a respective one of memory bank 211, memory bank 221, and memory bank 231. The memory banks are further configured to interchange their stored data to balance storage compaction with the complexity of the searching mechanisms. According to an example embodiment, more or fewer than the illustrated number of controllers are configured to determine when a search and read operation is requested to be performed at a respective memory bank of the pipeline 200 and to determine the address to read. [47] Each of the stride tables, stride table 215, stride table 225, and stride table 235, is configured such that a portion of an address, searchable by a longest prefix match mechanism, is stored in a distributed or pipelined manner. [48] In an example embodiment, "pipelined" refers to locations which are physically separated and, in this case, memory banks which are capable of storing respective parts of a forwarding address as a data structure. Subsequent retrieval of such pipelined data requires traversal of each memory bank containing the values; when each of the values is retrieved, the reassembled value corresponds to a forwarding address.
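A prefix table node of the kind described above can be sketched as a linked-list element whose pointer crosses banks; the field names and widths below are illustrative assumptions.

```c
/* Sketch of a prefix table node: an element holding remaining
 * forwarding-address bits plus a pointer into a prefix table of a
 * subsequent bank.  Names and widths are illustrative assumptions. */
#include <stdint.h>

typedef struct {
    uint32_t element;     /* remaining forwarding-address bits held here */
    uint8_t  elem_len;    /* number of valid bits in 'element' */
    uint8_t  is_last;     /* nonzero on the final node of the list */
    uint8_t  next_bank;   /* index of the subsequent memory bank */
    uint16_t next_index;  /* node position within that bank's prefix table */
} prefix_node;

/* Following the list visits one node per subsequent bank; the caller
 * appends each element's bits until a node with is_last set is reached. */
```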
Further, each memory bank includes a mechanism indicating where to search for a value in a next memory bank. [49] A value of a stride table, a table containing values each corresponding to a portion of a respective forwarding address, is searchable by consuming a predetermined number of bits of a data unit, such as a packet or portion of a packet received from a network. Accordingly, the values stored by the memory banks of the pipeline 200 have a predetermined correspondence to at least a part of a respectively received data unit. [50] A value of a prefix table comprises a node of a linked list containing both an element corresponding to a portion of a respective forwarding address and a pointer to a node contained by a prefix table of another memory bank. [51] According to example embodiments, a network device 110 receives a packet 101, of a series of packets (not shown), from a network 100 and subsequently performs various packet processing operations on the packet at a packet processor (not shown) associated with the network device. During processing, some of the operations, such as address lookups, are provided by dedicated engines associated with the network device external to the packet processor. The external operations supply a data unit resulting from the packet processing operation, such as an extracted data unit of an Open Systems Interconnection (OSI) Layer 2 MAC addressing or OSI Layer 3 IP routing protocol, to the pipeline 200. The data unit 111 is a portion corresponding to the packet 101 or a value translated from a portion of the packet 101, such as the above described OSI layer information. The data unit 111 is used to determine that a memory bank 211 is to be searched for a first portion of the data structure. According to an example embodiment, when the controller 210 searches the stride table 215 of memory bank 211, the value used in addressing the stride table 215 of the memory bank 211 is not used with a local hash function for directly determining a value of the data structure. Subsequently, the controller 220 determines that stride table 225 of memory bank 221 contains a next value by consuming another one of the portions of the data unit 111. According to an example embodiment, both the value addressing the memory bank 211. [52] The controller 210 extracts information corresponding to a portion of the data structure from the stride table 215 of the memory bank 211 by allowing a portion of the data unit 111 to be consumed by a local function of the stride table 215. [53] Upon consuming the portion of the data unit 111 and extracting the value from the stride table 215, the controller 210 determines that a next portion of the desired forwarding address is found at memory bank 221 by determining either that a next portion of the data unit 111 is to be consumed by the stride table 225 or that a next portion of the data structure is searchable at prefix table 226. [54] According to an example embodiment, the network device 110 is configured to receive a serial stream of packets (not shown), each over a finite time period, and to transmit at least a respective data unit 111 to the pipeline 200. [55] The processing core 201 extracts a serial stream of data units, in an embodiment, each data unit causing the packet processor of the network device 110 to initiate a search for a respective forwarding address stored as a respective distributed data structure among the memory banks 211-231 of pipeline 200.
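The per-clock-cycle movement of this serial stream, elaborated immediately below, can be modeled by a simple shift of data units through the bank controllers; this is an illustrative software model, not register-transfer logic from the disclosure.

```c
/* Illustrative model of per-clock advancement through the bank
 * controllers: every cycle each controller hands its data unit to the
 * next bank while the first bank accepts a new unit from the stream.
 * NBANKS and the stage contents are assumptions. */
#include <stddef.h>

#define NBANKS 8

typedef struct { int unit_id; int valid; } stage;

static stage pipe[NBANKS];

void clock_tick(int incoming_unit, int incoming_valid)
{
    for (size_t i = NBANKS - 1; i > 0; i--)
        pipe[i] = pipe[i - 1];           /* partial result and unit move on */
    pipe[0].unit_id = incoming_unit;     /* next packet's unit enters the pipeline */
    pipe[0].valid = incoming_valid;
}
```

At full capacity every controller holds a different data unit, so the number of lookups in flight equals the number of memory banks.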
Each data unit is used by a respective one of the controllers at each subsequent memory bank at a next clock cycle, allowing for respectively parallel processing of the serial stream of packets such that a first lookup occurs during the same clock cycle as a second lookup, each lookup being for a respective one of the packets of the serial stream of packets. Hereinafter, it is noted that operations involve parallel processing, and a number of reading and writing operations available per clock cycle is directly related to the number of memory banks when at full operational capacity, that is, when each respective controller is searching one of the memory banks. Further, parallel processing refers to performing, during the same clock cycle, a plurality of lookups, each of the plurality of lookups occurring at a respective one of the memory banks and each of the plurality of lookups corresponding to a lookup to find a value of a respective data structure. [56] According to the determination by the controller 210 with respect to the data unit 111 and a result of searching the memory bank 211, an indication is sent to the controller 220 to continue the searching operations through the stride table 225 or the prefix table 226. [57] Likewise, the controller 220 determines when a value of the stride table 225 indicates a requirement to search the stride table 235 or the prefix table 236 of the subsequent memory bank 231. [58] Further, the controller 230 receives an indication from a previous controller regarding a required search of the stride table 235 or the prefix table 236. [59] According to example embodiments, when a last value of a linked list, such as a last value of the data structure or a null node of a linked list, is found at one of the respective prefix tables, the overall next forwarding address is carried out along one of the following paths: traveling along the remainder of the pipeline until network access is available, or immediately exiting the pipeline for network forwarding or further processing, such as dropping the packet 101 or applying a quality of service to the packet 101 or stream of serial packets. [60] Controller 210 stores values as stride table components. Controller 220 and controller 230 store values as stride table components and as nodes of a linked list based on a predetermined weighting function which is reconfigurable based on a current storage usage of the memory banks. [61] It is noted that, in an embodiment, pipeline 200 takes advantage of a processing time interval for processing received packets 101 to perform an address lookup operation. Thus, as an engine that is external to the processing core 201, the lookup pipeline 200 is configured to utilize a portion of the time interval to perform a pipelined address lookup operation in parallel to other processing operations that are performed on the received packet at processing core 201. [62] Fig. 2 illustrates the manner in which respective portions of a data unit 1410, such as an address for at least the OSI routing operation protocols described above, are stored as a corresponding data structure in various memory banks and subsequently searched. Fig. 2 shows a pipeline 200, memory bank 211, memory bank 221, and memory bank 231 in accordance with an embodiment. Each of the memory banks, memory bank 211, memory bank 221, and memory bank 231, includes a respective stride table, such as stride table 215, stride table 225, and stride table 235.
Memory bank 221 and memory bank 231 each include respective prefix tables, prefix table 226 and prefix table 236. [63] According to an example embodiment, the pipeline 200 is configured to use a number of bits of a data unit 1410, such as a header 1411, corresponding to the stored data structure, to begin a searching operation. In the embodiment, the data unit 1410 is received from a network (not shown) as part of a network routing operation. The header 1411 is determined to address the stride table 215 of memory bank 211. According to an example embodiment, the number of bits of the data unit used to begin the searching operation corresponds to the least significant bits of the data unit 1410. [64] A value of the data structure corresponding to the address portion 1413 is searchable at stride table 215 of memory bank 211. A value of the data structure corresponding to a second address portion 1414 is searchable as a value at stride table 225 and as a node of a linked list at prefix table 226. Values of the data structure corresponding to intermediate portions 1415 of the data unit 1410 are searchable at any number of immediately subsequent, intermediate memory banks of the pipeline 200 as values at stride table 225 and as a node of a linked list at prefix table 226. A value of the data structure corresponding to a final value 1416 of address 1400 is searchable as a final node of a linked list at prefix table 236. [65] According to an example embodiment, the value 1414, values 1415, and value 1416 are stored in prefix list table value entries in a single memory bank, such as at memory bank 221. According to another example embodiment, value 1414 is stored as stride table data in memory bank 221, and values 1415 and value 1416 are stored in prefix list table value entries in a single memory bank, such as a memory bank subsequent to memory bank 221. [66] Further, accessing the pipeline 200 includes first using a predetermined number of bits of a data packet, such as bits corresponding to a header 1411, as an address which directs the pipeline to search the stride table 215 of memory bank 211, as indicated by the address, for a value corresponding to the respective portion of the data structure. The pipeline 200 is further configured such that a value of stride table 215 directs the pipeline to perform a search for another value at stride table 225 and a search for a node of a linked list at prefix table 226. [67] This process continues throughout the pipeline 200 until the address is recovered by re-assembling the distributed data structure. [68] Fig. 3 shows a network 100, a network device 110, and a pipeline 200. The pipeline 200 comprises processing core 201, controller 210, controller 220, controller 230, memory bank 211, memory bank 221, and memory bank 231. [69] Memory bank 211, memory bank 221, and memory bank 231 each respectively include partitioned areas such as exact match (EM) table 115, EM table 125, and EM table 135. [70] Each address 710 of an EM table can store two full keys; for example, address 721 stores first key 711 and second key 712. These keys are combinable with portions of the data unit 111 according to embodiments described above. When a predetermined number of bits of a data unit 111 and first key 711 correspond to each other, a value, such as a portion of a network forwarding address, is output.
If the bits of the data unit 111 and the first key 711 do not correspond, the same portion of the data unit 111 is extracted and combined with the second key 712, and when the bits of the data unit 111 correspond to key 712, a value, such as a portion of a network forwarding address, is output. The bits of the data unit 111, as referred to in this paragraph, correspond to the least significant bits of the data unit 111; however, this is merely an example embodiment, and other portions of the data unit 111 are likewise utilized. [71] Accordingly, each respective controller utilizing an EM table is capable of recovering a number of values of the data structure equal to the number of keys stored by an address. Although not shown, any address of the addresses 720 of EM table 115 contains any suitable number of keys. According to an example embodiment, address 722 contains the same number of keys as address 721. Further, each of the addresses of any EM table is addressable by at least a portion of the data unit 111. [72] Fig. 3 further shows that the network device 110 receives a packet 101 from the network 100. The processing core initiates a search of the pipeline 200 by sending a data unit 111 to the lookup pipeline 200. The data unit 111 corresponds to at least a portion of the packet 101 from which a portion of a data structure is to be searched for by comparing the search data, a portion of the data unit 111, with the respective key 711 and key 712. [73] Each of controller 210, controller 220, and controller 230 is configured to search respective memory banks according to the data unit 111 and an indication of the search results from a previous memory bank. [74] According to an example embodiment, the pipeline 200 uses the data unit 111 to determine a correspondence with a local function of the EM table 115 for searching the EM table 115 of memory bank 211. Subsequent banks of the pipeline allow the result to exit the pipeline 200 when a match is found at memory bank 211. Controller 210 outputs data unit 212 and indication 213 corresponding to a result of the search of memory bank 211. [75] Controller 220 is configured to receive the data unit 212 and indication 213, and to perform a search of the stride table 225 of memory bank 221. Controller 220 is further configured to output a data unit 222 and an indication of the search and result found by controller 220. [76] Controller 230 is configured to receive a data unit 228 and an indication 229 of the search found at a previous memory bank. The data unit 228 and indication 229 are received by the controller 230 from a controller (not shown) disposed between controller 220 and controller 230. The controller 230 performs a search of EM table 135 of memory bank 231. According to example embodiments, a positive match in any of memory bank 211 and memory bank 221 is carried to subsequent memory banks through indication 213 and indication 223, respectively, together with either a network address, such as a forwarding address, or a data structure corresponding to a network address by which the packet 101, the data unit 111, or the data unit 228 is to be forwarded. [77] Fig. 3 also illustrates an architecture for populating the memory banks of the pipeline 200. [78] According to an example embodiment, the pipeline 200 is configured to receive a data unit of a received packet.
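Before the population mechanism is elaborated, the two-key exact match probe described above admits a compact sketch; the key width, value type, and function names below are illustrative assumptions.

```c
/* Sketch of the two-key exact match probe: each EM table address
 * stores two full keys, and the same extracted portion of the data
 * unit is compared against each in turn.  Widths are assumptions. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t key[2];    /* first key 711 and second key 712 */
    uint32_t value[2];  /* portion of a forwarding address per key */
} em_entry;

/* Returns true and writes the stored value when either key matches
 * the extracted bits of the data unit. */
bool em_probe(const em_entry *e, uint32_t unit_bits, uint32_t *out)
{
    for (int i = 0; i < 2; i++) {
        if (e->key[i] == unit_bits) {
            *out = e->value[i];
            return true;
        }
    }
    return false;
}
```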
The controller 210 is configured to generate a first value, such as a hash key, to correspond to an expected data unit to be received from the network 100, and to search the memory bank 211 to determine whether the newly determined value corresponds to or collides with any previously stored value. [79] When the newly determined value collides with a previously stored value, the controller 210 redirects the stored value to another position of another memory device of the pipeline. [80] The controller 210 is further configured to output an indication of a result of the search, such as an indication to a subsequent memory device of a collision and subsequent redirection. [81] The controller 220, which provides a transfer logic, operates between the controller 210 and the memory bank 221, in an embodiment. The controller 220 generates a different respective hash value corresponding to the data unit 111 for the memory bank 221, and the controller 220 also searches the memory bank 221 to determine whether the respective different hash value corresponds to any stored hash value in the respective memory bank 221 (i.e., it performs a search for a collision). The controller 220 subsequently outputs an indication of a result of the collision search to the controller 230. [82] Fig. 4 is a flow diagram 400 of an example algorithm and method, according to example embodiments, performed when a data unit is received by the pipeline device. The example method of Fig. 4 applies to multiple example embodiments in which the pipeline device is utilized, for example the pipeline 200 of Fig. 3. Processing begins at S400, at which the pipeline 200 receives a data unit from a packet of a network session. Processing continues at S401. [83] At S402, a controller of the pipeline uses the data unit to find a value in a memory bank by combining a portion of the data unit with a key stored implicitly, as a local function, as part of an address in the to-be-searched table. The first memory bank of the pipeline to be searched is indicated by combining at least a predetermined and configurable portion of a data unit with a key or plurality of keys of the addressed table. Processing continues at S403. [84] At S403, a controller of the pipeline uses a last found value, such as a value of the found data structure or a next portion of the data unit, to determine a location, an address of a table, to search in a subsequent one of the memory banks of the pipeline. According to an example embodiment, S403 is performed by the pipeline during a longest prefix match operation. Processing continues at S404. [85] At S404, a controller of the pipeline determines whether an address has been determined based on a currently found value. When an address has not been determined, the processing continues at S403. When an address has been determined, the processing continues at S405. [86] At S405, the pipeline outputs the address. [87] Fig. 5 shows a pipeline 200 including a processor core 201, an LPM engine 1110, a memory bank 211, a memory bank 221, and a memory bank 231. [88] The LPM engine 1110 is configured to control access to the specific pipeline storing data structures corresponding to a received data unit. According to an example embodiment, the data structure, accessed by the LPM engine 1110 through the memory banks of the pipeline, corresponds to a network address and is stored by the pipeline as a combination of stride table value entries 1120 and prefix table value entries 1130.
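The LPM walk over this combination of stride table value entries and prefix table value entries can be sketched as follows; the per-bank callback interface and field names are illustrative assumptions, not the disclosed engine interface.

```c
/* Sketch of the bank-by-bank LPM walk: each step either consumes bits
 * of the data unit in a stride table or follows a linked-list node in
 * a prefix table, and the recovered bits are appended to the address.
 * All names are assumptions; a 32-bit address is assumed for brevity. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    bool     done;        /* a next hop has been determined */
    uint32_t addr_bits;   /* address bits recovered at this step */
    uint8_t  nbits;       /* number of valid bits in addr_bits */
    uint16_t next_index;  /* where to look in the next bank */
} step_result;

/* Per-bank lookup callback, one per memory bank of the pipeline. */
typedef step_result (*bank_lookup)(uint16_t index, uint32_t *data_unit);

uint32_t lpm_walk(bank_lookup *banks, int nbanks, uint32_t data_unit)
{
    uint32_t addr = 0;
    uint16_t index = 0;
    for (int i = 0; i < nbanks; i++) {
        step_result r = banks[i](index, &data_unit);
        addr = (addr << r.nbits) | r.addr_bits;  /* assemble the address */
        if (r.done)
            break;
        index = r.next_index;
    }
    return addr;
}
```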
[89] Memory bank 211 includes an area, which according to an example embodiment is partitioned, including stride table 1121 and stride table 1122, each configured to store values of a plurality of data structures. Each of the stored values respectively corresponds to both a portion of an address and an additional indication, such as an indication of a location to search for a next value of the address at a table of another memory bank. [90] Memory bank 221 includes an area, which according to an example embodiment is partitioned, including stride table 1123 and stride table 1124, each configured to store values of a plurality of data structures, with each value corresponding to a portion of an address and an indication to search for a next value of the distributed address at a stride table of another memory bank or an indication to search for a next value of the distributed address at a node of a linked list stored by a prefix table of another memory bank. Memory bank 221 also includes a separate area, which according to an example embodiment is partitioned, including prefix table 1133 and prefix table 1134, each configured to store a plurality of nodes of a plurality of linked lists, each node pointing to a node contained by a prefix table of a different memory bank. [91] Memory bank 231 includes a partitioned area including stride table 1125 and stride table 1126, each table being configured to store a plurality of last value entries, each entry indicating a last value of a distributed address. Memory bank 231 also includes an area, which according to an example embodiment is partitioned, including prefix table 1135 and prefix table 1136, each table being configured to store a plurality of last nodes of a plurality of linked lists of data structures. [92] According to an example embodiment, the LPM engine 1110 controls the pipeline 200 to search for a value stored by the pipeline of stride table value entries 1120 and, if indicated, the prefix table value entries 1130. The LPM engine 1110 also determines whether the searched value directs the LPM engine 1110 to search a next memory bank for another value of the data structure corresponding to a portion of the address. [93] According to an example embodiment, the illustrated data structure ends at memory bank 231 with a final value of the address stored as a node at prefix table 1135. The pipeline 200 assembles a complete address by combining the values found throughout the memory banks and outputs the result 1150. [94] According to an example embodiment, the processing core completes processing of a packet, using the address result 1150 returned from the pipeline 200, and subsequently forwards the packet according to the result 1150 to a next location such as another network address. According to an example embodiment, the result 1150 is a next hop pointer which is 20 bits wide. [95] Fig. 6 is a flow diagram 600 of an example algorithm and method, according to example embodiments, performed when data is received by the pipeline device. The example method of Fig. 6 applies to multiple example embodiments in which the pipeline device is utilized. Processing begins at S601 as the pipeline receives a data unit from a packet of a network session. Processing continues at S602. [96] At S602, a controller of the pipeline uses the data unit to find a value in a memory bank by consuming a predetermined number of bits of the data unit.
According to an example embodiment, in a 4-bit mode, four bits of the data unit are used to select an entry of a table, and in a 5-bit mode, the most significant bit of the portion of the data unit is used to select the table while the remaining four bits select the entry. [97] The portion of the data unit corresponds to a memory bank and a position within a stride table of the indicated memory bank to be searched. Processing continues at S603. [98] At S603, the controller of the pipeline determines if the value found at S602 indicates that a next stride table or a next prefix table is to be searched for a next value of an address. If a next stride table is indicated, the processing continues at S603 by searching a portion of the next stride table at a position indicated by consuming a number of bits of the data unit. If a next prefix table is indicated, the processing continues at S604. [99] At S604, the controller of the pipeline has determined that a next value of the address is to be found at a node of a linked list stored in a subsequent memory bank. The controller searches the node of the linked list at the subsequent memory bank and determines both a value of the address and a pointer to another node of the linked list at a next memory bank. Processing continues at S605. [100] At S605, the controller determines if the final value of the address has been determined. According to an example embodiment, the final value of the address is determined to be found when a last node of the linked list is found. If the controller determines that the final value has not been found, processing continues at S604. If the controller determines that the final value has been found, processing continues at S606. [101] At S606, the controller outputs the result of the value, which is a next address for forwarding a packet corresponding to the original data unit used at S601. [102] Although the inventive concept has been described above with respect to the various example embodiments, it is noted that there can be a variety of permutations and modifications of the described features by those who are familiar with this field, without departing from the technical ideas and scope of the features, which shall be defined by the appended claims. [103] Further, while this specification contains many features, the features should not all be construed as limitations on the scope of the disclosure or the appended claims. Certain features described in the context of separate embodiments can also be implemented in combination. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. [104] Although the drawings describe operations in a specific order and/or show specific arrangements of components, one should not interpret that such specific order and/or arrangements are required, or that all the operations performed and the components disclosed are needed to obtain a desired result. Accordingly, other implementations are within the scope of the following claims.
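As an illustration of the 4-bit and 5-bit entry selection modes described at S602 above, the following C sketch shows how the consumed bits select a table and an entry; the table structure is an illustrative assumption.

```c
/* Sketch of S602 entry selection: in 4-bit mode four bits select an
 * entry; in 5-bit mode the most significant bit selects one of two
 * tables and the remaining four bits select the entry.  Names and the
 * two-table arrangement are assumptions. */
#include <stdint.h>

typedef struct {
    uint32_t entry[16];
} lookup_table;

uint32_t select_entry(const lookup_table tables[2], uint8_t bits, int five_bit_mode)
{
    if (five_bit_mode) {
        int table = (bits >> 4) & 0x1;       /* MSB picks the table */
        return tables[table].entry[bits & 0xF];
    }
    return tables[0].entry[bits & 0xF];      /* 4-bit mode: entry only */
}
```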
A knob or joystick apparatus detects gesture based actions of a user's fingers and/or hand. A user grasps the knob or joystick and moves the knob or joystick in either rotational direction, e.g., clockwise or counterclockwise, moves the knob or joystick horizontally/vertically or any combination thereof, and/or presses or pulls the knob or joystick in or out. Capacitive sensors are used in combination with a digital device, e.g., a microcontroller, for detecting, decoding and interpreting therefrom various gesturing movements. A user may grasp a knob and move his/her fingers in rotational, horizontal/vertical, and/or in/out movement(s) along an axis of the knob. During the motion(s) of the user's fingers, portions of an outer covering of the knob are deflected inwards toward capacitive sensors, wherein the movement(s) of the deflected portion(s) of the outer covering are detected and decoded, and interpretations are made therefrom of various gesturing movements.
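As a hedged illustration of the gesture decoding this abstract describes, the following C sketch infers rotation direction from the sequence of capacitive sensor elements activated by the deflected outer covering; the sensor count, interface, and tie-breaking rule are assumptions, not taken from the claims below.

```c
/* Sketch of rotation decoding from sequential capacitance changes on a
 * ring of sensor elements; NSENSORS and the API are assumptions. */
#define NSENSORS 8

/* Returns +1 for clockwise, -1 for counterclockwise, 0 for no motion,
 * given the previously and currently activated sensor indices
 * (negative index means no sensor active). */
int rotation_direction(int prev_active, int cur_active)
{
    if (prev_active < 0 || cur_active < 0 || prev_active == cur_active)
        return 0;
    int delta = (cur_active - prev_active + NSENSORS) % NSENSORS;
    return (delta <= NSENSORS / 2) ? +1 : -1;  /* shorter arc wins */
}
```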
CLAIM What is claimed is: 1. A knob based gesture interface, comprising: a curved substrate; a first plurality of capacitive sensor elements disposed on a face of the curved substrate; a deformable electrically insulated space surrounding the first plurality of capacitive sensor elements and the curved substrate; and an electrically conductive and mechanically deformable curved plane surrounding the deformable electrically insulated space, the first plurality of capacitive sensor elements and the curved substrate; wherein when at least one mechanical force is applied to at least one location on the electrically conductive and mechanically deformable curved plane, the at least one mechanical force causes that at least one location on the plane to be biased toward at least one of the first plurality of capacitive sensor elements, whereby the at least one of the first plurality of capacitive sensor elements changes capacitance value. 2. The knob based gesture interface according to claim 1, wherein the electrically conductive and mechanically deformable curved plane is knob shaped. 3. The knob based gesture interface according to claim 1, further comprising: a second plurality of capacitive sensor elements disposed on the face of the curved substrate and on the same plane thereof as the first plurality of capacitive sensor elements; the deformable electrically insulated space surrounds the first and second plurality of capacitive sensor elements and the curved substrate; and the electrically conductive and mechanically deformable curved plane surrounds the deformable electrically insulated space, the first and second plurality of capacitive sensor elements and the curved substrate; wherein when at least one mechanical force is applied to at least one location on the electrically conductive and mechanically deformable curved plane, the at least one mechanical force causes that at least one location on the plane to be biased toward at least one of the first and/or second plurality of capacitive sensor elements, whereby the at least one of the first and/or second plurality of capacitive sensor elements changes capacitance value. 4. The knob based gesture interface according to claim 1, wherein when a plurality of mechanical forces are applied to a plurality of locations on the electrically conductive and mechanically deformable curved plane, the plurality of mechanical forces cause those locations of the plane to be biased toward respective ones of the first plurality of capacitive sensor elements, whereby the respective ones of the first plurality of capacitive sensor elements change capacitance values. 5. The knob based gesture interface according to claim 3, wherein when a plurality of mechanical forces are applied to the electrically conductive and mechanically deformable curved plane that cause it to be biased toward respective ones of the first and/or second plurality of capacitive sensor elements, the respective ones of the first and/or second plurality of capacitive sensor elements change capacitance values. 6. The knob based gesture interface according to claim 4, wherein sequential changes in the capacitance values of the respective ones of the first plurality of capacitive sensor elements determine a direction of the plurality of mechanical forces applied to the electrically conductive and mechanically deformable curved plane. 7. The knob based gesture interface according to claim 6, wherein the direction is a rotational direction. 8.
The knob based gesture interface according to claim 5, wherein sequential changes in the capacitance values of the respective ones of the first and/or second plurality of capacitive sensor elements determine at least one direction of the plurality of mechanical forces applied to the electrically conductive and mechanically deformable curved plane. 9. The knob based gesture interface according to claim 8, wherein the at least one direction is a rotational direction. 10. The knob based gesture interface according to claim 8, wherein the at least one direction is a linear direction. 11. The knob based gesture interface according to claim 1, further comprising: a third plurality of capacitive sensor elements disposed on another face of the curved substrate substantially perpendicular to the face thereof; the deformable electrically insulated space further surrounding the third plurality of capacitive sensor elements and the another face of the curved substrate; and the electrically conductive and mechanically deformable curved plane surrounding the deformable electrically insulated space, the third plurality of capacitive sensor elements and the another face of the curved substrate; wherein when a mechanical force is applied to at least one location on the electrically conductive and mechanically deformable curved plane, the at least one mechanical force causes that at least one location on the plane to be biased toward at least one of the third plurality of capacitive sensor elements, whereby the at least one of the third plurality of capacitive sensor elements changes capacitance value. 12. The knob based gesture interface according to claim 11, wherein sequential changes in the capacitance values of the respective ones of the third plurality of capacitive sensor elements determine a direction of the mechanical force applied to the electrically conductive and mechanically deformable curved plane. 13. A knob based interface, comprising: a curved substrate; a first plurality of capacitive sensor elements disposed on a face of the curved substrate; an electrically insulated space surrounding the first plurality of capacitive sensor elements and the curved substrate; an electrically conductive curved plane surrounding the electrically insulated space, the first plurality of capacitive sensor elements and the curved substrate; and a target electrically and mechanically coupled to an inside face of the electrically conductive curved plane, wherein the target and electrically conductive curved plane are adapted to rotate around the first plurality of capacitive sensor elements; wherein when the electrically conductive curved plane and target are rotated around the first plurality of capacitive sensor elements, the target is proximate to at least one of the first plurality of capacitive sensor elements, whereby the at least one of the first plurality of capacitive sensor elements changes capacitance value. 14.
14. The knob based interface according to claim 13, further comprising: a second plurality of capacitive sensor elements disposed on the face of the curved substrate and on the same plane thereof as the first plurality of capacitive sensor elements; the electrically insulated space surrounds the first and second plurality of capacitive sensor elements and the curved substrate; and the electrically conductive curved plane surrounds the electrically insulated space, the first and second plurality of capacitive sensor elements and the curved substrate; wherein when the electrically conductive curved plane and target move substantially perpendicular to a plane of rotation thereof, the target is proximate to at least one of the first and/or second plurality of capacitive sensor elements, whereby the at least one of the first and/or second plurality of capacitive sensor elements changes at least one capacitance value.
15. The knob based interface according to claim 13, wherein the at least one first plurality of capacitive sensor elements having the capacitance value change indicates a rotation position of the electrically conductive curved plane.
16. The knob based interface according to claim 14, wherein the at least one first and/or second plurality of capacitive sensor elements having the at least one capacitance value change indicates a rotation position of the electrically conductive curved plane and a position thereof perpendicular to the rotation position.
17. The knob based interface according to claim 13, further comprising an information display disposed on another face of the curved substrate substantially perpendicular to the face thereof.
18. The knob based interface according to claim 17, wherein the information display is an alpha-numeric display.
19. The knob based interface according to claim 17, wherein the information display is a plurality of light emitting diodes (LEDs) located around a circumference of the another face of the curved substrate, wherein at least one of the plurality of light emitting diodes indicates a rotation position of the curved substrate.
20. A gesturing apparatus, comprising: a base; a pivot means rotationally attached to the base; a shaft having a first end coupled to the pivot means; a first target attached at a location of the shaft toward the first end thereof; a second target attached to and disposed around the shaft toward a second end thereof; a first plurality of capacitive sensor elements disposed around the shaft toward the first end thereof; and a second plurality of capacitive sensor elements disposed around the shaft toward the second end thereof; wherein when the shaft is rotated substantially perpendicular to the base, the first target is proximate to at least one of the first plurality of capacitive sensor elements, whereby the at least one of the first plurality of capacitive sensor elements changes capacitance value that is used in determining rotation position of the shaft; and wherein when the shaft is tilted away from being substantially perpendicular to the base, the second target is closer to at least one of the second plurality of capacitive sensor elements, whereby the at least one of the second plurality of capacitive sensor elements changes capacitance value that is used in determining tilt position of the shaft.
21. The gesturing apparatus according to claim 20, further comprising: a third plurality of capacitive sensor elements disposed around the shaft and located between the first and second plurality of capacitive sensor elements; a positioning means allowing motion of the shaft toward or away from the base, wherein the first target is proximate to at least one of the first and/or third plurality of capacitive sensor elements, whereby the at least one of the first and/or third plurality of capacitive sensor elements changes capacitance value that is used in determining rotation position of the shaft and position toward or away from the base.
22. The gesturing apparatus according to claim 20, further comprising a knob attached to the second end of the shaft.
23. The gesturing apparatus according to claim 20, further comprising a control stick attached to the second end of the shaft.
KNOB BASED GESTURE SYSTEM
TECHNICAL FIELD
The present disclosure relates to a knob based gesture system, e.g., a knob that rotates about an axis and/or can be pushed in or pulled out, or simulations thereof; and more particularly, to a knob based gesture system that uses capacitive touch sensors that require physical force on the touch sensor(s) during gesturing motions and further shields the capacitive touch sensors from extraneous unwanted activation by inadvertent proximity of a user.
BACKGROUND
Capacitive touch sensors are used as a user interface to electronic equipment, e.g., calculators, telephones, cash registers, gasoline pumps, etc. A capacitive touch sensor is activated (controls a signal indicating activation) by a change in its capacitance when an object, e.g., a user finger tip, causes that capacitance to change. Referring to Figure 1, depicted is a prior technology capacitive touch sensor generally represented by the numeral 100. The prior technology capacitive touch sensor 100 comprises a substrate 102, a sensor element 112 and a protective covering 108, e.g., glass. When a user finger tip 110 comes in close proximity to the sensor element 112, the capacitance value of the sensor element 112 changes. This capacitance change is electronically processed (not shown) so as to generate a signal indicating activation of the capacitive touch sensor 100 by the user (only finger tip 110 thereof shown). The protective covering 108 may be used to protect the sensor element 112 and for marking of the sensor 100.
Problems exist with proper operation of the sensors 100: water, oil, mud, and/or food products, e.g., ketchup and mustard, may either falsely trigger an activation or inhibit a desired activation thereof. Problems also exist when metallic objects (not shown) come into near proximity of the sensor element 112 and cause an undesired activation thereof. When a plurality of sensors 100 are arranged in close proximity to each other, e.g., around the circumference and/or on top of a knob arrangement, activation of the intended ones of the sensors 100 may cause unintended neighboring sensor(s) 100 to undesirably actuate because of the close proximity of the user finger tip 110, or other portion of the user's hand (not shown).
This activation of unintended neighboring sensor(s) 100 may occur when, while touching the intended ones of the sensors 100, a portion of the user's hand is also sufficiently close to the unintended sensor(s) 100 to activate them.
SUMMARY
The aforementioned problems are solved, and other and further benefits achieved, by the capacitive touch sensors disclosed herein.
According to an embodiment, a knob based gesture interface may comprise: a curved substrate; a first plurality of capacitive sensor elements disposed on a face of the curved substrate; a deformable electrically insulated space surrounding the first plurality of capacitive sensor elements and the curved substrate; and an electrically conductive and mechanically deformable curved plane surrounding the deformable electrically insulated space, the first plurality of capacitive sensor elements and the curved substrate; wherein when at least one mechanical force is applied to at least one location on the electrically conductive and mechanically deformable curved plane, the at least one mechanical force causes that at least one location on the plane to be biased toward at least one of the first plurality of capacitive sensor elements, whereby the at least one of the first plurality of capacitive sensor elements changes capacitance value.
According to a further embodiment, the electrically conductive and mechanically deformable curved plane may be knob shaped. According to a further embodiment, a second plurality of capacitive sensor elements may be disposed on the face of the curved substrate and on the same plane thereof as the first plurality of capacitive sensor elements; the deformable electrically insulated space may surround the first and second plurality of capacitive sensor elements and the curved substrate; and the electrically conductive and mechanically deformable curved plane may surround the deformable electrically insulated space, the first and second plurality of capacitive sensor elements and the curved substrate; wherein when at least one mechanical force is applied to at least one location on the electrically conductive and mechanically deformable curved plane, the at least one mechanical force causes that at least one location on the plane to be biased toward at least one of the first and/or second plurality of capacitive sensor elements, whereby the at least one of the first and/or second plurality of capacitive sensor elements may change capacitance value. According to a further embodiment, when a plurality of mechanical forces are applied to a plurality of locations on the electrically conductive and mechanically deformable curved plane, the plurality of mechanical forces cause those locations of the plane to be biased toward respective ones of the first plurality of capacitive sensor elements, whereby the respective ones of the first plurality of capacitive sensor elements change capacitance values. According to a further embodiment, when a plurality of mechanical forces are applied to the electrically conductive and mechanically deformable curved plane that cause it to be biased toward respective ones of the first and/or second plurality of capacitive sensor elements, the respective ones of the first and/or second plurality of capacitive sensor elements change capacitance values.
According to a further embodiment, sequential changes in the capacitance values of the respective ones of the first plurality of capacitive sensor elements may determine a direction of the plurality of mechanical forces applied to the electrically conductive and mechanically deformable curved plane. According to a further embodiment, the direction may be a rotational direction.
According to a further embodiment, sequential changes in the capacitance values of the respective ones of the first and/or second plurality of capacitive sensor elements may determine at least one direction of the plurality of mechanical forces applied to the electrically conductive and mechanically deformable curved plane. According to a further embodiment, the at least one direction may be a rotational direction. According to a further embodiment, the at least one direction may be a linear direction.
According to a further embodiment, a third plurality of capacitive sensor elements may be disposed on another face of the curved substrate substantially perpendicular to the face thereof; the deformable electrically insulated space may further surround the third plurality of capacitive sensor elements and the another face of the curved substrate; and the electrically conductive and mechanically deformable curved plane may surround the deformable electrically insulated space, the third plurality of capacitive sensor elements and the another face of the curved substrate; wherein when a mechanical force is applied to at least one location on the electrically conductive and mechanically deformable curved plane, the at least one mechanical force may cause that at least one location on the plane to be biased toward at least one of the third plurality of capacitive sensor elements, whereby the at least one of the third plurality of capacitive sensor elements may change capacitance value. According to a further embodiment, sequential changes in the capacitance values of the respective ones of the third plurality of capacitive sensor elements may determine a direction of the mechanical force applied to the electrically conductive and mechanically deformable curved plane.
According to another embodiment, a knob based interface may comprise: a curved substrate; a first plurality of capacitive sensor elements disposed on a face of the curved substrate; an electrically insulated space surrounding the first plurality of capacitive sensor elements and the curved substrate; an electrically conductive curved plane surrounding the electrically insulated space, the first plurality of capacitive sensor elements and the curved substrate; and a target electrically and mechanically coupled to an inside face of the electrically conductive curved plane, wherein the target and electrically conductive curved plane may be adapted to rotate around the first plurality of capacitive sensor elements; wherein when the electrically conductive curved plane and target are rotated around the first plurality of capacitive sensor elements, the target may be proximate to at least one of the first plurality of capacitive sensor elements, whereby the at least one of the first plurality of capacitive sensor elements changes capacitance value.
According to a further embodiment, a second plurality of capacitive sensor elements may be disposed on the face of the curved substrate and on the same plane thereof as the first plurality of capacitive sensor elements; the electrically insulated space may surround the first and second plurality of capacitive sensor elements and the curved substrate; and the electrically conductive curved plane may surround the electrically insulated space, the first and second plurality of capacitive sensor elements and the curved substrate; wherein when the electrically conductive curved plane and target move substantially perpendicular to a plane of rotation thereof, the target may be proximate to at least one of the first and/or second plurality of capacitive sensor elements, whereby the at least one of the first and/or second plurality of capacitive sensor elements changes at least one capacitance value. According to a further embodiment, the at least one first plurality of capacitive sensor elements having the capacitance value change may indicate a rotation position of the electrically conductive curved plane. According to a further embodiment, the at least one first and/or second plurality of capacitive sensor elements having the at least one capacitance value change may indicate a rotation position of the electrically conductive curved plane and a position thereof perpendicular to the rotation position. According to a further embodiment, an information display may be disposed on another face of the curved substrate substantially perpendicular to the face thereof. According to a further embodiment, the information display may be an alpha-numeric display. According to a further embodiment, the information display may be a plurality of light emitting diodes (LEDs) located around a circumference of the another face of the curved substrate, wherein at least one of the plurality of light emitting diodes may indicate a rotation position of the curved substrate.
According to yet another embodiment, a gesturing apparatus may comprise: a base; a pivot means rotationally attached to the base; a shaft having a first end coupled to the pivot means; a first target attached at a location of the shaft toward the first end thereof; a second target attached to and disposed around the shaft toward a second end thereof; a first plurality of capacitive sensor elements disposed around the shaft toward the first end thereof; and a second plurality of capacitive sensor elements disposed around the shaft toward the second end thereof; wherein when the shaft is rotated substantially perpendicular to the base, the first target may be proximate to at least one of the first plurality of capacitive sensor elements, whereby the at least one of the first plurality of capacitive sensor elements changes capacitance value that may be used in determining rotation position of the shaft; and wherein when the shaft is tilted away from being substantially perpendicular to the base, the second target may be closer to at least one of the second plurality of capacitive sensor elements, whereby the at least one of the second plurality of capacitive sensor elements changes capacitance value that may be used in determining tilt position of the shaft.
According to a further embodiment, a third plurality of capacitive sensor elements may be disposed around the shaft and located between the first and second plurality of capacitive sensor elements; a positioning means may allow motion of the shaft toward or away from the base, wherein
the first target may be proximate to at least one of the first and/or third plurality of capacitive sensor elements, whereby the at least one of the first and/or third plurality of capacitive sensor elements may change capacitance value that may be used in determining rotation position of the shaft and position toward or away from the base. According to a further embodiment, a knob may be attached to the second end of the shaft. According to a further embodiment, a control stick may be attached to the second end of the shaft.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present disclosure may be acquired by referring to the following description taken in conjunction with the accompanying drawings, wherein:
Figure 1 is a schematic cross section elevational view of a prior technology capacitive touch sensor;
Figure 2 is a schematic cross section elevational view of a plurality of capacitive touch sensors, according to specific example embodiments of this disclosure;
Figure 3 is a schematic block diagram of a user interface having a plurality of capacitive touch sensors, according to the teachings of this disclosure;
Figure 4 illustrates schematic isometric cross section elevational views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to a specific example embodiment of this disclosure;
Figure 5 illustrates schematic isometric cross section elevational views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to another specific example embodiment of this disclosure;
Figure 6 illustrates schematic isometric cross section elevational and top views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to yet another specific example embodiment of this disclosure;
Figure 7 illustrates schematic isometric cross section elevational and top views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to still another specific example embodiment of this disclosure;
Figure 8 illustrates schematic cross section elevational and top views of a plurality of capacitive touch sensors arranged on a top surface of a circular knob, according to specific example embodiments of this disclosure;
Figure 9 illustrates schematic cross section elevational, top and bottom views of a plurality of capacitive touch sensors arranged as a circular knob assembly having multiple axes of linear movement and rotation, according to still another specific example embodiment of this disclosure;
Figure 10 illustrates schematic cross section elevational, top and bottom views of a plurality of capacitive touch sensors arranged as a circular knob assembly having multiple axes of linear movement and rotation, according to another specific example embodiment of this disclosure;
Figure 11 illustrates schematic cross section elevational, top and bottom views of a plurality of capacitive touch sensors arranged as a control lever assembly having multiple axes of linear movement and rotation, according to still another specific example embodiment of this disclosure; and
Figure 12 illustrates schematic isometric elevational views of visual displays embedded in top portions of circular knobs, according to specific example embodiments of this disclosure.
While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail.
It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein, but on the contrary, this disclosure is to cover all modifications and equivalents as defined by the appended claims.
DETAILED DESCRIPTION
According to the teachings of this disclosure, a knob or joystick apparatus may be used to detect gesture based actions of a user's fingers and/or hand. According to some embodiments described herein, a user may grasp the knob or joystick and move it in either rotational direction, e.g., clockwise or counter clockwise; move it horizontally/vertically or any combination thereof; or depress/pull it in or out. Capacitive sensors may be used in combination with a digital device, e.g., a microcontroller, for detecting, decoding and interpreting therefrom various gesturing movements. According to some other embodiments described herein, a user grasps a knob and moves his/her fingers in rotational, horizontal/vertical, and/or in/out movement(s) along an axis of the knob. During the motion(s) of the user's fingers, portions of an outer covering of the knob are deflected inwards toward capacitive sensors, wherein the movement(s) of the deflected portion(s) of the outer covering are detected, decoded, and interpretations are made therefrom of various gesturing movements.
Referring now to the drawings, the details of example embodiments are schematically illustrated. Like elements in the drawings will be represented by like numbers, and similar elements will be represented by like numbers with a different lower case letter suffix.
Referring to Figure 2, depicted is a schematic cross section elevational view of a plurality of capacitive touch sensors, according to specific example embodiments of this disclosure. The capacitive touch sensors, generally represented by the numeral 200, may comprise a substrate 202, capacitive sensor elements 212, a deformable space 216, and an electrically conductive deformable plane 206. The conductive deformable plane 206 may be connected to a power supply common and/or grounded (not shown) to form a capacitor with each of the capacitive sensor elements 212, and for improved shielding of the capacitive sensor elements 212 from electrostatic disturbances and false triggering thereof. The deformable space 216 may be filled with, for example but not limited to, air, nitrogen, elastic silicone rubber, etc. An optional protective deformable cover 208 may be provided over the conductive deformable plane 206 and proximate to the deformable space 216. Each of the capacitive sensor elements 212 may be connected through connections 232 to a capacitance measurement circuit, e.g., a capacitive sensor analog front end (AFE) 304 (Figure 3), and the conductive deformable plane 206 is normally connected through connection 230 to a power supply common and/or ground (not shown). However, the conductive deformable plane 206 may instead be connected through connection 230 to a digital output of a digital processor 306 (Figure 3) and used either as one plate of the capacitor formed with each capacitive sensor element 212, when the digital output grounds it, or as a capacitive sensor element itself, connected to a capacitive measurement input of the AFE 304, when the digital output from the digital processor 306 is in a high impedance off state.
For example, the connection 230 is coupled to an input of the AFE 304 and the digital output is connected in parallel to the same input of the AFE 304. When the output is at a logic low, the conductive deformable plane 206 is at the power supply common, and when the output is at a high impedance (off), the conductive deformable plane 206 may function as a capacitive sensor element similar to what is shown in Figure 1. That is, the digital output acts as a shunt switch that, when closed, shorts the conductive deformable plane 206 to ground and, when open, enables the conductive deformable plane 206 to function as a capacitive sensor element 112 (e.g., see Figure 1). This configuration for the conductive deformable plane 206 may be used as a proximity detector, e.g., as a user finger and/or hand approaches the capacitive sensor (conductive deformable plane 206), a "system wakeup" signal may be generated in the digital processor 306 (Figure 3).
The conductive deformable plane 206 is physically deformable over the deformable space 216 so that when a force, e.g., a user's finger 110, presses down onto the conductive deformable plane 206, the distance between at least one of the capacitive sensor elements 212 and the conductive deformable plane 206 is reduced, thereby changing the capacitance of that at least one capacitive sensor element 212. A capacitance change detection circuit (not shown) monitors the capacitance values of the capacitive sensor elements 212, and when any one or more of the capacitance values change (e.g., increases), a sensor activation signal may be generated (not shown). The conductive deformable plane 206 may be metal or other electrically conductive material, or the conductive deformable plane 206 may be plated, coated, attached, etc., to an inside face of the optional protective deformable cover 208.
The capacitive touch sensors 200 are substantially immune to false triggering caused by a user merely in close proximity to the sensor, because the correct area of the conductive deformable plane 206 must be deformed in order for the capacitance value of a capacitive sensor element 212 to change, i.e., an actuation force from the user's finger 110 is required. In addition, stray metallic objects will not substantially affect the capacitance values of the capacitive sensor elements 212 for the same reason. Furthermore, the assembly of the capacitive touch sensors 200 can be sealed within the physically deformable electrically insulated space 216 and thus may be substantially immune to fluid contamination thereof. As the user's finger 110 moves in a direction along the surface of the conductive deformable plane 206, the capacitance values of the capacitive sensor elements 212 proximate to the deformation of the conductive deformable plane 206 will change, and locations and direction of the user's finger 110 may thereby be determined. The capacitive sensor elements 212 are electrically conductive and may be comprised of metal such as, for example but not limited to, copper, aluminum, silver, gold, tin, and/or any combination thereof, plated or otherwise. The capacitive sensor elements 212 may also be comprised of non-metallic conductive material. The substrate 202 and capacitive sensor elements 212 may be, for example but not limited to, a curved printed circuit board having conductive (e.g., metal) areas etched thereon, a curved ceramic substrate with conductive areas thereon, clear or translucent glass or plastic with conductive areas thereon, etc.
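By way of illustration only, the shield/proximity dual mode just described reduces to a small polling loop: drive the plane low to shield the elements during normal scanning, and release it to high impedance and measure it directly while waiting for a hand to approach. The following Python sketch shows only that control flow; the pin-control and measurement callables are hypothetical stand-ins for whatever microcontroller primitives a real design would use, and the baseline and threshold values are invented.

# A minimal sketch of the shield/proximity dual mode of plane 206, assuming
# hypothetical callables for the MCU pin and AFE measurement primitives.

ASLEEP, AWAKE = "asleep", "awake"

def poll_once(state, drive_plane_low, release_plane, measure_plane,
              baseline, wake_threshold):
    """One polling step. While asleep, the plane itself is the sensor;
    once awake, the plane is grounded so it shields the elements 212."""
    if state == ASLEEP:
        release_plane()                        # digital output high-impedance
        if measure_plane() - baseline > wake_threshold:
            return AWAKE                       # hand approaching: system wakeup
        return ASLEEP
    drive_plane_low()                          # shunt switch closed: shield
    # ...normal per-element scanning of elements 212 would run here...
    return AWAKE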
Referring to Figure 3, depicted is a schematic block diagram of a user interface having a plurality of capacitive touch sensors, according to the teachings of this disclosure. A plurality of capacitive touch sensors 200 may be arranged as shown in Figures 4-11 and as more fully described hereinafter. When a mechanical force is applied to at least one location on the conductive deformable plane 206, it will come closer to at least one of the capacitive sensor elements 212 proximate to the deformation in the conductive deformable plane 206 and will thereby change the capacitance value(s) of the at least one associated capacitive sensor 200, e.g., increase the capacitance value(s) thereof. This change in capacitance value(s) of the at least one associated capacitive sensor 200 may be detected by the AFE 304, and the digital processor 306 may read the output of the AFE 304 to determine which one(s) of the capacitive sensor(s) 200 has (have) increased in capacitance value(s). The AFE 304 and digital processor 306 may be part of a digital device 302, e.g., a microcontroller, application specific integrated circuit (ASIC), programmable logic array (PLA), etc. The digital processor 306 and AFE 304 may be part of a mixed signal (analog and digital circuits) integrated circuit device, e.g., a mixed signal capable microcontroller.
The capacitive touch AFE 304 measures the capacitance value of each capacitive sensor 200 and may convert the capacitance values into respective analog direct current (dc) voltages that are read and converted into digital values with an analog-to-digital converter (ADC) (not shown) and sent to the digital processor 306. Various methods of measuring capacitance change may be used, for example but not limited to: a charge time measurement unit (CTMU), see Microchip Application Note AN1250; a capacitive sensing module (CSM), see Microchip TB3064 "mTouch™ Projected Capacitive Touch Screen Sensing Theory of Operation"; and a capacitive voltage divider (CVD) measurement, see Microchip Application Note AN1298; all of which are hereby incorporated by reference herein and available at www.microchip.com.
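Whichever measurement method is used, the digital processor 306 ultimately sees one numeric reading per sensor element and must decide which elements have risen above their untouched baselines. A minimal sketch of that decision, with an invented threshold and invented example values:

def active_elements(readings, baselines, threshold):
    """Return indices of sensor elements whose reading exceeds its
    untouched baseline by more than `threshold` (i.e., being pressed)."""
    return [i for i, (r, b) in enumerate(zip(readings, baselines))
            if r - b > threshold]

# Example: element 2 of four is being pressed.
print(active_elements([100, 101, 140, 99], [100, 100, 100, 100], 20))  # [2]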
Referring to Figure 4, depicted are schematic isometric cross section elevational views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to a specific example embodiment of this disclosure. A knob assembly, generally represented by the numeral 400, may comprise a curved substrate 402, a plurality of capacitive sensor elements 412, a physically deformable electrically insulated space 416, and an electrically conductive and mechanically deformable curved plane 406. The plurality of capacitive sensor elements 412 are disposed on the curved substrate 402. It is contemplated and within the scope of this disclosure that the plurality of capacitive sensor elements 412 may be disposed on either side of the curved substrate 402. The physically deformable electrically insulated space 416 surrounds the plurality of capacitive sensor elements 412 and the curved substrate 402. The electrically conductive and mechanically deformable curved plane 406 surrounds the physically deformable electrically insulated space 416 and the plurality of capacitive sensor elements 412.
When at least one mechanical force 420 is applied to at least one portion of the electrically conductive and mechanically deformable curved plane 406, that at least one portion thereof will move closer to at least one capacitive sensor element 412 proximate thereto, thereby changing (e.g., increasing) the capacitance value of that at least one capacitive sensor element 412. The change in capacitance value(s) of the at least one capacitive sensor element 412 may be detected by the digital device 302 (Figure 3), which may thereby determine a location(s) and direction of the force(s) 420. The force(s) 420 may be fingers 110 of a user's hand grasping the electrically conductive and mechanically deformable curved plane 406 and rotating around the circumference thereof to activate a control operation therefrom. The electrically conductive and mechanically deformable curved plane 406 and/or the physically deformable electrically insulated space 416 may remain stationary with or may rotate around the curved substrate 402 during a rotational movement of the user's fingers 110 grasping the electrically conductive and mechanically deformable curved plane 406.
Referring to Figure 5, depicted are schematic isometric cross section elevational views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to another specific example embodiment of this disclosure. A knob assembly, generally represented by the numeral 500, may comprise a curved substrate 502, a first plurality of capacitive sensor elements 512, a second plurality of capacitive sensor elements 514, a physically deformable electrically insulated space 516, and an electrically conductive and mechanically deformable curved plane 506. The first and second plurality of capacitive sensor elements 512 and 514 are disposed on the curved substrate 502. It is contemplated and within the scope of this disclosure that the first and second plurality of capacitive sensor elements 512 and 514 may be disposed on either side of the curved substrate 502. The physically deformable electrically insulated space 516 surrounds the first and second plurality of capacitive sensor elements 512 and 514, and the curved substrate 502. The electrically conductive and mechanically deformable curved plane 506 surrounds the physically deformable electrically insulated space 516, and the first and second plurality of capacitive sensor elements 512 and 514. It is also contemplated and within the scope of this disclosure that more than two rows of capacitive sensor elements may be disposed on the curved substrate 502, e.g., third, fourth, fifth, etc., plurality of capacitive sensor elements.
When at least one mechanical force 520 is applied to at least one portion of the electrically conductive and mechanically deformable curved plane 506, that at least one portion thereof will move closer to at least one capacitive sensor element(s) 512 and/or 514 proximate thereto, thereby changing (e.g., increasing) the capacitance value(s) of that at least one capacitive sensor element 512 and/or 514. The change in capacitance value(s) of the at least one capacitive sensor element 512 and/or 514 may be detected by the digital device 302 (Figure 3), which may thereby determine a location(s) and direction(s) of the force(s) 520.
The force(s) 520 may be fingers 110 of a user's hand grasping the electrically conductive and mechanically deformable curved plane 506 and rotating around and/or up or down the circumference thereof to activate a control operation(s) therefrom. The electrically conductive and mechanically deformable curved plane 506 and/or the physically deformable electrically insulated space 516 may remain stationary with or may rotate around and/or up or down the curved substrate 502 during a rotational and/or up or down movement of the user's fingers 110 grasping the electrically conductive and mechanically deformable curved plane 506. Force 520a will change the capacitance value(s) of the capacitive sensor element(s) 514, force 520b will change the capacitance values of capacitive sensor elements 512 and 514, and force 520c will change the capacitance value(s) of the capacitive sensor element(s) 512.
Referring to Figure 6, depicted are schematic isometric cross section elevational and top views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to yet another specific example embodiment of this disclosure. A knob assembly, generally represented by the numeral 600, may comprise a curved substrate 602, a plurality of capacitive sensor elements 612, an electrically insulated space 616, an electrically conductive target 628, and an electrically conductive curved plane 626. The plurality of capacitive sensor elements 612 are disposed on the curved substrate 602. It is contemplated and within the scope of this disclosure that the plurality of capacitive sensor elements 612 may be disposed on either side of the curved substrate 602. The electrically insulated space 616 surrounds the plurality of capacitive sensor elements 612 and the curved substrate 602. The target 628 may be mechanically and electrically coupled to the conductive curved plane 626. The conductive curved plane 626 surrounds the electrically insulated space 616 and the plurality of capacitive sensor elements 612. The conductive curved plane 626 and the target 628, mechanically coupled thereto, rotate around the curved substrate 602 and the plurality of capacitive sensor elements 612. Optionally, a location post 630 may be provided for maintaining the mechanical positions between the conductive curved plane 626 and the curved substrate 602 while the conductive curved plane 626 rotates around the curved substrate 602. The electrically insulated space 616 may be air and/or a deformable electrically insulating material. As the conductive curved plane 626 rotates, the target 628 moves around the curved substrate 602, and when proximate to at least one capacitive sensor element 612, the capacitance value of that at least one capacitive sensor element 612 will change, e.g., increase in value. The change in capacitance value(s) of the at least one capacitive sensor element 612 may be detected by the digital device 302 (Figure 3), which may thereby determine the sequential location(s) thereof and therefrom determine the direction of rotation of the target 628, as illustrated in the sketch below. The conductive curved plane 626 may rotate during a rotational movement of the user's fingers 110 grasping the conductive curved plane 626, or by any other means of rotation thereof.
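A hedged sketch of that direction decoding follows: with N elements equally spaced around the substrate, two successive dominant-element indices give a signed step around the ring. Whether increasing index corresponds to clockwise rotation is an assumption of the example, not of the disclosure.

def rotation_step(prev_idx, curr_idx, n_elements):
    """Return +1 (assumed clockwise), -1 (counter-clockwise), or 0 given
    two successive dominant-element indices on a ring of n_elements."""
    if prev_idx is None or curr_idx == prev_idx:
        return 0
    # Shortest signed distance around the ring decides the direction.
    d = (curr_idx - prev_idx) % n_elements
    return 1 if d <= n_elements // 2 else -1

# Target 628 passing elements 0 -> 1 -> 2: two clockwise steps.
print(rotation_step(0, 1, 8), rotation_step(1, 2, 8))  # 1 1
print(rotation_step(1, 0, 8))                          # -1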
Referring to Figure 7, depicted are schematic isometric cross section elevational and top views of a plurality of capacitive touch sensors arranged as a circular knob assembly, according to still another specific example embodiment of this disclosure. A knob assembly, generally represented by the numeral 700, may comprise a curved substrate 702, a first plurality of capacitive sensor elements 712, a second plurality of capacitive sensor elements 714, an electrically insulated space 716, an electrically conductive target 728, and an electrically conductive curved plane 726. The first and second plurality of capacitive sensor elements 712 and 714 are disposed on the curved substrate 702. It is contemplated and within the scope of this disclosure that the first and second plurality of capacitive sensor elements 712 and 714 may be disposed on either side of the curved substrate 702. The electrically insulated space 716 surrounds the first and second plurality of capacitive sensor elements 712 and 714, and the curved substrate 702. The target 728 may be mechanically and electrically coupled to the conductive curved plane 726. The conductive curved plane 726 surrounds the electrically insulated space 716, and the first and second plurality of capacitive sensor elements 712 and 714. It is also contemplated and within the scope of this disclosure that more than two rows of capacitive sensor elements may be disposed on the curved substrate 702, e.g., third, fourth, fifth, etc., plurality of capacitive sensor elements.
The conductive curved plane 726 and the target 728, mechanically coupled thereto, rotate around, and/or move up or down along, the curved substrate 702 and the first and second plurality of capacitive sensor elements 712 and 714. Optionally, a location post 730 may be provided for maintaining the mechanical positions between the conductive curved plane 726 and the curved substrate 702 while the conductive curved plane 726 rotates around the curved substrate 702. The electrically insulated space 716 may be air and/or a deformable electrically insulating material. As the conductive curved plane 726 rotates, the target 728 moves around the curved substrate 702, and when proximate to at least one of the first and/or second capacitive sensor element(s) 712 and/or 714, the capacitance value(s) of that at least one first and/or second capacitive sensor element(s) 712 and/or 714 will change, e.g., increase in value. The change in capacitance value(s) of the at least one first and/or second capacitive sensor element(s) 712 and/or 714 may be detected by the digital device 302 (Figure 3), which may thereby determine the sequential location(s) thereof and therefrom determine the direction(s) of rotation, and/or in or out motion, of the target 728.
The conductive curved plane 726 may rotate, and/or move in or out, during rotational motion(s), and/or in or out motion(s), of the user's fingers 110 grasping the conductive curved plane 726, or by any other means of rotating, and/or in or out motion(s) of, the conductive curved plane 726.
Referring to Figure 8, depicted are schematic cross section elevational and top views of a plurality of capacitive touch sensors arranged on a top surface of a circular knob, according to specific example embodiments of this disclosure. A circular knob, e.g., any of the embodiments disclosed herein, may have a plurality of capacitive sensor elements 812 disposed on a circular substrate 802, a physically deformable electrically insulated space 816, and an electrically conductive deformable plane 806 on a top surface of the circular knob. The physically deformable electrically insulated space 816 is over the plurality of capacitive sensor elements 812 and the circular substrate 802. The conductive deformable plane 806 may surround the physically deformable electrically insulated space 816.
When at least one mechanical force 820 is applied to at least one portion of the conductive deformable plane 806, that at least one portion thereof will move closer to at least one capacitive sensor element 812 proximate thereto, thereby changing (e.g., increasing) the capacitance value of that at least one capacitive sensor element 812. The change in capacitance value(s) of the at least one capacitive sensor element(s) 812 may be detected by the digital device 302 (Figure 3), which may thereby determine a location(s) and direction(s) of the at least one force 820. The force(s) 820 may be a finger(s) 110 of a user's hand pushing down on the top surface of the conductive deformable plane 806 to activate a control operation therefrom. Gesturing with two fingers spreading apart may represent a positive zoom, and gesturing with two fingers moving together may represent a negative zoom. One finger moving in a direction may represent movement of a mouse pointer, and a finger tap may represent an enter command; a sketch of such gesture classification follows.
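Purely as an illustration of that gesture interpretation, the sketch below classifies a zoom gesture from two frames of two-finger contact locations on the top-surface array; the (x, y) coordinates and the dead-band threshold are invented for the example.

import math

def zoom_gesture(prev_pts, curr_pts, min_change=0.5):
    """Classify a two-finger gesture from two frames of (x, y) contact
    points: spreading apart is a positive zoom, closing is negative."""
    if len(prev_pts) != 2 or len(curr_pts) != 2:
        return None                      # not a two-finger gesture
    spread = lambda pts: math.dist(pts[0], pts[1])
    change = spread(curr_pts) - spread(prev_pts)
    if abs(change) < min_change:
        return None                      # below the (invented) dead band
    return "zoom_in" if change > 0 else "zoom_out"

print(zoom_gesture([(1, 1), (2, 2)], [(0, 0), (3, 3)]))  # zoom_in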
Referring to Figure 9, depicted are schematic cross section elevational, top and bottom views of a plurality of capacitive touch sensors arranged as a circular knob assembly having multiple axes of linear movement and rotation, according to still another specific example embodiment of this disclosure. A knob assembly, generally represented by the numeral 900, may comprise a first plurality of capacitive sensor elements 912 arranged in a circle (Figure 9(c)), a second plurality of capacitive sensor elements 916 arranged in a circle (Figure 9(b)), a knob 906 attached to a first end of a shaft 932, a first target 928 attached to the shaft 932, a second target 930 above the first target 928 and also attached to the shaft 932, a pivot means 934, e.g., ball, semi-flexible coupling, etc., attached to a second end of the shaft 932, a socket 936 adapted to receive the pivot means 934, and a base 940 attached to the socket 936, wherein the combination of the pivot means 934 and the socket 936 allows the knob 906 to rotate and/or tilt linearly in substantially all directions.
The pivot means 934 and the socket 936 may also be adapted to position the shaft 932 substantially perpendicular to the plane of the base 940 when there is no force being applied to the knob 906. When the shaft 932 is substantially perpendicular to the plane of the base 940, the second target 930 is substantially equidistant between the second plurality of capacitive sensor elements 916, and the capacitive values thereof may also be substantially the same if each of the second plurality of capacitive sensor elements 916 has substantially the same area as the other ones of the second plurality of capacitive sensor elements 916. When the knob 906 is tilted in a direction caused by a force, e.g., from a user's finger(s) 110 or palm of the user's hand, the second target 930 will be biased toward at least one of the second capacitive sensor elements 916 and away from at least one other of the second capacitive sensor elements 916 located at substantially 180 degrees opposite thereof. This will cause the capacitance value of the second capacitive sensor element(s) 916 closer to the second target 930 to increase and the capacitance value of the other second capacitive sensor element(s) 916 farther from the second target 930 to decrease. The first plurality of capacitive sensor elements 912 in combination with the first target 928 may be used in determining rotation position of the knob 906, see Figure 9(c). The first capacitive sensor element 912 closest to the first target 928 will have a capacitance value change (increase) that is different from the other ones of the first capacitive sensor elements 912, whereby the knob 906 rotational position may thereby be determined. Any change in capacitance value(s) of the first or second capacitive sensor elements 912 or 916 may be detected by the digital device 302 (Figure 3), which may thereby determine the rotational location and/or tilt direction caused by the force(s) applied to the knob 906.
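The push-pull behavior of the tilt ring lends itself to a vector-sum estimate: weighting each element's capacitance delta by its angular position lets the gaining element and its losing 180-degree opposite resolve into one direction. A minimal sketch, assuming equally spaced elements and idealized delta values:

import math

def tilt_direction(deltas):
    """Estimate tilt direction (radians, toward element 0 = 0.0) from the
    capacitance deltas of a ring of equally spaced elements such as 916."""
    n = len(deltas)
    x = sum(d * math.cos(2 * math.pi * i / n) for i, d in enumerate(deltas))
    y = sum(d * math.sin(2 * math.pi * i / n) for i, d in enumerate(deltas))
    if math.hypot(x, y) < 1e-9:
        return None                     # shaft essentially upright
    return math.atan2(y, x)

# Tilt toward element 0: it gains while its 180-degree opposite loses.
print(tilt_direction([30, 0, -30, 0]))  # 0.0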
Referring to Figure 10, depicted are schematic cross section elevational, top and bottom views of a plurality of capacitive touch sensors arranged as a circular knob assembly having multiple axes of linear movement and rotation, according to another specific example embodiment of this disclosure. A knob assembly, generally represented by the numeral 1000, may comprise a first plurality of capacitive sensor elements 1012 arranged in a circle (Figure 10(c)), a second plurality of capacitive sensor elements 1014 above the first plurality of capacitive sensor elements 1012 and also arranged in a circle (Figure 10(c)), a third plurality of capacitive sensor elements 1016 arranged in a circle (Figure 10(b)), a knob 1006 attached to a first end of a shaft 1032, a first target 1028 attached to the shaft 1032, a second target 1030 above the first target 1028 and also attached to the shaft 1032, a pivot means 1034, e.g., ball, semi-flexible coupling, etc., attached to a second end of the shaft 1032a, a socket 1036 adapted to receive the pivot means 1034, a means for positioning 1038, e.g., spring, elastic foam (not shown), etc., for biasing the position of the knob 1006 as shown in Figure 10(a), and a base 1040 attached to the socket 1036, wherein the combination of the pivot means 1034, the socket 1036 and the positioning means 1038 allows the knob 1006 to rotate and/or tilt linearly in substantially all directions, and/or move in or out.
The pivot means 1034 and the socket 1036 may also be adapted to position the shaft 1032 substantially perpendicular to the plane of the base 1040 when there is no force being applied to the knob 1006. When the shaft 1032 is substantially perpendicular to the plane of the base 1040, the second target 1030 is substantially equidistant between the third plurality of capacitive sensor elements 1016, and the capacitive values thereof may also be substantially the same if each of the third plurality of capacitive sensor elements 1016 has substantially the same area as the other ones of the third plurality of capacitive sensor elements 1016. When the knob 1006 is tilted in a direction caused by a force, e.g., from a user's finger(s) 110 or palm of the user's hand, the second target 1030 will be biased toward at least one of the third capacitive sensor elements 1016 and away from at least one other of the third capacitive sensor elements 1016 located at substantially 180 degrees opposite thereof. This will cause the capacitance value of the third capacitive sensor element(s) 1016 closer to the second target 1030 to increase and the capacitance value of the other third capacitive sensor element(s) 1016 farther from the second target 1030 to decrease.
The first and second plurality of capacitive sensor elements 1012 and 1014 in combination with the first target 1028 may be used in determining rotation, and/or in/out position(s), of the knob 1006, see Figure 10(c). The first capacitive sensor element 1012 closest to the first target 1028 will have a capacitance value change (increase) different from the other ones of the first plurality of capacitive sensor elements 1012 when the knob 1006 is pushed to an in position and, likewise, the second plurality of capacitive sensor elements 1014 when the knob 1006 is in an out position, whereby the knob 1006 rotational position may thereby be determined. The capacitance values of the first and second capacitive sensors 1012 and 1014, e.g., ratiometric changes in the capacitance values thereof, proximate to the target 1028 may also be used in determining vertical (in/out) position of the knob 1006, as sketched below. Any change in capacitance value(s) of the first, second and/or third capacitive sensor elements 1012, 1014, 1016 may be detected by the digital device 302 (Figure 3), which may thereby determine the rotational location, tilt direction, and in/out position caused by the force(s) applied to the knob 1006.
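A hedged sketch of that ratiometric in/out estimate: comparing the target's coupling to the lower (first) and upper (second) rings gives a position measure that is largely independent of the absolute signal level. The 0.0-to-1.0 normalization is invented for the example.

def axial_position(lower_delta, upper_delta):
    """Ratiometric in/out estimate from the strongest-element capacitance
    deltas of the lower and upper rings: 0.0 = fully in, 1.0 = fully out."""
    total = lower_delta + upper_delta
    if total <= 0:
        return None                      # no target coupling detected
    return upper_delta / total

print(axial_position(80, 20))   # 0.2 -> knob mostly pushed in
print(axial_position(10, 90))   # 0.9 -> knob mostly pulled out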
Referring to Figure 11, depicted are schematic cross section elevational, top and bottom views of a plurality of capacitive touch sensors arranged as a control lever assembly having multiple axes of linear movement and rotation, according to still another specific example embodiment of this disclosure. A control lever assembly, generally represented by the numeral 1100, may comprise a first plurality of capacitive sensor elements 1112 arranged in a circle (Figure 11(c)), a second plurality of capacitive sensor elements 1114 above the first plurality of capacitive sensor elements 1112 and also arranged in a circle (Figure 11(c)), a third plurality of capacitive sensor elements 1116 arranged in a circle (Figure 11(b)), a control lever 1106, e.g., a "joy stick," attached to a first end of a shaft 1132, a first target 1128 attached to the shaft 1132, a second target 1130 above the first target 1128 and also attached to the shaft 1132, a pivot means 1134, e.g., ball, semi-flexible coupling, etc., attached to a second end of the shaft 1132a, a socket 1136 adapted to receive the pivot means 1134, a means for positioning 1138, 1140, e.g., spring, elastic foam, etc., for biasing the position of the control lever 1106 as shown in Figure 11(a), and a base 1140 attached to the socket 1136, wherein the combination of the pivot means 1134, the socket 1136 and the positioning means 1138, 1140 allows the control lever 1106 to rotate and/or tilt linearly in substantially all directions, and/or move in or out.
The pivot means 1134 and the socket 1136 may also be adapted to position the shaft 1132 substantially perpendicular to the plane of the base 1140 when there is no force being applied to the control lever 1106. When the shaft 1132 is substantially perpendicular to the plane of the base 1140, the second target 1130 is substantially equidistant between the third plurality of capacitive sensor elements 1116, and the capacitive values thereof may also be substantially the same if each of the third plurality of capacitive sensor elements 1116 has substantially the same area as the other ones of the third plurality of capacitive sensor elements 1116. When the control lever 1106 is tilted in a direction caused by a force, e.g., from a user's finger(s) 110 or palm of the user's hand, the second target 1130 will be biased toward at least one of the third capacitive sensor elements 1116 and away from at least one other of the third capacitive sensor elements 1116 located at substantially 180 degrees opposite thereof. This will cause the capacitance value of the third capacitive sensor element(s) 1116 closer to the second target 1130 to increase and the capacitance value of the other third capacitive sensor element(s) 1116 farther from the second target 1130 to decrease. The first and second plurality of capacitive sensor elements 1112 and 1114 in combination with the first target 1128 may be used in determining rotation, and/or in/out position(s), of the control lever 1106, see Figure 11(c). The first capacitive sensor element 1112 closest to the first target 1128 will have a capacitance value change (increase) different from the other ones of the first plurality of capacitive sensor elements 1112 when the control lever 1106 is pushed to an in position and, likewise, the second plurality of capacitive sensor elements 1114 when the control lever 1106 is in an out position, whereby the control lever 1106 rotational position may thereby be determined. The capacitance values of the first and second capacitive sensors 1112 and 1114, e.g., ratiometric changes in the capacitance values thereof, proximate to the target 1128 may also be used in determining vertical (in/out) position of the control lever 1106.
Any change in capacitance value(s) of the first, second and/or third capacitive sensor elements 1112, 1114, 1116 may be detected by the digital device 302 (Figure 3), which may thereby determine the rotational location, tilt direction, and in/out position caused by the force(s) applied to the control lever 1106.
Referring to Figure 12, depicted are schematic isometric elevational views of visual displays embedded in top portions of circular knobs, according to specific example embodiments of this disclosure. Light emitting diodes 1242 may be embedded into or on the top portion of the knob and may be arranged in a circular pattern to indicate rotational position of the knob. A visual display 1244, e.g., alpha-numeric LED, LCD, etc., may be embedded into or on the top portion of the knob and provide information to the user.
While embodiments of this disclosure have been depicted, described, and are defined by reference to example embodiments of the disclosure, such references do not imply a limitation on the disclosure, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent art and having the benefit of this disclosure. The depicted and described embodiments of this disclosure are examples only, and are not exhaustive of the scope of the disclosure.
Method and apparatus for assessing coverage of production rules of a programming language by one or more test programs. A set of production rules that define the programming language is input, along with a test program. The production rules that are covered by the test program are determined and coverage of production rules by the test program is reported.
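By way of illustration only, the core of such a tool can be sketched in a few lines of Python: parse each test program with the grammar while recording every production that participates in a successful derivation, then classify each rule's left-hand side as fully, partially, or not covered. The toy grammar, tokenizer, and backtracking parser below are invented for this example and are not taken from the disclosed method.

import re

# Toy grammar (invented for illustration): nonterminal -> list of alternatives.
GRAMMAR = {
    "stmt": [["ID", "=", "expr"], ["print", "expr"]],
    "expr": [["term", "+", "expr"], ["term"]],
    "term": [["ID"], ["NUM"], ["(", "expr", ")"]],
}

def tokenize(src):
    return re.findall(r"[A-Za-z_]\w*|\d+|[=+()]", src)

def kind(tok):
    if tok in ("=", "+", "(", ")", "print"):
        return tok
    return "NUM" if tok.isdigit() else "ID"

def parse(sym, toks, pos):
    """Backtracking recursive descent. On success returns (next_pos, set of
    (nonterminal, alternative-index) productions used); on failure, None."""
    if sym not in GRAMMAR:                       # terminal symbol
        if pos < len(toks) and kind(toks[pos]) == sym:
            return pos + 1, set()
        return None
    for i, alt in enumerate(GRAMMAR[sym]):
        p, used = pos, {(sym, i)}
        for part in alt:
            r = parse(part, toks, p)
            if r is None:
                break
            p, sub = r
            used |= sub
        else:
            return p, used                       # whole alternative derived
    return None

def report(test_programs):
    covered = set()
    for src in test_programs:
        toks = tokenize(src)
        r = parse("stmt", toks, 0)
        if r is not None and r[0] == len(toks):  # count full derivations only
            covered |= r[1]
    for lhs, alts in GRAMMAR.items():
        hits = sum((lhs, i) in covered for i in range(len(alts)))
        level = "fully" if hits == len(alts) else "partially" if hits else "not"
        print(f"{lhs}: {level} covered ({hits}/{len(alts)} alternatives)")

report(["x = 1 + y"])
# stmt: partially covered (1/2 alternatives)
# expr: fully covered (2/2 alternatives)
# term: partially covered (2/3 alternatives)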
What is claimed is:
1. A computer-implemented method for assessing coverage of production rules of a programming language by at least one test program, comprising: inputting a set of production rules that define the programming language; inputting a test program; determining the production rules that are covered by the test program; reporting coverage of the set of production rules by the test program; and annotating reported production rules with data that identify positional information relative to code in the test program derived from the reported production rules.
2. The method of claim 1, further comprising: determining for each production rule one of a plurality of levels of coverage by the test program, wherein the levels of coverage include fully covered, partially covered, and not covered; and reporting in association with each production rule information that indicates the determined level of coverage.
3. The method of claim 2, further comprising color-coding the information that indicates the determined level of coverage of a production rule.
4. The method of claim 1, further comprising: inputting a plurality of test programs; determining the production rules that are covered by the test programs; and reporting collective coverage of the set of production rules by the test programs.
5. The method of claim 4, further comprising: identifying production rules covered by two or more of the plurality of test programs; and reporting each production rule covered by two or more of the plurality of test programs.
6. The method of claim 4, further comprising: determining for each production rule one of a plurality of levels of coverage by the plurality of test programs, wherein the levels of coverage include fully covered, partially covered, and not covered; and reporting in association with each production rule information that indicates the determined level of coverage.
7. The method of claim 6, further comprising color-coding the information that indicates the determined level of coverage of a production rule.
8. An apparatus for assessing coverage of production rules of a programming language by at least one test program, comprising: means for inputting a set of production rules that define the programming language; means for inputting a test program; means for determining the production rules that are covered by the test program; means for reporting coverage of the set of production rules by the test program; and means for annotating reported production rules with data that identify positional information relative to code in the test program derived from the reported production rules.
9. A computer-implemented method for assessing coverage of production rules of a programming language by at least one test program, comprising: inputting a set of production rules that define the programming language; generating a first tree of nodes that represents the set of production rules; inputting a test program; generating a second tree of nodes that represents production rules that derive the test program; comparing nodes of the first tree to nodes of the second tree; displaying the first tree with information associated with the nodes that indicates coverage of the set of production rules by the test program; and annotating nodes in the first tree with data that identify positional information relative to code in the test program derived from the reported production rules.
The method of claim 9, further comprising: determining for each node in the first tree one of a plurality of levels of coverage by the test program, wherein the levels of coverage include fully covered, partially covered, and not covered; and displaying in association with each node in the first tree information that indicates the determined level of coverage.

11. The method of claim 10, further comprising color-coding the information that indicates the determined level of coverage.

12. The method of claim 9, further comprising: inputting a plurality of test programs; generating respective trees of nodes that represent production rules that derive the plurality of test programs; comparing nodes of the first tree to nodes of the plurality of trees; and displaying the first tree with information associated with the nodes that indicates coverage of the set of production rules by the plurality of test programs.

13. The method of claim 12, further comprising: identifying nodes of the first tree that correspond to nodes of two or more of the plurality of trees; and displaying information in association with each node of the first tree covered by two or more of the plurality of test programs.

14. The method of claim 12, further comprising: determining for each node in the first tree one of a plurality of levels of coverage by the plurality of test programs, wherein the levels of coverage include fully covered, partially covered, and not covered; and displaying in association with each node in the first tree information that indicates the determined level of coverage.

15. The method of claim 14, further comprising color-coding the information that indicates the determined level of coverage of a node in the first tree.

16. The method of claim 9, wherein generating the first tree comprises generating for recursive production rules a selected number of levels in the tree.

17. The method of claim 9, wherein nodes in the first tree are interconnected and branches formed, and generating the first tree comprises replicating selected branches in the tree a selected number of times.

18. An apparatus for assessing coverage of production rules of a programming language by one or more test programs, comprising: means for inputting a set of production rules that define the programming language; means for generating a first tree of nodes that represents the set of production rules; means for inputting a test program; means for generating a second tree of nodes that represents production rules that derive the test program; means for comparing nodes of the first tree to nodes of the second tree; means for displaying the first tree with information associated with the nodes that indicates coverage of the set of production rules by the test program; and means for annotating nodes in the first tree with data that identify positional information relative to code in the test program derived from the reported production rules.

19.
An article of manufacture for assessing coverage of production rules of a programming language by a test program, comprising: a computer-readable medium configured with instructions for causing a computer to perform the steps of: inputting a set of production rules that define the programming language; inputting a test program; determining the production rules that are covered by the test program; reporting coverage of the set of production rules by the test program; and annotating reported production rules with data that identify positional information relative to code in the test program derived from the reported production rules.

20. An article of manufacture for assessing coverage of production rules of a programming language by a test program, comprising: a computer-readable medium configured with instructions for causing a computer to perform the steps of: inputting a set of production rules that define the programming language; generating a first tree of nodes that represents the set of production rules; inputting a test program; generating a second tree of nodes that represents production rules that derive the test program; comparing nodes of the first tree to nodes of the second tree; displaying the first tree with information associated with the nodes that indicates coverage of the set of production rules by the test program; and annotating nodes in the first tree with data that identify positional information relative to code in the test program derived from the reported production rules.
FIELD OF THE INVENTION

The present disclosure generally relates to assessing the extent to which one or more test programs exercise features of a programming language.

BACKGROUND

Software and hardware description languages are described by a set of rules or "productions" that define the syntax and grammar of the language. The rules are usually expressed in Backus-Naur form (BNF) and consist of a list of productions which describe the language. Developers of language tools (hardware and/or software) need to ensure the tools they are developing are fully tested under all conditions possible. For example, compilers and synthesizers need to be adequately tested to ensure that code is correctly compiled or synthesized and that the desired features of the language are fully supported.

A number of test programs may be constructed with the goal of covering all of the productions in as much depth as is practical. Generally, test programs can be constructed to test all of the productions, at least to some degree. However, testing all possible different code sequences derivable from a production may not be practical if a very large number of code sequences are derivable from the production. Thus, determining whether a tool has been adequately tested with a given set of test programs may be difficult.

A system and method that address the aforementioned problems, as well as other related problems, are therefore desirable.

SUMMARY OF THE INVENTION

The disclosure describes various methods and apparatus for assessing coverage of production rules of a programming language by one or more test programs. A set of production rules that define the programming language is input, along with a test program. The production rules that are covered by the test program are determined, and coverage of the production rules by the test program is reported.

It will be appreciated that various other embodiments are set forth in the Detailed Description and Claims which follow.

BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects and advantages of the invention will become apparent upon review of the following detailed description and upon reference to the drawings, in which:

FIG. 1 is a block diagram of an arrangement for evaluating language coverage by a test program;

FIG. 2 is a flowchart of an example process for evaluating language coverage by a test program;

FIG. 3 illustrates an example set of productions of a language specification;

FIGS. 4A and 4B together illustrate an expanded tree of the set of productions from FIG. 3;

FIG. 5A illustrates the translation of a first example test program into a tree of productions and subproductions that derive the first program;

FIG. 5B illustrates the translation of a second example test program into a tree of productions and subproductions that derive the second program;

FIGS. 6A and 6B together illustrate an annotated version of the expanded tree of the set of productions, with productions and subproductions highlighted to indicate test coverage by the first and second example test programs; and

FIG. 7 illustrates an annotated version of the example set of productions in which the productions are highlighted to indicate test coverage by the first and second example test programs.

DETAILED DESCRIPTION

FIG. 1 is a block diagram of an arrangement 100 for evaluating language coverage by a test program. Tool 102 is an example tool for processing program code of a particular language.
For example, the tool may be a compiler for a programming language such as C, C++, Pascal and others, a synthesizer for a hardware description language (HDL) such as VHDL, Verilog, Abel and others, or a tool for developing Web pages and Web applications using languages such as HTML, Java and others. In one manner of testing whether the tool meets a set of requirements, a test program 104 is input to the tool, and the tool generates output 106. If the tool is a compiler, the output may be machine language code. The output may then be compared to a set of expected results in order to verify whether the tool correctly processed the test program.

Evaluator 108 analyzes the test program 104 relative to language specification 110 and generates data 112 that indicates which features of the language are exercised by the test program. In one embodiment, the language specification is set forth as a collection of productions and subproductions, for example, in Backus-Naur form. The evaluator generally compares those productions and subproductions that derive the test program to a selected finite set of possible combinations of productions and subproductions.

In one embodiment, the coverage data generated by the evaluator indicates which productions and subproductions derive the test program. In a further embodiment, the productions and subproductions are listed, and each production and subproduction is color-coded to indicate the extent of coverage. For example, a green coding indicates that the production and all its subproductions derive one or more parts of the program. A yellow-coded production indicates that some but not all of the production's subproductions derive one or more parts of the program. A red-coded production indicates that none of the subproductions of a production derive any part of the program.

In another embodiment, the listing of productions and subproductions is annotated with data that reference locations in the test program where the code derived by the production/subproduction is found. The annotations may further indicate the name of the source file in which the test program is found.

In still another embodiment, the evaluator may analyze multiple test programs for test coverage and output data that indicates the collective coverage of the language by the programs. Both color coding and annotating of productions and subproductions may be used to indicate the collective coverage.

FIG. 2 is a flowchart of an example process for evaluating language coverage by a test program in accordance with various embodiments of the invention. The process of FIG. 2 is described in conjunction with processing of the example productions and test programs in FIGS. 3-7. A specification of the language is input (step 202), and a data structure is created to represent the productions and subproductions of the language (step 204). FIG. 3 illustrates an example set of productions 206 of a language specification, and FIGS. 4A and 4B together illustrate an expanded tree 208 generated from the set of productions.

The example productions and subproductions of FIG. 3 are specified in BNF. The productions and subproductions are parsed, and the expanded tree structure of FIGS. 4A and 4B is created. For each top-level production a subtree is created.
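How such a data structure might be built can be sketched in a few lines of Python. This is a minimal illustration only, not the arrangement of FIG. 1: the dictionary-based grammar representation, the function name, and the max_depth and repetitions parameters (stand-ins for the user-specified levels and repetitions discussed below) are all hypothetical.

```python
# Minimal sketch: expand a BNF-like grammar into a tree of nodes.
# The grammar maps each production name to a list of alternatives;
# each alternative is a list of symbols (production names or terminals).

def expand(grammar, symbol, depth=0, max_depth=4, repetitions=2):
    """Build a tree node for `symbol`, expanding subproductions to at
    most `max_depth` levels and replicating each alternative
    `repetitions` times (cf. the user-specified limits below)."""
    node = {"symbol": symbol, "branches": [], "annotations": []}
    if symbol not in grammar or depth >= max_depth:
        return node  # terminal symbol, or recursion limit reached
    for alternative in grammar[symbol]:
        for _ in range(repetitions):
            branch = [expand(grammar, sub, depth + 1, max_depth, repetitions)
                      for sub in alternative]
            node["branches"].append(branch)
    return node

# A toy grammar with a recursive production, loosely in the spirit of
# the ACTION_STATEMENT/IF_STATEMENT example discussed below.
grammar = {
    "STATEMENT": [["ACTION_STATEMENT"], ["IF_STATEMENT"]],
    "IF_STATEMENT": [["if", "EXPR", "then", "STATEMENT"]],
    "ACTION_STATEMENT": [["assign", "EXPR"]],
    "EXPR": [["identifier"]],
}
spec_tree = expand(grammar, "STATEMENT")
```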
For example, from the top-level production 210 in FIG. 3, the subtree that is generated includes a first branch that begins with a first repetition of the subproduction 212 INLINE_WORKAROUND and a second branch that begins with a second repetition of the subproduction 214 INLINE_WORKAROUND. The indentation in FIGS. 4A and 4B represents branches in the tree structure. For example, under production 212 there are branches 216, 218, 220, 222, and 224, which represent the subproductions 226, 228, 230, 232, and 234, respectively.

In one embodiment, the user may specify the number of levels to which the productions and subproductions are expanded and the number of repetitions of productions and subproductions. The numbers of levels and repetitions may be chosen to balance the goal of verifying coverage of productions by statements of a certain complexity against limiting repetitive and redundant verification. For example, a statement in a test program may be recursively derived from the ACTION_STATEMENT and IF_STATEMENT productions 242 and 244 to a certain number of levels, depending on the complexity of the statement. With an example recursion level set at 2, the expansion of the ACTION_STATEMENT production is limited to the branches 246 and 248. The user may, for test-specific reasons, decide that this level of coverage verification is sufficient.

The number of times that productions and subproductions from the language specification are represented in the expanded tree may also be specified by the user. Because multiple statements in a test program may be derived from the same production, the user may want to verify coverage of the production by some number of statements. For example, the grammar 206 permits multiple derivations from the INLINE_WORKAROUND production 252. However, the expanded tree of FIGS. 4A and 4B limits the number of repetitions of the production to 2 (branches 212 and 214). It can be seen that branches 212 and 214 are identical. As the example is further developed, it will be seen that branch 212 will be annotated with the first derivation from a source test program, and branch 214 will be annotated with the second derivation from the source test program. Any further derivations in the source are not tracked.

Returning now to FIG. 2, the expanded tree structure of FIGS. 4A and 4B may be saved in persistent storage (step 262) in order to eliminate having to recreate the expanded tree for subsequent evaluations.

One or more test programs to be evaluated are input (step 264), and tree structures are generated to represent the productions that derive the statements in the program(s) (step 266). It will be appreciated that compiler technology may be adapted to generate a tree representation of the productions that are followed in parsing the program.

FIG. 5A illustrates the translation of a first example test program 302 into a tree 304 of productions and subproductions that derive the first program, and FIG. 5B illustrates the translation of a second example test program 306 into a tree 308 of productions and subproductions that derive the second program. Program 302 includes 5 lines of source code that are numbered 1-5, and program 306 includes 6 lines of source code numbered 1-6. The lines of source code are generally aligned with the branches of the tree structure that are generated to represent the productions that derive the code. For example, lines 1 and 2 of program 302 are aligned with branch 310, line 4 is aligned with branch 312, and line 5 is aligned with branch 314.
From FIG. 5B, line 1 of program 306 is aligned with branch 316, and the code that begins on line 3 is aligned with branch 318.

Returning now to FIG. 2, the representation of the productions that derive the input test program(s) (tree structures 304 and 308) may be saved in persistent storage (step 322) for subsequent analysis, either alone or in combination with other programs. The process then proceeds to compare (step 324) the program productions to the representation of the language specification (e.g., expanded tree structure 208).

In one embodiment, the comparison involves a comparison of the branches in the program productions to the branches in the expanded BNF tree. FIGS. 6A and 6B together illustrate an annotated version 600 of the expanded tree (FIGS. 4A and 4B) of the set of productions, with productions and subproductions highlighted to indicate test coverage by the first and second example test programs. The annotations and highlighting are further used to illustrate the comparison of the program production derivations to the expanded BNF tree.

For example, the production=INLINE_WORKAROUND in branch 310 in the program structure 304 (FIG. 5A) matches the INLINE_WORKAROUND production 212 (FIG. 4A). The matching tree branches are both highlighted and annotated to inform the user of the coverage. For example, blocks 602, 604, 606, and 608 are drawn to illustrate which branches in the tree structure may be highlighted to indicate a match of the program tree branch 310 (FIG. 5A). Blocks 602 and 610 include the productions that derive the statement on line 1 of input test program 306 (FIG. 5B). Blocks 612, 614, 616, 618, 620, and 622 (FIG. 6B) include the productions that derive the remainder of programs 302 and 306. The highlighting may consist of displaying the text in the block with background and/or foreground colors (e.g., black on yellow) that are different from the colors used to display non-matching branches in the tree structure (e.g., black on white).

The branches in tree structure 600 that match the program tree structure 304 are annotated to indicate the file name of the input test program and the line number and line-relative character offset of the statements in the input test program derived from the productions listed in the tree structure. For example, the branch production=INLINE_FILE in tree structure 600 is annotated with [test1.src 1,0][test2.src 1,0], which indicates that this production derives statements in both input test program 302 (file name test1.src) and test program 306 (file name test2.src), and that the statements both begin on line 1, character offset 0.

From the example it may be observed that limiting the expansion of the input productions and subproductions to 2 repetitions (INLINE_WORKAROUND repeated twice) means that complete derivations of some of the code from input test program 302 are not specified in expanded tree structure 600. For example, derivation of the statement on line 5 of test program 302 begins with the top-level production INLINE_WORKAROUND. However, the two branches 632 and 634 are annotated as deriving the statements on lines 1 and 4 of test program 302, and branch 622 terminates expansion of the tree structure at two repetitions. Thus, the additional productions and subproductions that derive the statement on line 5 are not present in the expanded tree 600.

Returning now to FIG. 2, the process reports to the user the coverage of the productions and subproductions by the input test programs (step 652).
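The branch comparison and annotation just described might be sketched as follows, reusing the node shape from the earlier sketch. The helper names, the lockstep matching strategy, and the annotation format are hypothetical simplifications of what an evaluator such as evaluator 108 might do.

```python
def branch_matches(spec_branch, prog_branch):
    """True if two branches pair up node-for-node by symbol."""
    return (len(spec_branch) == len(prog_branch) and
            all(s["symbol"] == p["symbol"]
                for s, p in zip(spec_branch, prog_branch)))

def match_and_annotate(spec_node, prog_node, source_file):
    """Annotate a specification node matched by a program-derivation
    node with the file name, line number, and character offset of the
    derived statement, e.g. "[test1.src 1,0]". Each program branch
    consumes at most one not-yet-annotated repetition in the
    specification tree; further derivations are not tracked, mirroring
    the two INLINE_WORKAROUND repetitions (branches 212 and 214)."""
    line, offset = prog_node.get("position", (0, 0))
    spec_node["annotations"].append(f"[{source_file} {line},{offset}]")
    for prog_branch in prog_node["branches"]:
        for spec_branch in spec_node["branches"]:
            if branch_matches(spec_branch, prog_branch) and not any(
                    s["annotations"] for s in spec_branch):
                for s, p in zip(spec_branch, prog_branch):
                    match_and_annotate(s, p, source_file)
                break
```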
In one embodiment, the coverage is reported by way of a color-coded and annotated listing of the input set of productions and subproductions. The color coding indicates the level of coverage of the various productions and subproductions, and the annotations indicate the file names, line numbers, and character offsets of statements derived by the productions.

FIG. 7 illustrates an example version 700 of the example set of productions in which the productions are highlighted and annotated to indicate test coverage by the test programs 302 and 306. In an example color-coded highlighting of productions, yellow may be used to indicate productions that are partially covered, green to indicate full coverage, and red to indicate no coverage. Blocks with different line characteristics are used to represent the different colors in FIG. 7. Blocks with dashed lines (702 and 704) represent yellow highlighting or partial coverage, blocks with dotted lines (706, 708, 710, 712, 714, and 716) represent green highlighting or full coverage, and blocks with dash-dot lines (718 and 720) represent red highlighting or no coverage.

The coverage of the productions by the test program may be determined from the annotated expanded tree structure, for example, structure 600. If, collectively across all repetitions of a production in the expanded tree structure, all possible derivations of the production are annotated with a source statement, then the production is completely covered. If at least one of the subproductions of a production is not annotated under any of the repetitions of the production, and at least one other of the subproductions under the production is annotated, then the production is partially covered. If no repetition of a particular production or subproduction is annotated, then the production/subproduction is not covered.
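These three rules might be sketched as follows, again over the annotated node shape from the earlier sketches. The level names and the subtree-based approximation of "all possible derivations" are hypothetical.

```python
def walk(node):
    """Yield a node and every node beneath it."""
    yield node
    for branch in node["branches"]:
        for child in branch:
            yield from walk(child)

def coverage_level(node):
    """Classify a production node per the rules above: "full" if every
    node in its subtree carries at least one annotation, "none" if no
    node does, and "partial" otherwise."""
    annotated = [bool(n["annotations"]) for n in walk(node)]
    if all(annotated):
        return "full"     # e.g., highlighted green
    if not any(annotated):
        return "none"     # e.g., highlighted red
    return "partial"      # e.g., highlighted yellow
```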
It will be appreciated that the various embodiments of the invention allow a tester to reduce the time expended in testing a tool by minimizing the number of test cases needed to achieve a desired level of coverage. For example, duplicate test cases may be identified and removed, thereby eliminating extra time spent running duplicative tests. Duplicate tests may be identified by saving the coverage information as reported to the user (step 652) and comparing coverage information between test cases. The difference in coverage between test cases may be reported to the user to assist in deciding whether a test case should be retained.

Those skilled in the art will appreciate that various alternative computing arrangements would be suitable for hosting the processes of the different embodiments of the present invention. In addition, the processes may be provided via a variety of computer-readable media or delivery channels, such as magnetic or optical disks or tapes, electronic storage devices, or as application services over a network.

The present invention is believed to be applicable to a variety of systems for analyzing the effectiveness of test programs directed at language-processing tools. Other aspects and embodiments of the present invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and illustrated embodiments be considered as examples only, with a true scope and spirit of the invention being indicated by the following claims.

Techniques for secure remote debugging of SoCs are described. The SoC includes an intellectual property (IP) block, a microcontroller, and a fabric coupled to the IP block and the microcontroller. The IP block transmits, via the fabric, information regarding events within the IP block to the microcontroller. The microcontroller executes firmware including a network stack and a remote debugger program. Using the firmware, the microcontroller provides the event information to a device external to the SoC.
CLAIMS

What is claimed is:

1. A system on a chip (SoC), comprising: an intellectual property (IP) block to produce an event; a microcontroller to execute firmware comprising a network stack and a remote debugger program; and a fabric coupled to the microcontroller and the IP block; wherein the IP block is to transmit, over the fabric, information about the produced event to the microcontroller; and wherein the microcontroller is to use the network stack and the remote debugger program to provide the information about the produced event to a device external to the SoC.

2. The SoC of claim 1, wherein the IP block is a processor.

3. The SoC of claim 1, wherein the fabric is a sideband fabric.

4. The SoC of claim 1, wherein the IP block comprises: a virtual test access port (vTAP); and a functional pipeline comprising: a hardware processing module; and a register.

5. The SoC of claim 4, wherein the vTAP comprises: a slave interface coupled to the fabric; a master interface coupled to the fabric and to the functional pipeline; and a decoder coupled to the slave interface and to the functional pipeline; wherein the slave interface is to: receive a command from the microcontroller via the fabric; and transfer the command to the decoder; wherein the decoder is to: decode the command; and transfer the decoded command to the functional pipeline; wherein the functional pipeline is to: execute the decoded command using the hardware processing module; and update the register in response to the execution of the decoded command; and wherein the master interface is to: read contents of the register; generate, based on the contents of the register, a response to the received command; and transmit the generated response to the microcontroller via the fabric.

6. The SoC of claim 5, wherein the microcontroller is to receive, via the remote debugger program and the network stack, the command from the device external to the SoC.

7. The SoC of claim 6, further comprising: a non-volatile memory, wherein the microcontroller is to record into the non-volatile memory: the command received from the external device; and the generated response corresponding to the received command.

8. The SoC of claim 1, further comprising: a plurality of field-programmable fuses (FPFs), wherein the microcontroller is to destroy a respective FPF in the plurality of FPFs in response to a corresponding command received by the microcontroller from the device external to the SoC; wherein a respective FPF in the plurality of FPFs corresponds to a respective computing service offered by the SoC; and wherein the SoC is to provide the respective computing service if the respective FPF is not destroyed.

9. The SoC of claim 8, wherein a subset of FPFs in the plurality of FPFs corresponds to a computing service offered by the SoC, wherein the subset of FPFs has a cardinality of operational fuses, and wherein the cardinality has a parity; and wherein the SoC is to provide the computing service only if the parity of the cardinality of the FPF subset is odd.

10. The SoC of claim 1, further comprising: a cryptographic key; wherein the microcontroller is to use the cryptographic key to authenticate the firmware prior to execution of the firmware.

11.
A method for remote debugging a system on a chip (SoC), the method comprising: producing, by an intellectual property (IP) block, an event; executing, by a microcontroller, a network stack and a remote debugging program; transmitting, by the IP block to the microcontroller, information about the produced event, the transmitting performed over a fabric connecting the IP block and the microcontroller; and providing, by the microcontroller using the network stack and the remote debugging program, the information about the produced event to a device external to the SoC.

12. The method of claim 11, wherein the IP block is a processor.

13. The method of claim 11, wherein the fabric is a sideband fabric.

14. The method of claim 11, wherein the IP block comprises: a virtual test access port (vTAP); and a functional pipeline comprising: a hardware processing module; and a register.

15. The method of claim 14, wherein the vTAP comprises: a slave interface coupled to the fabric; a master interface coupled to the fabric and to the functional pipeline; and a decoder coupled to the slave interface and to the functional pipeline.

16. The method of claim 15, further comprising: receiving, by the slave interface, a command from the microcontroller via the fabric; transferring, by the slave interface, the command to the decoder; decoding, by the decoder, the command; transferring, by the decoder, the decoded command to the functional pipeline; executing, by the functional pipeline, the decoded command using the hardware processing module; updating, by the functional pipeline, the register in response to the execution of the decoded command; reading, by the master interface, contents of the register; generating, by the master interface based on the contents of the register, a response to the received command; and transmitting, by the master interface, the generated response to the microcontroller via the fabric.

17. The method of claim 16, wherein the microcontroller is to receive, via the remote debugger program and the network stack, the command from the device external to the SoC.

18. The method of claim 17, further comprising: recording, by the microcontroller into a non-volatile memory: the command received from the external device; and the generated response corresponding to the received command.

19. The method of claim 11, wherein the SoC comprises: a plurality of field-programmable fuses (FPFs); wherein a respective FPF in the plurality of FPFs corresponds to a respective computing service offered by the SoC.

20. The method of claim 11, further comprising: receiving, by the microcontroller from the device external to the SoC, a respective command to destroy a respective FPF in the plurality of FPFs; and destroying, by the microcontroller, the respective FPF in the plurality of FPFs.

21. The method of claim 20, wherein a subset of FPFs in the plurality of FPFs corresponds to a computing service offered by the SoC, the subset of FPFs having a cardinality of operational fuses, the cardinality having a parity.

22. The method of claim 21, further comprising: providing, by the SoC, the computing service only if the parity of the cardinality of the FPF subset is odd.

23. The method of claim 11, further comprising: authenticating, by the microcontroller using a cryptographic key, the firmware prior to executing the firmware.

24. At least one machine-readable medium including instructions which, when executed by a machine, cause the machine to perform operations of any of the methods of claims 11-23.

25.
An apparatus comprising means for performing any of the methods of claims 11-23.
[0001] This application claims priority to U.S. Patent Application Serial No. 14/977,998, filed December 22, 2015, the content of which is hereby incorporated by reference in its entirety.

TECHNICAL FIELD

[0002] The present disclosure relates generally to system-on-a-chip (SoC) implementations, and specifically to secure remote debugging of SoCs.

BACKGROUND

[0003] An SoC is an integrated circuit (IC) that includes a number of computer components on a single chip substrate. An SoC may include any number of component blocks (e.g., intellectual property blocks or IP blocks) that perform a function, such as graphics processing, memory management, general or special-purpose processing, etc. The SoC may also include a fabric to connect the various IP blocks with each other (e.g., intra-chip communication) or with components external to the SoC (e.g., inter-chip communications) via an interconnect (e.g., bus).

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments or examples discussed in the present document.

[0001] FIG. 1 illustrates an example SoC that enables secure remote debugging, according to an embodiment.

[0005] FIG. 2 illustrates an example IP block within an example SoC that enables secure remote debugging, according to an embodiment.

[0006] FIG. 3 is a flowchart illustrating operations of an example SoC that enables secure remote debugging, according to an embodiment.

[0007] FIG. 4 is a block diagram illustrating an example of a machine, upon which any one or more embodiments may be implemented.

[0008] The present disclosure describes methods, systems, and computer program products that individually facilitate secure remote debugging of SoCs.

[0009] With the emerging dominance of SoCs in the market, SoC platforms are becoming sophisticated and comprehensive platforms comprising a large number of IP blocks, ranging from video/audio/image processing, to a myriad of sensors, to low-level general-purpose processors and input-output interfaces. Fixed-function and programmable IP blocks enable manufacturers to provide differentiation in the marketplace while reducing manufacturing costs.

[0010] Unlike general processors, no two SoCs are the same; each SoC has unique thermal properties, fuse controller properties, etc. An SoC vendor/manufacturer may need to debug an SoC that has already shipped to the customer, such as an original equipment manufacturer (OEM). Some SoCs have security restrictions to prevent certain debugging procedures from being executed by anyone other than the SoC vendor.

[0011] Until an SoC has completely booted, the only way to debug the SoC is by using special equipment, such as JTAG equipment, to access the Test Access Port (TAP) through the JTAG registers of the SoC. Joint Test Action Group (JTAG) refers to the IEEE 1149.1 standard, Standard Test Access Port and Boundary-Scan Architecture, for test access ports used for testing printed circuit boards using boundary scans. However, debugging using special equipment requires not only the special equipment but also an engineer hired by the SoC vendor to have physical access to the SoC; this may become extremely expensive.
SoC vendors spend millions of dollars each year debugging SoCs that are already "in the field" (e.g., no longer under the physical control of the SoC vendor).

[0012] Furthermore, to remotely debug SoCs that are currently available, the main processor of the SoC itself must execute debugging software; if the SoC platform has a problem and has failed to boot, remote debugging of the SoC platform is not possible. If the SoC platform successfully boots but is infected by malware, which suppresses notification of events to anti-virus or debugging software, debugging using software executing on the main processor may not uncover the suppressed events.

[0013] In some embodiments disclosed herein, an SoC uses a microcontroller to allow a user (e.g., a test engineer hired by the SoC vendor) to log into the SoC remotely, access the TAP of various IP blocks in the SoC, and debug the SoC using a virtual JTAG, even when the operating system of the SoC does not boot, or if malware executing on the main processor is suppressing anti-virus software. The microcontroller receives and records events (e.g., exceptions, traps, faults, etc.) from the main processor of the SoC via a hardware fabric that is internal to the SoC, thus enabling remote debugging/monitoring of the SoC. Software executing in an IP block cannot disable the physical connection (e.g., the fabric) between the IP block and the microcontroller, nor may software transmit data over this physical connection; thus, even if the SoC has been infected by malware, the malware will not be able to prevent the microcontroller from receiving events from the main processor or any other IP block within the SoC.

[0014] The microcontroller may also receive and record events from other IP blocks within the SoC, and may also enable remote debugging of individual IP blocks within the SoC. In some embodiments, the microcontroller has a network stack, allowing the microcontroller to (1) send events to a (remote) network entity that may (remotely) monitor the SoC, or (2) respond to debugging commands from a (remote) network entity.

[0015] In some embodiments, the microcontroller also allows the platform functionality to be changed remotely by burning one or more Field Programmable Fuses (FPFs) within the SoC. Functionality may be added to or removed from the SoC by burning one or more fuses within the SoC. For example, each feature/service may have a corresponding fuse; when the microcontroller firmware boots, the firmware reads the fuses and exposes only those features/services whose corresponding fuse has NOT been blown. The terms "burning," "blowing out," and "self-destruct" indicate either a force external to a fuse causing the fuse to be destroyed or the fuse causing itself to be destroyed.

[0016] Once a fuse is blown, that fuse cannot be restored. However, a bijection (i.e., a one-to-one correspondence) between a fuse and a feature/service is not the only option; the SoC may be manufactured in various fuse-feature configurations. For example, each feature/service may be associated with a corresponding set of fuses. A fuse set has a quantity (e.g., cardinality) of operational (e.g., not destroyed) fuses in the set, and the quantity has a parity (e.g., an "even" parity if the quantity is even, or an "odd" parity if the quantity is odd).
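The parity rule described here, and the gating applied in the next paragraph, can be illustrated with a short sketch. It assumes fuse state is readable as a sequence of booleans (True meaning the fuse is still operational); the function name and the odd-enables convention are illustrative, not a description of actual fuse-controller firmware.

```python
def feature_enabled(fuse_set):
    """Return True if the feature backed by `fuse_set` should be
    exposed, under the convention that an odd number of operational
    (not destroyed) fuses enables the feature; the firmware could
    equally adopt the opposite convention."""
    operational = sum(1 for fuse_is_operational in fuse_set
                      if fuse_is_operational)
    return operational % 2 == 1

# A feature backed by three fuses, one already blown: two operational
# fuses means even parity, so the feature is disabled.
print(feature_enabled([True, False, True]))   # False
# Blowing one more fuse toggles the feature back on: odd parity.
print(feature_enabled([True, False, False]))  # True
```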
The firmware of the microcontroller may be set in such a way as to enable a feature/service if the parity of the corresponding fuse set is odd and to disable the feature/service if the parity of the corresponding fuse set is even (or vice versa). A finite number of fuses exist within an SoC; thus, switching between enabling and disabling a feature/service may only be done a finite number of times.

[0017] FIG. 1 illustrates an example SoC 102 that enables secure remote debugging, according to an embodiment. The example SoC 102 includes a microcontroller 104, which executes firmware and is the first entity within the SoC 102 to boot when the SoC 102 is booted. The firmware of the microcontroller 104 includes a network stack, which allows the microcontroller 104 to communicate with external devices. In some embodiments, the microcontroller 104 is the interface between the SoC 102 and the outside world. The firmware of the microcontroller 104 also executes remote debugging software 106, which includes an interface that allows a remote user to debug the SoC 102 by issuing commands. In some embodiments, the remote debugging software 106 is loaded from local non-volatile storage.

[0018] Example SoC 102 includes IP blocks IP0 110, IP1 112, ..., IPN 114. An IP block may be of varying types, including general-purpose processors (e.g., in-order or out-of-order cores), fixed function units, graphics processors/engines, I/O controllers, display controllers, media processors, modems, network interface devices, etc. In some embodiments, IP0 110 is a general-purpose processor and the other IP blocks are special-purpose devices.

[0019] The microcontroller 104 and the IP blocks 110, 112, ..., 114 are connected to each other via a "fabric" 108, which is a hardware interconnect within the example SoC 102. A fabric 108 may be a "primary" fabric, which may be used for any "in-band" communication (e.g., memory, input/output (I/O), configuration, in-band messaging, etc.) between IP blocks, or a "sideband" fabric, which may be used for out-of-band communication (e.g., commands, statuses, interrupts, power management, fuse distribution, configuration shadowing, test modes, etc.) between IP blocks. In some embodiments, an SoC 102 has only a primary fabric; in other embodiments, an SoC 102 has both a primary fabric and a sideband fabric. In some embodiments, a sideband fabric is a routed "network" within the SoC 102, where each interface has a unique ID derived from its location in the SoC 102. The unique ID of an interface of an IP block is used to route transmissions to/from the IP block within the sideband fabric 108. Sideband fabrics (also known as "sideband networks") are used in some SoCs from Intel® and ARM®.

[0020] Each IP block 110, 112, 114 within SoC 102 has both a "master" interface and a "slave" interface. The master interface is used by an IP block when that IP block sends a packet on the fabric 108, whereas the slave interface is used by an IP block when that IP block receives a packet on the fabric 108. In some embodiments, each IP block has its own master signals and slave signals. An IP block's master interface sends packets on the fabric and the slave interface exposes registers for other IP blocks to read/write. Together, an IP block's master and slave interfaces are referred to as the IP block's "virtual test access port" (vTAP). A vTAP also includes a decoder, which decodes packets received by the slave interface.
The microcontroller 104 may access an IP block's vTAP via the fabric 108.

[0021] In an embodiment, IP1 112 is a general-purpose processor (e.g., a CPU), which exposes both primary and sideband interfaces to the fabric 108. The IP1 112 microcode injects instructions into the pipeline within IP1 112 and may read the IP1 112 pipeline state (e.g., command registers, memory, instruction registers, program counters, general-purpose registers, exception states, interrupt controller registers, etc.). The IP1 112 exposes the IP1 112 pipeline state to JTAG by entering into "probe mode," which is a mode or state of the IP1 112 in which test instructions or code may be executed to test the IP1 112. After testing of the IP1 112 is complete, the IP1 112 exits probe mode and may resume normal operation.

[0022] The microcontroller 104 receives 118 commands 116 from a remote user; the microcontroller 104 then sends 120 the remote user commands 116 to the vTAP of IP1 112. The microcode of IP1 112 "executes" these remote user commands 116 on the pipeline within IP1 112. After execution of every instruction, the microcode of IP1 112 reads the values in various state registers. The IP1 112 microcode then provides 122 these values, through the vTAP of IP1 112, to the remote debugging software 106. Using this mechanism, the remote debugging software 106 may be used to set break points on the IP1 112, inject interrupts, inject exceptions, monitor model-specific registers, monitor performance counters, set thermal limits, etc.

[0023] The set of commands available between the microcontroller 104 and the IP1 112 is derived from the instruction set architecture (ISA) exposed by the IP1 112 to general-purpose software. For example, in an embodiment where IP1 112 is an Intel® x86 CPU, the "INT" x86 instruction exposed by IP1 112 has a corresponding command for the microcontroller 104 to issue. In some embodiments, the microcontroller 104 may also issue a command to set a breakpoint register in the IP1 112.

[0024] Embodiments are not limited to the remote debugging of general-purpose CPUs; other IP blocks may also be remotely debugged using the disclosed embodiments. For example, the disclosed embodiments may be used with a graphics IP block. A graphics IP block generally executes as follows: (1) the graphics IP block reads commands from memory and executes them; (2) kernel ("ring 0") software sets up the memory state and points the program counter (PC) of the graphics IP block to point to the appropriate location in memory; and (3) the graphics IP block then renders the bitmaps and writes back to memory. The microcontroller 104 firmware sets the graphics state by writing to shared memory between the graphics IP block and the microcontroller 104 and writes to the graphics registers through the sideband fabric 108. The microcontroller 104 may also use the vTAP of the graphics IP block to read the internal graphics registers by issuing vTAP commands through the sideband fabric 108.

[0025] In some embodiments, the microcontroller 104 may issue commands for multiple vTAPs simultaneously; thus, the microcontroller 104 may be used to debug complex scenarios involving multiple IP blocks.
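The command round trip of paragraphs [0020] through [0025] (slave interface in, decoder, functional pipeline, master interface out) might be modeled as in the following sketch. The class name, command encoding, and register names are hypothetical scaffolding; real vTAP traffic consists of hardware packets on the sideband fabric 108, not Python calls.

```python
class VTap:
    """Toy model of the vTAP round trip: the slave interface accepts a
    command packet, a decoder maps it onto the block's functional
    pipeline, and the master interface returns the resulting register
    state for the microcontroller to forward to the remote debugger."""

    def __init__(self):
        self.registers = {"breakpoint": 0, "status": 0}  # pipeline state

    def decode(self, packet):
        # Decoder: turn a raw command packet into a pipeline operation.
        opcode, operand = packet
        return opcode, operand

    def execute(self, opcode, operand):
        # Functional pipeline: execute the decoded command and update
        # register state as a side effect.
        if opcode == "set_breakpoint":
            self.registers["breakpoint"] = operand
        self.registers["status"] += 1  # e.g., commands processed

    def slave_receive(self, packet):
        self.execute(*self.decode(packet))

    def master_respond(self):
        # Master interface: read the registers and build the response.
        return dict(self.registers)

vtap = VTap()
vtap.slave_receive(("set_breakpoint", 0x4000))  # command from microcontroller
print(vtap.master_respond())  # {'breakpoint': 16384, 'status': 1}
```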
[0026] FIG. 2 illustrates an example IP block 202 within an SoC 102 that enables secure remote debugging, according to an embodiment. Example IP block 202 may be any IP block 110, 112, ..., 114 as illustrated in FIG. 1, and may be a general-purpose processor (e.g., in-order or out-of-order core), a fixed function unit, a graphics processor/engine, an I/O controller, a display controller, a media processor, a modem, a network interface device, etc. Example IP block 202 comprises a vTAP 204 and a functional pipeline 212.

[0027] vTAP 204 comprises master interface 206 and slave interface 207. As described above, the slave interface 207 receives 208 packets from the fabric 108, and the master interface 206 transmits 226 packets onto fabric 108. Specifically, the slave interface 207 is used by IP block 202 to receive commands from the remote debugging software 106 over the sideband fabric 108, and the master interface 206 is used by IP block 202 to transmit responses generated by the functional pipeline 212 in response to the commands from the remote debugging software 106.

[0028] vTAP 204 receives 208 a command and, using decoder 210, decodes the command into an instruction/command that the functional pipeline 212 of IP block 202 understands (e.g., an x86 instruction, a read/write from a register, etc.). The decoder 210 then sends the decoded instruction/command to the functional pipeline 212 for processing.

[0029] The logic of IP block 202 is illustrated in FIG. 2 as functional pipeline 212, which is a generic pipeline of modules 214, 216, ..., 218, and does not represent the processing pipeline of any particular IP block 202. The functional pipeline 212 executes the decoded instruction/command and updates the registers and other state 220 as a result of this execution. After the execution of the instruction/command is complete, the vTAP 204 reads 222 the registers and other state 220 and generates a response back to the remote debugging software 106. The master interface 206 transmits 226 the generated response to the microcontroller 104 via the sideband fabric 108.

[0030] FIG. 3 is a flowchart 300 illustrating operations of an example SoC 102 that enables secure remote debugging, according to an embodiment.

[0031] The SoC 102 platform begins by booting up (operation 302).

[0032] The microcontroller 104 takes control of the SoC 102 platform (operation 304).

[0033] The microcontroller 104 authenticates its own firmware (operation 306). In some embodiments, a cryptographic (e.g., RSA) key is printed inside a fuse. Using the cryptographic key, the microcontroller 104 authenticates its own software.

[0034] The microcontroller 104 loads the remote debugging software 106 (operation 308).

[0035] The remote debugging software 106 receives 118 commands 116 from a remote user (e.g., an SoC test engineer) and sends the commands 120 over the fabric 108 to be routed to the target IP block 202 (operation 310).

[0036] The target IP block 202 receives the commands over its vTAP 204, decodes the commands using decoder 210, and executes the decoded commands using functional pipeline 212 (operation 312).

[0037] The remote debugging software 106 reads 122 state from the target IP block 202 and generates a response to the remote user (operation 314).
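Operation 306 can be illustrated with a sketch of a firmware self-check. The hash comparison below is a deliberately simplified stand-in for the RSA-style signature verification described above; the function name and digest scheme are hypothetical.

```python
import hashlib
import hmac

def authenticate_firmware(firmware_image: bytes, fused_digest: bytes) -> bool:
    """Illustrative stand-in for operation 306: recompute a digest of
    the firmware image and compare it, in constant time, against a
    reference value provisioned into a fuse at manufacturing. A real
    microcontroller would verify an RSA (or similar) signature instead."""
    measured = hashlib.sha256(firmware_image).digest()
    return hmac.compare_digest(measured, fused_digest)

# The microcontroller would refuse to load the remote debugging
# software (operation 308) if this check failed.
firmware = b"\x7fELF...firmware bytes..."
reference = hashlib.sha256(firmware).digest()  # what the fuse would hold
assert authenticate_firmware(firmware, reference)
```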
[0038] FIG. 4 is a block diagram illustrating an example of a machine 400, upon which any one or more embodiments may be implemented. In alternative embodiments, the machine 400 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 400 may operate in the capacity of a server machine, a client machine, or both in a client-server network environment. In an example, the machine 400 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 400 may implement or include any portion of the systems, devices, or methods illustrated in FIGs. 1-3, and may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a smart phone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, although only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), other computer cluster configurations, etc.

[0039] Examples, as described herein, may include, or may operate by, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine-readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.

[0040] Accordingly, the term "module" is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.

[0041] Machine (e.g., computer system) 400 may include a hardware processor 402 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 404, and a static memory 406, some or all of which may communicate with each other via an interlink (e.g., bus) 408. The machine 400 may further include a display unit 410, an alphanumeric input device 412 (e.g., a keyboard), and a user interface (UI) navigation device 414 (e.g., a mouse). In an example, the display unit 410, input device 412, and UI navigation device 414 may be a touch screen display.
The machine 400 may additionally include a storage device (e.g., drive unit) 416, a signal generation device 418 (e.g., a speaker), a network interface device 420, and one or more sensors 421, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 400 may include an output controller 428, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

[0042] The storage device 416 may include a machine-readable medium 422 on which is stored one or more sets of data structures or instructions 424 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 424 may also reside, completely or at least partially, within the main memory 404, within static memory 406, or within the hardware processor 402 during execution thereof by the machine 400. In an example, one or any combination of the hardware processor 402, the main memory 404, the static memory 406, or the storage device 416 may constitute machine-readable media.

[0043] Although the machine-readable medium 422 is illustrated as a single medium, the term "machine-readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 424.

[0044] The term "machine-readable medium" may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 400 and that cause the machine 400 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories, and optical and magnetic media. Accordingly, machine-readable media are not transitory propagating signals. Specific examples of machine-readable media may include non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; Random Access Memory (RAM); Solid State Drives (SSD); and CD-ROM and DVD-ROM disks.

[0045] The instructions 424 may further be transmitted or received over a communications network 426 using a transmission medium via the network interface device 420 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 400, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
[0046] Additional Notes & Example Embodiments

[0047] Each of these non-limiting examples can stand on its own, or can be combined in various permutations or combinations with one or more of the other examples.

[0048] Example 1 is a system on a chip (SoC), comprising: an intellectual property (IP) block to produce an event; a microcontroller to execute firmware comprising a network stack and a remote debugger program; and a fabric coupled to the microcontroller and the IP block; wherein the IP block is to transmit, over the fabric, information about the produced event to the microcontroller; and wherein the microcontroller is to use the network stack and the remote debugger program to provide the information about the produced event to a device external to the SoC.

[0049] In Example 2, the subject matter of Example 1 optionally includes, wherein the IP block is a processor.

[0050] In Example 3, the subject matter of any one or more of Examples 1-2 optionally include, wherein the fabric is a sideband fabric.

[0051] In Example 4, the subject matter of any one or more of Examples 1-3 optionally include, wherein the IP block comprises: a virtual test access port (vTAP); and a functional pipeline comprising: a hardware processing module; and a register.

[0052] In Example 5, the subject matter of Example 4 optionally includes, wherein the vTAP comprises: a slave interface coupled to the fabric; a master interface coupled to the fabric and to the functional pipeline; and a decoder coupled to the slave interface and to the functional pipeline; wherein the slave interface is to: receive a command from the microcontroller via the fabric; and transfer the command to the decoder; wherein the decoder is to: decode the command; and transfer the decoded command to the functional pipeline; wherein the functional pipeline is to: execute the decoded command using the hardware processing module; and update the register in response to the execution of the decoded command; and wherein the master interface is to: read contents of the register; generate, based on the contents of the register, a response to the received command; and transmit the generated response to the microcontroller via the fabric.

[0053] In Example 6, the subject matter of Example 5 optionally includes, wherein the microcontroller is to receive, via the remote debugger program and the network stack, the command from the device external to the SoC.

[0054] In Example 7, the subject matter of Example 6 optionally includes: a non-volatile memory, wherein the microcontroller is to record into the non-volatile memory: the command received from the external device; and the generated response corresponding to the received command.

[0055] In Example 8, the subject matter of any one or more of Examples 1-7 optionally include, further comprising: a plurality of field-programmable fuses (FPFs), wherein the microcontroller is to destroy a respective FPF in the plurality of FPFs in response to a corresponding command received by the microcontroller from the device external to the SoC; wherein a respective FPF in the plurality of FPFs corresponds to a respective computing service offered by the SoC; and wherein the SoC is to provide the respective computing service if the respective FPF is not destroyed.

[0056] In Example 9, the subject matter of Example 8 optionally includes, wherein a subset of FPFs in the plurality of FPFs corresponds to a computing service offered by the SoC, wherein the subset of FPFs has a cardinality of operational fuses, and wherein the
cardinality has a parity; and wherein the SoC is to provide the computing service only if the parity of the cardinality of the FPF subset is odd.

[0057] In Example 10, the subject matter of any one or more of Examples 1-9 optionally include, further comprising: a cryptographic key; wherein the microcontroller is to use the cryptographic key to authenticate the firmware prior to execution of the firmware.

[0058] Example 11 is a method for remote debugging a system on a chip (SoC), the method comprising: producing, by an intellectual property (IP) block, an event; executing, by a microcontroller, a network stack and a remote debugging program; transmitting, by the IP block to the microcontroller, information about the produced event, the transmitting performed over a fabric connecting the IP block and the microcontroller; and providing, by the microcontroller using the network stack and the remote debugging program, the information about the produced event to a device external to the SoC.

[0059] In Example 12, the subject matter of Example 11 optionally includes, wherein the IP block is a processor.

[0060] In Example 13, the subject matter of any one or more of Examples 11-12 optionally include, wherein the fabric is a sideband fabric.

[0061] In Example 14, the subject matter of any one or more of Examples 11-13 optionally include, wherein the IP block comprises: a virtual test access port (vTAP); and a functional pipeline comprising: a hardware processing module; and a register.

[0062] In Example 15, the subject matter of Example 14 optionally includes, wherein the vTAP comprises: a slave interface coupled to the fabric; a master interface coupled to the fabric and to the functional pipeline; and a decoder coupled to the slave interface and to the functional pipeline.

[0063] In Example 16, the subject matter of Example 15 optionally includes: receiving, by the slave interface, a command from the microcontroller via the fabric; transferring, by the slave interface, the command to the decoder; decoding, by the decoder, the command; transferring, by the decoder, the decoded command to the functional pipeline; executing, by the functional pipeline, the decoded command using the hardware processing module; updating, by the functional pipeline, the register in response to the execution of the decoded command; reading, by the master interface, contents of the register; generating, by the master interface based on the contents of the register, a response to the received command; and transmitting, by the master interface, the generated response to the microcontroller via the fabric.

[0064] In Example 17, the subject matter of Example 16 optionally includes, wherein the microcontroller is to receive, via the remote debugger program and the network stack, the command from the device external to the SoC.

[0065] In Example 18, the subject matter of Example 17 optionally includes: recording, by the microcontroller into a non-volatile memory: the command received from the external device; and the generated response corresponding to the received command.

[0066] In Example 19, the subject matter of any one or more of Examples 11-18 optionally include, wherein the SoC comprises: a plurality of field-programmable fuses (FPFs); wherein a respective FPF in the plurality of FPFs corresponds to a respective computing service offered by the SoC.
[0067] In Example 20, the subject matter of any one or more of Examples 11-19 optionally include, further comprising: receiving, by the microcontroller from the device external to the SoC, a respective command to destroy a respective FPF in the plurality of FPFs; and destroying, by the microcontroller, the respective FPF in the plurality of FPFs.

[0068] In Example 21, the subject matter of Example 20 optionally includes, wherein a subset of FPFs in the plurality of FPFs corresponds to a computing service offered by the SoC, the subset of FPFs having a cardinality of operational fuses, the cardinality having a parity.

[0069] In Example 22, the subject matter of Example 21 optionally includes: providing, by the SoC, the computing service only if the parity of the cardinality of the FPF subset is odd.

[0070] In Example 23, the subject matter of any one or more of Examples 1-22 optionally include, further comprising: authenticating, by the microcontroller using a cryptographic key, the firmware prior to executing the firmware.

[0071] Example 24 is at least one machine-readable medium including instructions, which when executed by a machine, cause the machine to perform operations of any of the methods of Examples 11-23.

[0072] Example 25 is an apparatus comprising means for performing any of the methods of Examples 11-23.

[0073] Example 26 is an apparatus for implementing remote debugging of a system on a chip (SoC), the apparatus comprising: means for producing, by an intellectual property (IP) block, an event; means for executing, by a microcontroller, a network stack and a remote debugging program; means for transmitting, by the IP block to the microcontroller, information about the produced event, the transmitting performed over a fabric connecting the IP block and the microcontroller; and means for providing, by the microcontroller using the network stack and the remote debugging program, the information about the produced event to a device external to the SoC.

[0074] In Example 27, the subject matter of Example 26 optionally includes, wherein the IP block is a processor.

[0075] In Example 28, the subject matter of any one or more of Examples 26-27 optionally include, wherein the fabric is a sideband fabric.

[0076] In Example 29, the subject matter of any one or more of Examples 26-28 optionally include, wherein the IP block comprises: a virtual test access port (vTAP); and a functional pipeline comprising: a hardware processing module; and a register.

[0077] In Example 30, the subject matter of Example 29 optionally includes, wherein the vTAP comprises: a slave interface coupled to the fabric; a master interface coupled to the fabric and to the functional pipeline; and a decoder coupled to the slave interface and to the functional pipeline.

[0078] In Example 31, the subject matter of Example 30 optionally includes: means for receiving, by the slave interface, a command from the microcontroller via the fabric; means for transferring, by the slave interface, the command to the decoder; means for decoding, by the decoder, the command; means for transferring, by the decoder, the decoded command to the functional pipeline; means for executing, by the functional pipeline, the decoded command using the hardware processing module; means for updating, by the functional pipeline, the register in response to the execution of the decoded command; means for reading, by the master interface, contents of the register; means for generating, by the master interface based on the contents of the register, a response to
the received command; and means for transmitting, by the master interface, the generated response to the microcontroller via the fabric.

[0079] In Example 32, the subject matter of Example 31 optionally includes, wherein the microcontroller is to receive, via the remote debugger program and the network stack, the command from the device external to the SoC.

[0080] In Example 33, the subject matter of Example 32 optionally includes: means for recording, by the microcontroller into a non-volatile memory: the command received from the external device; and the generated response corresponding to the received command.

[0081] In Example 34, the subject matter of any one or more of Examples 26-33 optionally include, wherein the SoC comprises: a plurality of field-programmable fuses (FPFs); wherein a respective FPF in the plurality of FPFs corresponds to a respective computing service offered by the SoC.

[0082] In Example 35, the subject matter of any one or more of Examples 26-34 optionally include: means for receiving, by the microcontroller from the device external to the SoC, a respective command to destroy a respective FPF in the plurality of FPFs; and means for destroying, by the microcontroller, the respective FPF in the plurality of FPFs.

[0083] In Example 36, the subject matter of Example 35 optionally includes, wherein a subset of FPFs in the plurality of FPFs corresponds to a computing service offered by the SoC, the subset of FPFs having a cardinality of operational fuses, the cardinality having a parity.

[0084] In Example 37, the subject matter of Example 36 optionally includes: means for providing, by the SoC, the computing service only if the parity of the cardinality of the FPF subset is odd.

[0085] In Example 38, the subject matter of Example 35 optionally includes: means for authenticating, by the microcontroller using a cryptographic key, the firmware prior to executing the firmware.

[0086] Conventional terms in the fields of computer networking and computer systems have been used herein. The terms are known in the art and are provided only as a non-limiting example for convenience purposes. Accordingly, the interpretation of the corresponding terms in the claims, unless stated otherwise, is not limited to any particular definition.

[0087] Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement that is calculated to achieve the same purpose may be substituted for the specific embodiments shown. Many adaptations will be apparent to those of ordinary skill in the art. Accordingly, this application is intended to cover any adaptations or variations.

[0088] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided.
Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.

[0089] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.

[0090] In this Detailed Description, various features may have been grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments may be combined with each other in various combinations or permutations. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

[0091] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description.
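To make the vTAP command flow recited in Examples 5, 16, and 31 concrete, the following is a minimal sketch in C of one command/response round trip. The examples do not specify command encodings, register widths, or function boundaries, so every type, opcode, and name below is a hypothetical illustration rather than the recited hardware:

#include <stdint.h>

/* Hypothetical command/response encodings; illustrative assumptions only. */
typedef struct { uint8_t opcode; uint32_t operand; } vtap_cmd_t;
typedef struct { uint8_t status; uint32_t payload; } vtap_rsp_t;

static uint32_t pipeline_reg; /* the register updated by the functional pipeline */

/* Decoder: turn the raw command into an action for the pipeline. */
static int decode(const vtap_cmd_t *cmd, uint32_t *action)
{
    switch (cmd->opcode) {
    case 0x01: *action = cmd->operand; return 0; /* e.g., write a value     */
    case 0x02: *action = 0;            return 0; /* e.g., read the register */
    default:   return -1;                        /* unknown opcode          */
    }
}

/* Functional pipeline: execute the decoded command using the hardware
 * processing module (modeled here as a simple register update). */
static void pipeline_execute(uint8_t opcode, uint32_t action)
{
    if (opcode == 0x01)
        pipeline_reg = action; /* update the register */
}

/* Master interface: read the register contents and generate a response. */
static vtap_rsp_t master_respond(void)
{
    return (vtap_rsp_t){ .status = 0x00, .payload = pipeline_reg };
}

/* Slave interface entry point: one command/response round trip, as would
 * be driven by the microcontroller over the sideband fabric. */
vtap_rsp_t vtap_handle(vtap_cmd_t cmd)
{
    uint32_t action;
    if (decode(&cmd, &action) != 0)
        return (vtap_rsp_t){ .status = 0xFF, .payload = 0 }; /* error */
    pipeline_execute(cmd.opcode, action);
    return master_respond();
}

In the recited hardware, the slave and master interfaces would be fabric endpoints rather than function calls; the sketch only models the ordering of the recited steps.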
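Similarly, the fuse-gating rule of Examples 8-9 and 20-22 can be sketched as follows: a computing service backed by a subset of field-programmable fuses is offered only while the count (cardinality) of still-operational fuses in that subset has odd parity. The fuse bank size, indices, and function names are assumptions for illustration only:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical fuse bank; the number of fuses is not recited by the examples. */
#define NUM_FPFS 8U
static bool fpf_operational[NUM_FPFS] = {
    true, true, true, true, true, true, true, true
};

/* Destroy one fuse in response to a command from the external device
 * (Examples 8, 20, and 35); a destroyed fuse stays non-operational. */
void fpf_destroy(size_t index)
{
    if (index < NUM_FPFS)
        fpf_operational[index] = false;
}

/* Examples 9, 21, and 22: the service is offered only while the count
 * (cardinality) of operational fuses in its subset has odd parity. */
bool service_enabled(const size_t *subset, size_t subset_len)
{
    size_t operational = 0;
    for (size_t i = 0; i < subset_len; i++)
        if (subset[i] < NUM_FPFS && fpf_operational[subset[i]])
            operational++;
    return (operational % 2U) == 1U; /* odd count => service enabled */
}

Under this rule a three-fuse subset starts enabled (three is odd) and is revoked when any one fuse is destroyed, while a one-fuse subset behaves as a simple one-way kill switch, consistent with the per-fuse rule of Example 8.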
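Finally, Examples 10, 23, and 38 recite only that the microcontroller authenticates the firmware with a cryptographic key before executing it; no scheme is named. The sketch below therefore uses a deliberately simple keyed checksum as a stand-in, where a real implementation would verify a cryptographic MAC or digital signature:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder keyed checksum (FNV-style mixing) standing in for real MAC
 * or signature verification; it is NOT recited by the examples and is
 * not cryptographically secure. */
static uint32_t keyed_checksum(const uint8_t *key, size_t key_len,
                               const uint8_t *data, size_t data_len)
{
    uint32_t acc = 0x811C9DC5u;
    for (size_t i = 0; i < key_len; i++)
        acc = (acc ^ key[i]) * 16777619u;
    for (size_t i = 0; i < data_len; i++)
        acc = (acc ^ data[i]) * 16777619u;
    return acc;
}

/* Gate execution on authentication (Examples 10, 23, and 38): compute a
 * tag over the firmware image with the key and compare it against the
 * tag provisioned with the image; execute only on a match. */
bool firmware_authentic(const uint8_t *key, size_t key_len,
                        const uint8_t *image, size_t image_len,
                        uint32_t stored_tag)
{
    return keyed_checksum(key, key_len, image, image_len) == stored_tag;
}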
Some embodiments described herein include apparatuses and methods of forming such apparatuses. In one such embodiment, an apparatus may include a substrate, a first die, and a second die coupled to the first die and the substrate. The substrate may include an opening. At least a portion of the first die may occupy at least a portion of the opening in the substrate. Other embodiments including additional apparatuses and methods are described.
CLAIMS

What is claimed is:

1. An apparatus comprising: a substrate including an opening; a first die, at least a portion of the first die occupying at least a portion of the opening; and a second die coupled to the first die and to the substrate.

2. The apparatus of claim 1, further comprising: first electrical connections directly coupled to the first die and directly coupled to the second die.

3. The apparatus of claim 2, wherein the first electrical connections include solder directly contacting a side of the first die and directly contacting a side of the second die.

4. The apparatus of claim 2, wherein the second die includes a first side and a second side opposite from the first side, the first electrical connections are on the first side of the second die, and the second die includes no electrical connections on the second side of the second die.

5. The apparatus of claim 1, further comprising: first electrical connections directly coupled to the first die and directly coupled to the second die; and second electrical connections directly coupled to the second die and directly coupled to the substrate.

6. The apparatus of claim 5, wherein the first and second electrical connections are on a same side of the second die.

7. The apparatus of claim 1, further comprising a heat dissipating device coupled to the first die, wherein the first die includes a first side and a second side opposite the first side, the second die is on the first side of the first die, and the heat dissipating device is on the second side of the first die.

8. The apparatus of claim 7, further comprising a base, the base including an opening, wherein at least a portion of the second die occupies at least a portion of the opening in the base.

9. The apparatus of claim 8, wherein the opening of the base has a length greater than a length of the second die.

10. The apparatus of claim 8, further comprising an additional heat dissipating device, the additional heat dissipating device coupled to the second die through the opening in the base.

11. The apparatus of claim 1, further comprising: a base; electrical connections directly coupled to the second die and directly coupled to the substrate; and additional electrical connections directly coupled to the substrate and directly coupled to the base, wherein the electrical connections and the additional electrical connections are on a same side of the substrate.

12. The apparatus of claim 1, wherein the substrate is part of a ball grid array package.

13. An apparatus comprising: a base; a substrate coupled to the base, the substrate including an opening; a die, at least a portion of the die occupying at least a portion of the opening; and a structure coupled to the die through first electrical connections and coupled to the substrate through second electrical connections.

14. The apparatus of claim 13, wherein the first electrical connections and the second electrical connections are on a same side of the structure.

15. The apparatus of claim 13, further comprising third electrical connections coupled to the substrate and the base, wherein the second electrical connections and the third electrical connections are on a same side of the substrate.

16. The apparatus of claim 13, further comprising a heat dissipating device coupled to the die, wherein the die includes a first side and a second side opposite the first side, the first electrical connections are on the first side of the die, and the heat dissipating device is on the second side of the die.

17.
The apparatus of claim 16, wherein the base includes an opening, wherein at least a portion of the structure occupies at least a portion of the opening in the base.

18. The apparatus of claim 17, further comprising an additional heat dissipating device coupled to the structure through the opening in the base.

19. The apparatus of claim 13, wherein the base includes a printed circuit board.

20. The apparatus of claim 19, wherein the structure includes an interposer.

21. The apparatus of claim 19, wherein the structure includes an additional die.

22. The apparatus of claim 19, wherein at least one of the die and the structure includes a processor.

23. A method comprising: attaching a combination of a first die and a second die to an assembly, such that at least a portion of the first die occupies at least a portion of an opening in a substrate of the assembly, the first die and the second die coupled to each other by first electrical connections, wherein the second die is attached to the substrate through second electrical connections such that the first electrical connections and the second electrical connections are on a same side of the second die.

24. The method of claim 23, wherein attaching the combination of the first die and the second die to the assembly is performed such that the first die is coupled to a heat dissipating device of the assembly through a thermal interface material in the opening in the substrate.

25. The method of claim 24, further comprising: attaching the substrate to a base.

26. The method of claim 25, further comprising: attaching an additional heat dissipating device to the second die through an opening in the base.
STACKED-DIE PACKAGE INCLUDING DIE IN PACKAGE SUBSTRATE

PRIORITY APPLICATION

[0001] This application claims the benefit of priority to U.S. Application Serial No. 13/629,368, filed September 27, 2012, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

[0002] Embodiments pertain to semiconductor device packaging. Some embodiments relate to stacked-die packages.

BACKGROUND

[0003] Many electronic items, such as cellular phones, tablets, and computers, usually have a semiconductor die enclosed in an integrated circuit (IC) package. The die often has circuitry that may form a device, such as a memory device to store information or a processor to process information. The device in the die may generate heat when it operates. Thus, a thermal solution such as a heat sink is typically included in the IC package to cool the die.

[0004] Some conventional IC packages may have multiple dice in order to increase memory storage capacity, processing capability, or both. To save area in some IC packages, the multiple dice may be stacked on top of each other. Stacking dice, however, may increase the overall thickness of the IC package, causing it to be unsuitable for use in some electronic items. Further, providing adequate thermal solutions for some IC packages to cool the stacked dice may pose a challenge.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 shows a cross section of an apparatus in the form of electronic equipment including a package coupled to a base, according to some embodiments described herein.

[0006] FIG. 2 shows dice after they are disassembled from the package of FIG. 1, according to some embodiments described herein.

[0007] FIG. 3 shows a substrate after it is disassembled from the package of FIG. 1, according to some embodiments described herein.

[0008] FIG. 4 shows a base after it is disassembled from the package of FIG. 1, according to some embodiments described herein.

[0009] FIG. 5 shows a cross section of an apparatus in the form of electronic equipment including a heat dissipating device, according to some embodiments described herein.

[0010] FIG. 6 shows a cross section of an apparatus in the form of electronic equipment, which may be a variation of the electronic equipment of FIG. 1, according to some embodiments described herein.

[0011] FIG. 7 shows a cross section of an apparatus in the form of the electronic equipment of FIG. 6 including a heat dissipating device, according to some embodiments described herein.

[0012] FIG. 8 shows a cross section of an apparatus in the form of electronic equipment, which may be a variation of the electronic equipment of FIG. 6, according to some embodiments described herein.

[0013] FIG. 9 shows a cross section of an apparatus in the form of the electronic equipment of FIG. 8 including a heat dissipating device, according to some embodiments described herein.

[0014] FIG. 10 shows a cross section of an apparatus in the form of electronic equipment including a package coupled to a base having no openings, according to some embodiments described herein.

[0015] FIG. 11 shows a base after it is disassembled from the package of FIG. 10, according to some embodiments described herein.

[0016] FIG. 12 shows a cross section of an apparatus in the form of electronic equipment including a package having a structure coupled to a die, according to some embodiments described herein.

[0017] FIG. 13 through FIG. 19 show methods of forming electronic equipment, according to some embodiments described herein.

DETAILED DESCRIPTION

[0018] FIG.
1 shows a cross section of an apparatus in the form of electronic equipment 100 including a package 101 coupled to a base 190, according to some embodiments described herein. Electronic equipment 100 may include or be included in electronic items such as cellular telephones, smart phones, tablets, e-readers (e.g., e-book readers), laptops, desktops, personal computers, servers, personal digital assistants (PDAs), web appliances, set-top boxes (STBs), network routers, network switches, network bridges, and other types of devices or equipment.

[0019] Package 101 in FIG. 1 may include a ball grid array (BGA) type package or another type of package. Base 190 may include a circuit board, such as a printed circuit board (PCB). Package 101 may include a die 110, a die 120, a substrate 130, a heat dissipating device 140, and a thermal interface material (TIM) 145. Die 110 may be stacked over die 120 to form a stacked-die. Die 110 and 120 may be coupled to each other by electrical connections 151. Die 120 may be coupled to substrate 130 by electrical connections 152. Substrate 130 may be coupled to base 190 by electrical connections 153. Package 101 may include material 161 between die 110 and die 120 and material 162 between die 120 and substrate 130.

[0020] Electrical connections 151, 152, and 153 may include electrically conductive materials, such as solder or other electrically conductive materials. For example, electrical connections 151 and 152 may include Sn-Cu solder paste, Sn-Ag solder paste, or Sn-Ag-Cu solder paste (e.g., SAC 305). Electrical connections 153 may include Sn-Ag-Cu solder paste (e.g., SAC 405 or SAC 305). Materials 161 and 162 may include electrically non-conductive materials (e.g., underfill materials) such as epoxy or other electrically non-conductive materials. Heat dissipating device 140 may include metals (e.g., copper) or other materials. TIM 145 may include heat conducting material. Example materials for TIM 145 include polymer TIM, silver-filled epoxy, phase change material, thermal grease, indium solder, and other materials.

[0021] Substrate 130 may include an organic substrate, a ceramic substrate, or another type of substrate. Substrate 130 may include a package substrate (e.g., a substrate in a BGA package). Substrate 130 may include internal conductive paths, such as conductive paths 156 and 157, to allow electrical communication among components, such as among components 198 and 199 (coupled to base 190) and die 110 and die 120.

[0022] Substrate 130 includes a side (e.g., surface) 131 and a side (e.g., surface) 132 opposite from side 131. Substrate 130 may include an opening (e.g., a hole) 133. Conductive paths 156 and 157 may include vias filled with conductive materials (e.g., metals) that may be partially formed in substrate 130. As shown in FIG. 1, substrate 130 may include no conductive paths (e.g., no electrical vias) extending from side 131 to side 132 of substrate 130. Substrate 130 may include no active components (e.g., transistors).

[0023] Each of die 110 and 120 may include a semiconductor (e.g., silicon) die. Each of die 110 and die 120 may include circuitry (not shown in FIG. 1) that may form part of a device (or devices) to perform one or more functions, such as storing information, processing information, or other functions. For example, die 110 may include a memory device (e.g., including transistors, memory cells, and other components) to store information.
The memory device may include a flash memory device, a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, or another type of memory device. In another example, die 120 may include a processor (e.g., including transistors, arithmetic logic units, and other components) that may include a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor may also include an application-specific integrated circuit (ASIC).

[0024] Die 110 and die 120 may include other combinations of devices. For example, die 110 may include a processor and die 120 may include a memory device. In another example, both die 110 and 120 may include either all processors or all memory devices.

[0025] As shown in FIG. 1, die 110 includes a side (e.g., surface) 111 and a side (e.g., surface) 112 opposite from side 111. Side 111 may be an active side of die 110 where electrical connections (e.g., electrical connections 151) are located. Side 112 may be a backside of die 110 where no electrical connections are located. Die 120 includes a side (e.g., surface) 121 and a side (e.g., surface) 122 opposite from side 121. Side 121 may be an active side of die 120 where electrical connections (e.g., 151) are located. Side 122 may be a backside of die 120 where no electrical connections are located.

[0026] Die 110 and 120 may be directly coupled (e.g., directly bonded) to each other in a face-to-face fashion, such that side 111 (e.g., active side) of die 110 and side 121 (e.g., active side) of die 120 may directly face each other. Electrical connections 151 may be directly coupled to die 110 and directly coupled to die 120, such that electrical connections 151 may be located directly between side 111 of die 110 and side 121 of die 120 and may directly contact sides 111 and 121. Electrical connections 152 may be directly coupled to die 120 and directly coupled to substrate 130, such that electrical connections 152 may be located directly between side 121 of die 120 and side 131 of substrate 130 and may directly contact sides 121 and 131. Electrical connections 151 (coupling die 120 to die 110) and electrical connections 152 (coupling die 120 to substrate 130) may be on the same side (e.g., side 121) of die 120. Electrical connections 152 (coupling substrate 130 to die 120) and electrical connections 153 (coupling substrate 130 to base 190) may be on the same side (e.g., side 131) of substrate 130.

[0027] As shown in FIG. 1, at least a portion of die 110 may be located inside (e.g., partially or completely embedded in) opening 133, such that at least a portion of die 110 may occupy at least a portion of opening 133. At least a portion of a die (e.g., die 110) refers to either only a portion of the die (e.g., only a portion of die 110) or the entire die (e.g., the entire die 110).

[0028] Die 120 may include no portions located inside opening 133 (e.g., the entire die 120 is outside opening 133). Therefore, no portions of die 120 may occupy any portion of opening 133.

[0029] Heat dissipating device 140 may be arranged to dissipate heat from package 101, such as to dissipate heat from die 110 or from both die 110 and die 120. Heat dissipating device 140 may include a heat spreader (e.g., an integrated heat spreader) or another type of thermal solution. Heat dissipating device 140 may be directly coupled to side 112 (e.g., backside) of die 110 by TIM 145.
TIM 145 may enhance heat conduction (e.g., from die 110 to heat dissipating device 140) to further improve (e.g., increase) heat dissipation from die 110.

[0030] Heat dissipating device 140 may also be arranged to serve as a stiffener to improve the structure of package 101 (e.g., improve the structure of substrate 130). For example, as shown in FIG. 1, heat dissipating device 140 may be coupled (e.g., directly coupled) to side 132 of substrate 130. In some situations, such as when substrate 130 includes a thin-core substrate, a coreless substrate, or another relatively thin substrate, heat dissipating device 140 (as arranged in FIG. 1) may prevent (or reduce) warpage that may occur to substrate 130.

[0031] Base 190 includes a side (e.g., surface) 191 and a side (e.g., surface) 192 opposite from side 191. Base 190 may include components (e.g., components 198 and 199) such as capacitors, resistors, transistors, integrated circuit chips, or other electrical components coupled to it or formed thereon. FIG. 1 shows an example where components 198 and 199 are located on only one side (e.g., side 191) of base 190. Components 198 and 199, however, may be located on both sides (e.g., sides 191 and 192) of base 190. Base 190 may include an opening (e.g., a hole) 193.

[0032] Die 110 and die 120 may communicate (e.g., electrically communicate) with each other through electrical connections 151. Electrical connections 151 may carry information (e.g., in the form of electrical signals) communicated between die 110 and die 120. The information may include data information, control information, power and ground, or other information. Die 110 may include no electrically conductive paths (e.g., through-silicon vias (TSVs)) between sides 111 and 112. Thus, electrical communication to and from die 110 (e.g., between die 110 and die 120) may be carried through electrical connections (e.g., electrical connections 151) on only side 111 of die 110.

[0033] Die 120 and substrate 130 may communicate (e.g., electrically communicate) with each other through electrical connections 152. Electrical connections 152 may carry information (e.g., in the form of electrical signals) communicated between die 120 and substrate 130. Die 120 may include no electrically conductive paths (e.g., TSVs) between sides 121 and 122. Thus, electrical communication to and from die 120 (e.g., between die 120 and die 110 and between die 120 and substrate 130) may be carried through electrical connections (e.g., electrical connections 151 and 152) on only side 121 of die 120.

[0034] Die 110 and die 120 may communicate (e.g., electrically communicate) with other components (e.g., components 198 and 199 coupled to base 190) through electrical connections 151, 152, and 153. For example, die 110 and die 120 may communicate with component 198 through one or more paths (e.g., signal paths) that may include electrical connections 151, conductive path 154, electrical connections 152, conductive path 156, electrical connections 153, and conductive path 158. In another example, die 110 and die 120 may communicate with component 199 on base 190 through one or more paths (e.g., signal paths) that may include electrical connections 151, conductive path 155, electrical connections 152, conductive path 157, electrical connections 153, and conductive path 159.

[0035] FIG. 2 shows die 110 and 120 after they are disassembled from package 101 of FIG. 1. Lines 1-1 in FIG. 2 indicate locations of the cross sections of die 110 and 120 in FIG. 1.
As shown in FIG. 2, die 110 may have a size (e.g., total surface area on side 111) less than the size (e.g., total surface area on side 121) of die 120. Die 110 includes a length 114. Die 120 includes a length 124, which may be greater than length 114 of die 110. A portion of electrical connections 151 may be on side 111 of die 110, and another portion of electrical connections 151 may be on side 121 of die 120. A portion of electrical connections 152 may also be on side 121 of die 120.

[0036] FIG. 3 shows substrate 130 after it is disassembled from package 101 of FIG. 1. Line 1-1 in FIG. 3 indicates a location of the cross section of substrate 130 in FIG. 1. Opening 133 of substrate 130 includes a length 134, which may be greater than length 114 (FIG. 2) of die 110. As shown in FIG. 3, opening 133 may be part of a hole in a portion of substrate 130. A portion of electrical connections 152 and a portion of electrical connections 153 may be on side 131 of substrate 130.

[0037] FIG. 4 shows base 190 after it is disassembled from package 101 of FIG. 1. Line 1-1 in FIG. 4 indicates the location of the cross section of base 190 in FIG. 1. Opening 193 of base 190 includes a length 194, which may be greater than length 124 (FIG. 2) of die 120. As shown in FIG. 4, opening 193 may be part of a hole in a portion of base 190. A portion of electrical connections 153 may be on side 191 of base 190.

[0038] As shown in FIG. 1 and FIG. 3, including an opening (e.g., opening 133) in substrate 130 may allow for more options in selecting the structure of die 110, die 120, or both, of package 101. For example, with opening 133 in substrate 130, die 110, die 120, or both may be selected to be either a thin die (e.g., 50 micrometers (µm) or less in thickness) or a thick die (e.g., greater than 50 µm in thickness). Package 101 may allow a thick die to be included in it without impacting the profile (e.g., overall thickness) of package 101 because at least a portion of the die (e.g., die 110) may be inside opening 133 of substrate 130. This may improve (e.g., reduce) the profile of package 101 and may also improve (e.g., reduce) the overall thickness of electronic equipment 100. If a thick die (instead of a thin die) is included in package 101, cost may also be improved (e.g., reduced) because the cost associated with a thick die may generally be lower than the cost associated with a thin die.

[0039] Including an opening (e.g., opening 193) in base 190 (FIG. 1 and FIG. 4) may further improve the profile (e.g., overall thickness) of electronic equipment 100. For example, with opening 193 in base 190, a die (e.g., die 120) of package 101 may also be a thick die without impacting the profile of electronic equipment 100 because at least a portion of the die (e.g., die 120) may be inside opening 193 of base 190.

[0040] Including an opening (e.g., opening 193) in base 190 may also allow for more options in the selection of additional types of thermal solution (besides heat dissipating device 140) for package 101, as described in more detail with reference to FIG. 5.

[0041] FIG. 5 shows a cross section of an apparatus in the form of electronic equipment 500 including a heat dissipating device 540, according to some embodiments described herein. Electronic equipment 500 may include elements similar to or identical to those of electronic equipment 100 (FIG. 1). Thus, for simplicity, the description of similar or identical elements between FIG. 1 and FIG. 5 is not repeated in the description of FIG. 5.
Differences between electronic equipment 100 (FIG. 1) and electronic equipment 500 (FIG. 5) include heat dissipating device 540 and TIM 545 in electronic equipment 500.

[0042] Heat dissipating device 540 may be arranged to dissipate heat from package 101, such as to dissipate heat from die 120 or from both die 110 and die 120. Heat dissipating device 540 may include a heat spreader (e.g., an integrated heat spreader) or another type of thermal solution. As shown in FIG. 5, heat dissipating device 540 may be directly coupled to side 122 of die 120 by a thermal interface material (TIM) 545. TIM 545 may enhance heat conduction (e.g., from die 120 to heat dissipating device 540) to further improve (e.g., increase) heat dissipation from die 120.

[0043] Besides heat dissipating device 140 (e.g., on top of package 101), heat dissipating device 540 (at the bottom of package 101) may further improve thermal solutions for package 101. For example, in some situations, hot spots may occur in die 120 (e.g., at the bottom portion near side 122 of die 120) if heat dissipating device 540 is not included in package 101. Coupling heat dissipating device 540 to die 120 as shown in FIG. 5 may eliminate or reduce such hot spots. This may further improve thermal solutions in package 101.

[0044] FIG. 6 shows a cross section of an apparatus in the form of electronic equipment 600, which may be a variation of electronic equipment 100 of FIG. 1, according to some embodiments described herein. Electronic equipment 600 may include elements similar to or identical to those of electronic equipment 100 (FIG. 1). Thus, for simplicity, the description of similar or identical elements between FIG. 1 and FIG. 6 is not repeated in the description of FIG. 6. Differences between electronic equipment 100 (FIG. 1) and electronic equipment 600 (FIG. 6) include the arrangement of die 120 and opening 193 of base 190. As shown in FIG. 6, die 120 may include no portions located inside opening 193 of base 190 (e.g., the entire die 120 is outside opening 193). Thus, no portions of die 120 may occupy any portion of opening 193 of base 190.

[0045] FIG. 7 shows a cross section of an apparatus in the form of electronic equipment 700 including a heat dissipating device 740, according to some embodiments described herein. Electronic equipment 700 may include elements similar to or identical to those of electronic equipment 600 (FIG. 6). Thus, for simplicity, the description of similar or identical elements between FIG. 6 and FIG. 7 is not repeated in the description of FIG. 7. Differences between electronic equipment 600 (FIG. 6) and electronic equipment 700 (FIG. 7) include the addition of heat dissipating device 740 and TIM 745 in electronic equipment 700. Heat dissipating device 740 may be arranged to dissipate heat from package 101, such as to dissipate heat from die 120 or from both die 110 and die 120.

[0046] FIG. 8 shows a cross section of an apparatus in the form of electronic equipment 800, which may be a variation of electronic equipment 600 of FIG. 6, according to some embodiments described herein. Electronic equipment 800 may include elements similar to or identical to those of electronic equipment 600 (FIG. 6). Thus, for simplicity, the description of similar or identical elements between FIG. 6 and FIG. 8 is not repeated in the description of FIG. 8. Differences between electronic equipment 600 (FIG. 6) and electronic equipment 800 (FIG. 8) include differences between a length 894 of opening 893 of base 890 and length 124 (FIG.
2) of die 120. Length 894 of opening 893 may be less than length 124 of die 120. Thus, as shown in FIG. 8, opening 893 of base 890 may directly face only a portion of side 122 of die 120 (e.g., opening 893 does not face the entire side 122 of die 120). In FIG. 1, by contrast, opening 193 may directly face the entire side 122 of die 120.

[0047] FIG. 9 shows a cross section of an apparatus in the form of electronic equipment 900 including a heat dissipating device 940, according to some embodiments described herein. Electronic equipment 900 may include elements similar to or identical to those of electronic equipment 800 (FIG. 8). Thus, for simplicity, the description of similar or identical elements between FIG. 8 and FIG. 9 is not repeated in the description of FIG. 9. Differences between electronic equipment 800 (FIG. 8) and electronic equipment 900 (FIG. 9) include the addition of heat dissipating device 940 and TIM 945 in electronic equipment 900. Heat dissipating device 940 may be arranged to dissipate heat from package 101, such as to dissipate heat from die 120 or from both die 110 and die 120.

[0048] FIG. 10 shows a cross section of an apparatus in the form of electronic equipment 1000 including a package 101 coupled to a base 1090 having no openings, according to some embodiments described herein. Electronic equipment 1000 may include elements similar to or identical to those of electronic equipment 100 (FIG. 1). Thus, for simplicity, the description of similar or identical elements between FIG. 1 and FIG. 10 is not repeated in the description of FIG. 10. Differences between electronic equipment 100 (FIG. 1) and electronic equipment 1000 (FIG. 10) include differences between base 190 (FIG. 1) and base 1090 (FIG. 10). As shown in FIG. 10, base 1090 may include no openings facing die 120. Without openings in base 1090, die 120 may include a thin die.

[0049] FIG. 11 shows base 1090 of FIG. 10 after it is disassembled from package 101 (FIG. 10). Line 10-10 in FIG. 11 indicates a location of the cross section of base 1090 in FIG. 10. As shown in FIG. 11, base 1090 may include no openings at portion 1196 that faces die 120 (FIG. 10).

[0050] In the above description with respect to FIG. 1 through FIG. 11, each of electronic equipment 100, 500, 600, 700, 800, 900, and 1000 may include a top die (e.g., die 110) coupled to a bottom die (e.g., die 120). However, in some arrangements, the bottom die (e.g., die 120) may be replaced by a structure different from a die (e.g., a structure that does not include a die). For example, in some arrangements, an interposer may replace die 120.

[0051] FIG. 12 shows a cross section of an apparatus in the form of electronic equipment 1200 including a package 101 having a structure 1220 coupled to die 110, according to some embodiments described herein. Electronic equipment 1200 may include elements similar to or identical to those of electronic equipment 100 (FIG. 1). Thus, for simplicity, the description of similar or identical elements between FIG. 1 and FIG. 12 is not repeated in the description of FIG. 12.

[0052] As shown in FIG. 12, structure 1220 includes a side 1221 and a side 1222 opposite from side 1221. Structure 1220 may include an interposer or another type of structure having conductive paths to provide communication between die 110 and other components (e.g., components 198 and 199). Structure 1220 may include components 1225 (e.g., passive components) such as capacitors, inductors, resistors, and other passive components.
Structure 1220 may include no active components, such as transistors. FIG. 12 shows components 1225 being located on side 1222 of structure 1220 as an example. However, some or all of components 1225 may be inside structure 1220. In an alternative arrangement, structure 1220 may be replaced by a die (or alternatively may include a die), such as die 120 described above with reference to FIG. 1 through FIG. 11.

[0053] FIG. 13 through FIG. 19 show methods of forming electronic equipment, according to some embodiments described herein. The electronic equipment formed by the methods described below with reference to FIG. 13 through FIG. 19 may include the electronic equipment (e.g., 100, 500, 600, 700, 800, 900, 1000, and 1200) described above with reference to FIG. 1 through FIG. 12.

[0054] As shown in FIG. 13, method 1305 may include attaching die 1310 to die 1320. Die 1310 and die 1320 may correspond to die 110 and die 120, respectively, of FIG. 1 through FIG. 11. Alternatively, die 1320 in FIG. 13 may be replaced by a structure, such as structure 1220 of FIG. 12. In FIG. 13, die 1310 includes a side 1311 and a side 1312 opposite from side 1311. Sides 1311 and 1312 may include an active side and a backside, respectively, of die 1310. Die 1320 includes a side (e.g., surface) 1321 and a side (e.g., surface) 1322 opposite from side 1321. Sides 1321 and 1322 may include an active side and a backside, respectively, of die 1320. Side 1311 of die 1310 may include electrical connections 1351 (e.g., solder balls, solder bumps, or another type of conductive connection) formed thereon. Although not shown in FIG. 13, side 1321 of die 1320 may include electrical connections (e.g., conductive pads) formed thereon to be bonded to electrical connections 1351 of die 1310. Die 1310 and die 1320 may be attached to each other (e.g., by a flip-chip technique), such that electrical connections 1351 of die 1310 may be bonded to corresponding electrical connections of die 1320 and form a controlled collapse chip connection (C4).

[0055] In FIG. 13, attaching die 1310 to die 1320 in method 1305 may include arranging die 1310 and 1320 in a face-to-face position, such that side 1311 of die 1310 may directly face side 1321 of die 1320. Attaching die 1310 to die 1320 may also include positioning (e.g., aligning) electrical connections 1351 of die 1310 in direct contact with corresponding electrical connections on side 1321 of die 1320. Then, a reflow process (e.g., a reflow soldering process) may be performed to bond electrical connections 1351 of die 1310 to the corresponding electrical connections of die 1320.

[0056] FIG. 14 shows a combination (e.g., stacked-die) including die 1310 and 1320 after they have been attached (e.g., bonded) to each other. Electrical connections 1351 between die 1310 and die 1320 may correspond to electrical connections 151 (e.g., FIG. 1). As shown in FIG. 14, material (e.g., underfill material) 1461 may be formed between die 1310 and die 1320 and around electrical connections 1351.

[0057] FIG. 15 shows a method 1505 of attaching the combination of die 1310 and die 1320 to an assembly 1502, according to some embodiments described herein. The combination of die 1310 and die 1320 of FIG. 14 may be flipped over (as shown in FIG. 15) before attaching to assembly 1502. Assembly 1502 may include components such as a substrate 1530 coupled to a heat dissipating device 1540 and a TIM 1545.
These components may be pre-attached before assembly 1502 is attached to the combination of die 1310 and die 1320. Substrate 1530 of assembly 1502 includes a side (e.g., surface) 1531 and a side (e.g., surface) 1532 opposite from side 1531. Side 1531 may include electrical connections 1552 (e.g., solder balls, solder bumps, or another type of conductive connection) formed thereon. Substrate 1530 may include an opening 1533. Substrate 1530 may correspond to substrate 130 (e.g., FIG. 1). Thus, opening 1533 of substrate 1530 may correspond to opening 133 of substrate 130.

[0058] In FIG. 15, attaching the combination of die 1310 and die 1320 to assembly 1502 in method 1505 may include positioning (e.g., aligning) die 1310 directly over opening 1533 of substrate 1530, such that after the combination of die 1310 and die 1320 is attached to assembly 1502, at least a portion of die 1310 may be located inside opening 1533 of substrate 1530 to occupy at least a portion of opening 1533.

[0059] Attaching the combination of die 1310 and die 1320 to assembly 1502 may also include positioning (e.g., aligning) electrical connections 1552 of substrate 1530 in direct contact with corresponding electrical connections (not shown) on side 1321 of die 1320. Then, a reflow process (e.g., a reflow soldering process) may be performed to bond electrical connections 1552 of substrate 1530 to the corresponding electrical connections on side 1321 of die 1320 to form a connection (e.g., a controlled collapse chip connection) between die 1320 and substrate 1530.

[0060] FIG. 16 shows a package 1601 after the combination of die 1310 and 1320 has been attached (e.g., bonded) to assembly 1502 (FIG. 15). As shown in FIG. 16, material (e.g., underfill material) 1662 may be formed between die 1320 and substrate 1530 and around electrical connections 1552.

[0061] Package 1601 may correspond to package 101 (e.g., FIG. 1) described above with reference to FIG. 1 through FIG. 12. In FIG. 16, electrical connections 1552 between substrate 1530 and die 1320 may correspond to electrical connections 152 (e.g., FIG. 1). As shown in FIG. 16, package 1601 may include electrical connections 1653 formed on side 1531 of substrate 1530. Electrical connections 1653 may be formed after the combination of die 1310 and 1320 has been attached to assembly 1502 (FIG. 15). Electrical connections 1653 may include solder balls or another type of conductive connection. Electrical connections 1653 may enable package 1601 to be electrically coupled to other components (e.g., to a circuit board (e.g., a PCB)) of electronic equipment.

[0062] FIG. 17 shows a method 1705 of attaching package 1601 of FIG. 16 to a base 1790, according to some embodiments described herein. Package 1601 of FIG. 16 may be flipped over (as shown in FIG. 17) before attaching to base 1790 (e.g., by a surface mounting technique). As shown in FIG. 17, base 1790 includes a side (e.g., surface) 1791 and a side (e.g., surface) 1792 opposite from side 1791. Base 1790 may include an opening 1793. Base 1790 may correspond to base 190 (e.g., FIG. 1 and FIG. 4). Thus, opening 1793 of base 1790 may be similar to or identical to opening 193 of base 190.

[0063] In FIG. 17, attaching package 1601 to base 1790 in method 1705 may include positioning (e.g., aligning) die 1320 directly over opening 1793 of base 1790, such that after package 1601 is attached to base 1790, at least a portion of die 1320 may be located inside opening 1793 of base 1790 to occupy at least a portion of opening 1793.
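The alignment steps of paragraphs [0058] and [0063] rely on the fit relationships recited above for FIG. 2 through FIG. 4: a die can occupy an opening only if the opening is at least as long as the die. Writing \(\ell_x\) for the length labeled by reference numeral \(x\) (notation introduced here for compactness, not used in the original text), those relationships are:

\[
\ell_{110} < \ell_{133} \;\;(\text{length } 114 < \text{length } 134),
\qquad
\ell_{120} < \ell_{193} \;\;(\text{length } 124 < \text{length } 194).
\]

By analogy, methods 1505 and 1705 assume the length of opening 1533 exceeds that of die 1310 and the length of opening 1793 exceeds that of die 1320.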
[0064] Attaching package 1601 to base 1790 in method 1705 may also include positioning (e.g., aligning) electrical connections 1653 of substrate 1530 in direct contact with corresponding electrical connections (not shown) on side 1791 of base 1790. Then, a reflow process (e.g., a reflow soldering process) may be performed to bond electrical connections 1653 of substrate 1530 to the corresponding electrical connections on side 1791 of base 1790.

[0065] FIG. 18 shows package 1601 after it has been attached (e.g., bonded) to base 1790. Electrical connections 1653 between substrate 1530 and base 1790 may correspond to electrical connections 153 (e.g., FIG. 1).

[0066] The above description with respect to method 1705 of FIG. 17 and FIG. 18 shows an example where method 1705 may attach package 1601 to base 1790 such that at least a portion of die 1320 may be located inside opening 1793 of base 1790 (FIG. 18). In an alternative method, package 1601 may be attached to base 1790 such that no portions of die 1320 may occupy any portion of opening 1793 (e.g., the entire die 1320 is outside opening 1793). The arrangement of die 1320 and base 1790 (FIG. 17) in such an alternative method may be similar to or identical to the arrangement of die 120 and base 190 shown in FIG. 6. In another alternative method, opening 1793 (FIG. 17) of base 1790 may have a dimension (e.g., a length similar to length 894 in FIG. 8), such that the arrangement of die 1320 and base 1790 (FIG. 17) may be similar to or identical to the arrangement of die 120 and base 890 shown in FIG. 8.

[0067] The above description with respect to method 1705 of FIG. 17 and FIG. 18 also shows an example where method 1705 may use a base (e.g., base 1790) having an opening (e.g., opening 1793). In an alternative method, a base without an opening may be used. In such an alternative method, the arrangement of die 1320 and the base (without openings) may be similar to or identical to the arrangement of die 120 and base 1090 of FIG. 10.

[0068] FIG. 19 shows a method 1905 of attaching a heat dissipating device 1940 to die 1320 of package 1601 of FIG. 18, according to some embodiments described herein. Heat dissipating device 1940 may correspond to heat dissipating device 540 (FIG. 5). In FIG. 19, method 1905 may include attaching a TIM 1945 to die 1320 such that TIM 1945 is between die 1320 and heat dissipating device 1940. TIM 1945 may correspond to TIM 545 (FIG. 5).

[0069] Method 1905 may use a heat dissipating device and a TIM different from those shown in FIG. 19. For example, if the arrangement of die 1320 and base 1790 is similar to or identical to the arrangement of die 120 and base 190 of FIG. 6, then method 1905 may use a heat dissipating device similar to or identical to heat dissipating device 740 of FIG. 7. In another example, if the arrangement of die 1320 and base 1790 in FIG. 19 is similar to or identical to the arrangement of die 120 and base 890 of FIG. 8, then method 1905 may use a heat dissipating device and a TIM similar to or identical to heat dissipating device 940 and TIM 945, respectively, of FIG. 9.

[0070] The above description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims.
[0071] The Abstract is provided to comply with 37 C.F.R. Section 1.72(b) requiring an abstract that will allow the reader to ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to limit or interpret the scope or meaning of the claims. The following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.